
A model for gene deregulation detection using expression data

Abstract

In tumoral cells, gene regulation mechanisms are severely altered. Genes that do not react normally to their regulators' activity can provide explanations for the tumoral behavior, and be characteristic of cancer subtypes. We thus propose a statistical methodology to identify the misregulated genes given a reference network and gene expression data.

Our model is based on a regulatory process in which all genes are allowed to be deregulated. We derive an EM algorithm where the hidden variables correspond to the status (under/over/normally expressed) of the genes and where the E-step is solved thanks to a message passing algorithm. Our procedure provides posterior probabilities of deregulation in a given sample for each gene. We assess the performance of our method by numerical experiments on simulations and on a bladder cancer data set.

Background

Various mechanisms affect gene expression in tumoral cells, including copy number alterations, mutations, and modifications of the regulatory network between genes. A simple strategy to identify genes affected by these phenomena is to perform differential expression analysis. Results can then be extended to the scale of pathways using enrichment analysis [1] or functional class scoring [2]. However, such a strategy is blind to small variations in gene expression, especially as multiple testing correction applies. Moreover, it does not take interdependence between genes into account and can mark an expression change as abnormal when it is actually induced by a change in the regulators' activity. To overcome these drawbacks, an alternative strategy is to identify the affected genes by pinpointing important changes in the gene regulatory network (GRN) of the tumoral cell. Such an approach furthermore corresponds to modelling phenomena that alter regulation, such as mutations in regulatory regions [3].

The first step towards this is to procure a GRN. It can be obtained from curated databases or, in order to obtain tissue- or condition-specific networks, reconstructed from expression data. In the latter case, the inference can rely either on discrete or on continuous models. In the discrete framework, gene expression profiles are discretized into binary or ternary valued variables (under-expressed/normal/over-expressed). The regulation structure is then given by a list of truth tables [4]. In particular, this approach can take coregulation into account, that is, it can require the activity of a whole set of co-activators or co-inhibitors to activate or inhibit the target [5, 6]. In the continuous case, inference can be done in a regression framework, where the expression of each target gene is explained by all its potential regulator genes. An edge is drawn between two genes if the corresponding regression coefficient is significantly different from zero, which can be decided by performing variable selection in the regression model. A popular choice for this task is to rely on sparsity-inducing penalties like the Lasso and its by-products [7, 8]. In particular, some variants account for co-regulation by favoring predefined groups of regulators acting together in a sign-coherent way [9]. Other forms of penalties encourage a predefined hierarchy between the predictors [10], i.e. the regulator genes in the case at hand.

To unravel deregulated genes by means of a GRN, a first possibility is to infer several networks independently (one for each tissue) and to compare them. However, due to the noisy nature of transcriptomic data and the large number of features compared to the sample size, most of the differences found in networks inferred independently may not be linked with underlying biological processes. Methods have therefore been developed to infer several networks jointly, so as to share similarities between the different tissues and penalize the presence of an edge in only one of them. Such methods exist for both time-series [11] and steady-state [12] data.

A second possibility is to assess the adequacy of gene expression in tumoral samples to a reference GRN, in order to exhibit the most striking discrepancies, i.e. the regulations that are not fulfilled by the data. In this perspective, [13] use a heuristic in a Boolean framework to update the regulatory structure by minimizing the discrepancies between the reference GRN and a new data set. A similar approach is described in [14] to predict the discrepancies and the unobserved genes of the network. More methods analyzing the coherence between known signaling pathways and gene expression data sets can be found in the review [15]. Still, they focus on checking the validity of the network rather than on highlighting genes with an abnormal behavior.

At the pathway level rather than the gene level, it is possible to look for sample-specific regulation abnormalities by using SPIA [16]. PARADIGM [17] generalizes SPIA to heterogeneous data (DNA copies, mRNA and protein data). Moreover, it determines an activity score for each gene of a pathway in each sample of the data set, and the use of hidden variables allows this score to be computed even if some of the genes of the pathway are not measured. The method is however not network-wide, in the sense that each gene receives one deregulation score per pathway it belongs to, and pathways are treated independently. Moreover, as the pathways are extracted from curated databases, the regulations taken into account are not tissue-specific.

The aim of this paper is to develop a methodology that provides a network-wide deregulation score for each gene and each sample by taking the whole regulation network into account. For this purpose, we introduce a model based on a regulatory process in which genes are allowed to be deregulated, i.e. not to respond to their regulators as expected. An EM strategy is proposed for parameter inference, where the hidden variables correspond to the status (under/over/normally expressed) of the genes. The E-step is solved thanks to a message passing algorithm. In the end, the procedure provides posterior probabilities of deregulation in a given sample for each target gene. We assess the performance of our method for detecting deregulations on simulated data. We also illustrate its interest on a bladder cancer data set, where we study the deregulations according to two reference GRNs obtained by two state-of-the-art network inference procedures on a consensus expression data set.

Methods

The model

Our model draws inspiration from LICORN [5], a model originally developed for network inference purposes. LICORN considers a regulation structure in which genes are either regulators (transcription factors, TFs) or target genes. The expressions are discretized and each gene g is characterized by a ternary value $S_g \in \{-1, 0, +1\}$ encoding its expression status: under-, normally, or over-expressed. The regulation of each target gene g is governed by a set of co-activators A(g) and co-inhibitors I(g) among the TFs. Those sets are endowed with a "collective status" described by variables $S_g^A$ and $S_g^I$, assuming that regulation works in a cooperative way: the collective state of a set of regulators is over- (resp. under-) expressed if and only if all elements in the set share that status. Finally, the status $S_g$ of the target gene g is deduced from $S_g^A$ and $S_g^I$ following the truth table given in Table 1.

Table 1 LICORN truth table.

In order to detect deregulated target genes given a regulatory network and gene expression profiles, we apply two major modifications to the LICORN model. First, we avoid discretizing the data by considering all the ternary variables introduced so far as hidden random variables. The expression $X_g$ of a gene g is assumed to follow a normal distribution with parameters that depend on the hidden status, i.e., $X_g \mid S_g = s \sim \mathcal{N}(\mu_s, \sigma_s)$. Second, we introduce for each gene an indicator variable $D_g$ for deregulation, such that $D_g = 1$ with probability $\varepsilon$. Renaming the result of the truth table $S_g^R$, the final status of the target is then deduced from the values of $D_g$ and $S_g^R$:

$$S_g = S_g^R \ \text{ if } D_g = 0, \qquad \mathbb{P}\left(S_g = s\right) = \tfrac{1}{2} \ \text{ for all } s \neq S_g^R \ \text{ if } D_g = 1.$$

For completeness, we must specify the distribution of the hidden states $S_g$ for each TF: we assume independent multinomial distributions with parameters $\alpha = (\alpha_-, \alpha_0, \alpha_+)$.

The model is summarized for one target gene in Figure 1. For the sake of conciseness, the vector θ contains all parameters of the model, that is, the means and standard deviations of the Gaussians, the vector α of proportions and the deregulation rate ε. The data set contains n samples, r TFs and t target genes. We denote by Z the n × (r + 5t) matrix of all hidden states and by X the n × (r + t) matrix of all expression variables.

Figure 1

The model for one target gene regulated by two co-inhibitors and three co-activators. The circled variables are hidden. A dashed edge indicates that the distribution of the variable depends on the corresponding parameter.

Note that the dependencies among variables are acyclic, implying that the likelihood can be decomposed into a product:

$$p(X, Z \mid \theta) = \prod p(S_j \mid \alpha) \times \prod p(S_i^A \mid S_j) \times \prod p(S_i^I \mid S_j) \times \prod p(S_i^R \mid S_i^I, S_i^A) \times \prod p(D_i \mid \varepsilon) \times \prod p(S_i \mid S_i^R, D_i) \times \prod p(X_k \mid S_k, \mu, \sigma)$$

For the sake of readability, the indices of the products are omitted in the above formula; it should however be clear from the context whether each product runs over target genes, regulator genes or all of them.
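To make the generative model concrete, here is a minimal Python sketch of how one sample could be drawn under the factorization above. The network, the truth table (standing in for Table 1) and the numerical parameter values are illustrative placeholders, not the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (placeholders, not fitted values)
alpha = {-1: 0.25, 0: 0.5, +1: 0.25}   # prior on TF status
eps = 0.05                              # deregulation rate epsilon
mu = {-1: -1.0, 0: 0.0, +1: 1.0}        # state-dependent means
sigma = {-1: 0.5, 0: 0.5, +1: 0.5}      # state-dependent standard deviations

def collective_state(states):
    """+1 (resp. -1) iff all regulators in the set share that status, else 0."""
    states = list(states)
    if states and states[0] != 0 and all(s == states[0] for s in states):
        return states[0]
    return 0

def sample_one(network, truth_table):
    """network: dict target -> (list of activator TFs, list of inhibitor TFs);
    truth_table: dict (S_A, S_I) -> S_R, i.e. the content of Table 1."""
    tfs = {tf for acts, inhs in network.values() for tf in acts + inhs}
    # hidden status of the TFs, drawn from the multinomial prior alpha
    S = {tf: rng.choice([-1, 0, 1], p=[alpha[-1], alpha[0], alpha[1]]) for tf in tfs}
    X, D = {}, {}
    for g, (acts, inhs) in network.items():
        s_R = truth_table[(collective_state(S[a] for a in acts),
                           collective_state(S[i] for i in inhs))]
        D[g] = rng.random() < eps                       # deregulation indicator
        # deregulated targets take one of the two other states with prob. 1/2
        S[g] = rng.choice([v for v in (-1, 0, 1) if v != s_R]) if D[g] else s_R
        X[g] = rng.normal(mu[S[g]], sigma[S[g]])        # observed expression
    for tf in tfs:                                      # TF expressions
        X[tf] = rng.normal(mu[S[tf]], sigma[S[tf]])
    return X, S, D
```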

Estimation algorithm

As usual with latent variable models, the likelihood is intractable since the number of potential states of the hidden variables grows exponentially with the number of variables. Therefore, we adopt an EM-like strategy [18] by iterating the following steps, starting from an initial guess $\theta^{(0)}$ of the model parameters:

E-step: fix θ and compute the conditional probability distribution of the hidden variables given the observed expression values, $q(Z) = p(Z \mid X, \theta)$;

M-step: fix q and find the θ that maximizes $\sum_Z q(Z) \log p(X, Z \mid \theta)$.
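The overall procedure can be summarized by the following schematic loop; `e_step` and `m_step` are hypothetical callables standing for the belief-propagation E-step and the closed-form M-step detailed below, not the actual implementation distributed with the paper.

```python
def run_em(X, network, theta0, e_step, m_step, n_iter=50):
    """Schematic EM loop for the model above. The two callables are placeholders:
    `e_step` runs belief propagation once per sample and returns the marginals
    q(S_{i,g}) and q(D_{i,g}) needed by the M-step, while `m_step` applies the
    closed-form updates given at the end of the Methods section."""
    theta = theta0
    for _ in range(n_iter):
        q = e_step(X, network, theta)   # marginals of the hidden variables
        theta = m_step(X, q)            # update alpha, epsilon, mu and sigma
    return theta, q
```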

Step E. The first issue in the E-step is the sheer number of potential joint states of the hidden variables of all the genes. Fortunately, as will be shown in the corresponding section, we only need their marginal distributions in the M-step. Still, we need a way to compute these marginals without having to compute the joint distribution first.

To handle this issue, we rely on Belief Propagation [19], a.k.a. the message-passing algorithm, to perform the E-step, since the probability distribution arising from our model is easily represented as a factor graph. Indeed, consider a set of discrete values for all variables $S_g^A$, $S_g^I$, $S_g^R$ and $D_g$. Conditionally on X, the probability for the discrete variables to match the given values is proportional to the product of the following factors:

1. $\alpha_{S_g}$ for each regulator gene $g \in R$;

2. $\varepsilon$ if $D_g = 1$, and $\frac{1-\varepsilon}{2}$ if $D_g = 0$, for each target gene $g \in T$;

3. $\frac{1}{\sigma}\exp\left(-\frac{(X_g - \mu)^2}{2\sigma^2}\right)$ for each gene $g \in G$ (regulator or target), where µ and σ are the mean expression and standard deviation associated with state $S_g$;

4. a factor equal to one if $S_g^A$ correctly represents the collective state of g's activators, and zero otherwise;

5. a factor equal to one if $S_g^I$ correctly represents the collective state of g's inhibitors, and zero otherwise;

6. a factor equal to one if $S_g^R$ is the entry in Table 1 corresponding to $S_g^A$ and $S_g^I$, and zero otherwise;

7. a factor equal to one if either $D_g = 0$ and $S_g = S_g^R$, or $D_g = 1$ and $S_g \neq S_g^R$, and zero otherwise.

This factorization translates into the factor graph depicted in Figure 2 (a graph whose nodes are the variables and the above factors, each factor being connected to the variables it depends on). We use the Sum-Product Belief Propagation algorithm, implemented in the Dimple library [20], to compute approximate marginals of every hidden variable, given the regulation network, the parameter set and the expression values. When multiple samples are given, this can be done separately for each one, since the samples are considered independent.

Figure 2

A partial view of the factor graph. The factor graph corresponding to Figure 1. The rectangles correspond to the factors, and are numbered according to the text. The algorithm iteratively updates the distribution of the circled variables.
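For intuition, the marginal that belief propagation approximates can be computed exactly by brute-force enumeration when a target gene has only a handful of regulators. The sketch below does this for a single target, multiplying the factors (1)-(7) listed above; the truth table (Table 1) and the parameter values are assumed to be supplied by the caller.

```python
import itertools
import numpy as np
from scipy.stats import norm

def target_deregulation_posterior(x_acts, x_inhs, x_target,
                                  truth_table, alpha, eps, mu, sigma):
    """Exact posterior P(D_g = 1 | X) for one target gene, obtained by
    enumerating all hidden states; feasible only for a handful of regulators.
    Belief propagation approximates the same marginals on the full network."""
    states = (-1, 0, 1)

    def collective(ss):
        # factors (4)/(5): +1 or -1 only if all regulators share that status
        return ss[0] if ss and ss[0] != 0 and len(set(ss)) == 1 else 0

    n_a = len(x_acts)
    w_dereg = w_total = 0.0
    for s_regs in itertools.product(states, repeat=n_a + len(x_inhs)):
        # factors (1) and (3) for every regulator
        w_reg = np.prod([alpha[s] * norm.pdf(x, mu[s], sigma[s])
                         for s, x in zip(s_regs, list(x_acts) + list(x_inhs))])
        # factor (6): the truth table gives the expected status of the target
        s_R = truth_table[(collective(s_regs[:n_a]), collective(s_regs[n_a:]))]
        for d in (0, 1):
            for s_g in states:
                if (d == 0 and s_g != s_R) or (d == 1 and s_g == s_R):
                    continue                                    # factor (7) is zero
                w = w_reg * (eps if d else (1 - eps) / 2)       # factor (2)
                w *= norm.pdf(x_target, mu[s_g], sigma[s_g])    # factor (3), target
                w_total += w
                w_dereg += w if d else 0.0
    return w_dereg / w_total
```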

Step M. In this step we keep the probability distribution q fixed and look for the parameters θ that maximize

$$\sum_Z q(Z) \log p(X, Z \mid \theta).$$

Since $p(X, Z \mid \theta)$ is a product of simple factors, its logarithm is the sum of the logarithms of these factors. Also, note that the Boolean factors (4)-(7) can be omitted since they have no effect on the sum: whenever $q(Z) \neq 0$, these factors must be equal to 1, hence their logarithm is 0.

Calling G the set of genes, R ⊂ G the set of regulators and T ⊂ G the set of target genes, we are left to maximize the sum over all samples of

$$\sum_{g \in R} \sum_Z q(Z) \log \alpha_{S_g} \;+\; \sum_{g \in T} \sum_Z q(Z) \left[ D_g \log \varepsilon + (1 - D_g) \log \frac{1 - \varepsilon}{2} \right] \;+\; \sum_{g \in G} \sum_Z q(Z) \left[ -\frac{(X_g - \mu_{S_g})^2}{2\sigma_{S_g}^2} - \log \sigma_{S_g} \right]$$

These three terms depend on separate parameters and can be maximized separately. Moreover, we only require the marginals of the variables $S_g$ and $D_g$ for this task, and not the full distribution q. Denoting by I the set of samples, it is straightforward to show that the above sum is maximized for the following parameters:

$$\alpha_- \propto \sum_{i \in I} \sum_{g \in R} q(S_{i,g} = -1), \quad \alpha_0 \propto \sum_{i \in I} \sum_{g \in R} q(S_{i,g} = 0), \quad \alpha_+ \propto \sum_{i \in I} \sum_{g \in R} q(S_{i,g} = +1),$$
$$\varepsilon \propto \sum_{i \in I} \sum_{g \in T} q(D_{i,g} = 1), \quad (1 - \varepsilon) \propto \sum_{i \in I} \sum_{g \in T} q(D_{i,g} = 0),$$
$$\mu_s = \frac{\sum_i \sum_g q(S_{i,g} = s)\, X_{i,g}}{\sum_i \sum_g q(S_{i,g} = s)}, \qquad \sigma_s^2 = \frac{\sum_i \sum_g q(S_{i,g} = s)\, (X_{i,g} - \mu_s)^2}{\sum_i \sum_g q(S_{i,g} = s)}.$$
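These updates translate directly into a few weighted sums. A possible vectorized sketch, assuming the E-step marginals are stored as arrays (an assumption about the data layout, not the paper's implementation), is the following.

```python
import numpy as np

def m_step(X, q_S, q_D, is_regulator, is_target):
    """Closed-form M-step updates.
    X            : (n_samples, n_genes) expression matrix
    q_S          : (n_samples, n_genes, 3) marginals q(S_{i,g} = s), s in (-1, 0, +1)
    q_D          : (n_samples, n_genes) marginals q(D_{i,g} = 1) (only targets used)
    is_regulator, is_target : boolean masks of length n_genes"""
    # alpha: proportions of TF states, normalized over the three states
    w_alpha = q_S[:, is_regulator, :].sum(axis=(0, 1))
    alpha = w_alpha / w_alpha.sum()
    # epsilon: average posterior deregulation probability over target genes
    eps = q_D[:, is_target].mean()
    # Gaussian parameters: posterior-weighted means and variances over all genes
    w = q_S.sum(axis=(0, 1))                                        # shape (3,)
    mu = np.einsum('igs,ig->s', q_S, X) / w
    var = np.einsum('igs,igs->s', q_S, (X[:, :, None] - mu) ** 2) / w
    return alpha, eps, mu, np.sqrt(var)
```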

Complexity analysis

Step M only involves computing a few sums of size [number of genes] × [number of samples] and is not time-consuming. Step E performs, for each sample, a fixed number of passes of Belief Propagation in the factor graph. Each pass consists in updating every node with information from its neighbors. The complexity of updating a factor grows exponentially with its degree, so it is important to limit the number of variables per factor. This is done by replacing the factors of types (4) and (5) in Figure 2 by tree-like structures made of many factors with 3 variables each.

With this approach the graph has approximately N = 2E + G nodes, where E is the number of regulator-target edges in the regulation network and G the number of genes. A personal computer performs a few million node updates per second, so step E runs in about t seconds when N × [number of passes] × [number of samples] is of the order of t million.
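As an illustration, with made-up network sizes (not those of the paper) and an assumed throughput of two million node updates per second, the running-time estimate can be checked as follows.

```python
# Illustrative numbers only: 10,000 genes, 50,000 regulator-target edges,
# 20 BP passes, 184 samples, ~2e6 node updates per second.
E, G = 50_000, 10_000
N = 2 * E + G                       # approximate number of factor-graph nodes
passes, samples = 20, 184
updates = N * passes * samples
seconds = updates / 2e6             # assumed node-update throughput
print(f"{updates:.2e} updates, about {seconds / 60:.0f} minutes")   # ~4e8, ~3 min
```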

Regulatory network inference from expression data

To apply our methodology to real data, we use two different inference methods.

LICORN. The first one, named hLICORN, corresponds to the LICORN model and is available in the CoRegNet Bioconductor package [6]. In a first step, it efficiently searches the discretized gene expression matrix for sets of co-activators and co-repressors using frequent itemset search techniques, and locally selects combinations of co-repressors and co-activators as candidate subnetworks. In a second step, it determines for each gene the best sets among those candidates by running a regression. hLICORN was shown to be suitable for cooperative regulation detection [5, 6].

Cooperative-Lasso + Stability Selection. The second inference procedure applies in a continuous setup. It consists of two steps: first, a selection step performed with a sparse procedure; second, a resampling step whose purpose is to stabilize the selection, for more robustness in the reconstructed network. Here are some details.

Step 1: selection. For each target gene, a sparse penalized regression method is used to select the set of relevant co-activators and co-inhibitors among all possible transcription factors. When no special structure is assumed in the network, this task can be performed with the Lasso penalty, as successfully applied to network inference in [8]. Here, however, we are looking for sets of regulators that work group-wise, either as co-activators or as co-inhibitors. To favor such a structure, we build on the penalty proposed in [12, 9], which encourages selection of predefined groups of variables sharing the same sign (thus being either co-activators or co-inhibitors). This regularization scheme is known as the "cooperative-Lasso". It was originally designed to work with a set of groups that form a partition over the set of regulators. Here, we extend this method to a structure that defines a hierarchy (or tree) on the set of regulators R. We denote this structure by $\mathcal{H} = \{H_1, \ldots, H_K\}$, with $H_k$ the kth (non-empty) node of the hierarchy.

Technically, the optimization problem solved for selecting the regulators of gene g is the following penalized regression problem:

$$\hat{\beta}^{(g)} = \underset{\beta^{(g)} \in \mathbb{R}^{|R|}}{\arg\min}\ \frac{1}{2} \left\| X_g - \mathbf{X}_R\, \beta^{(g)} \right\|^2 + \lambda \sum_{k=1}^{K} \left( \left\| \big(\beta^{(g)}_{H_k}\big)^+ \right\|_2 + \left\| \big(\beta^{(g)}_{H_k}\big)^- \right\|_2 \right),$$

with $X_g$ the expression profile of gene g and $\mathbf{X}_R$ the expression profiles of the regulators. The parameter λ > 0 tunes the amount of regularization, and thus the number of regulators associated with gene g; $v^+$ and $v^-$ are respectively the positive and the negative parts of a vector v, and $v_{H_k}$ the restriction of v to the elements in node $H_k$ of the hierarchy. Hence, this penalty favors the selection of sign-coherent groups of variables, like $\big(\beta^{(g)}_{H_k}\big)^+$, standing for the estimated co-activators of gene g in node $H_k$ of the hierarchy, or $\big(\beta^{(g)}_{H_k}\big)^-$, the corresponding co-inhibitors.

Step 2: stabilization. We fit a sparse model as described above for each target gene, regressing on the same set of regulators R. The hierarchy that we use is obtained by performing hierarchical clustering with average linkage on a distance based upon the correlation between expression profiles. We use the same λ for each gene, chosen large enough to select at least one set of regulators for every target gene. To select the final edges in the network, we rely on the stability selection procedure of [21], which was successfully applied to the reconstruction of robust regulatory networks in the case of a simple Lasso penalty [7], and is known to be less sensitive than selecting one λ per gene (e.g. by cross-validation). This technique consists in refitting the regression model on many subsamples obtained by randomly drawing n/2 observations from the original data set. We replicate this operation 10,000 times and obtain an estimated selection probability for each edge. We fix the threshold so as to select a number of edges similar to that of LICORN, which corresponds to edges with a selection probability greater than 0.65.
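The resampling scheme can be sketched as follows for one target gene. Since the cooperative-Lasso solver is not part of standard libraries, a plain Lasso (as used in [7]) stands in for it in this illustrative sketch; `lam` plays the role of λ and the 0.65 threshold is applied afterwards to the returned frequencies.

```python
import numpy as np
from sklearn.linear_model import Lasso

def edge_selection_frequency(X_reg, x_target, lam, n_rep=10_000, rng=None):
    """Stability selection for one target gene: refit a sparse regression on
    n/2-subsamples and record how often each regulator is selected.
    A plain Lasso stands in here for the cooperative-Lasso used in the paper."""
    rng = rng or np.random.default_rng(0)
    n, p = X_reg.shape
    counts = np.zeros(p)
    for _ in range(n_rep):
        idx = rng.choice(n, size=n // 2, replace=False)       # random n/2 subsample
        fit = Lasso(alpha=lam, max_iter=5000).fit(X_reg[idx], x_target[idx])
        counts += fit.coef_ != 0                               # selected regulators
    return counts / n_rep       # keep edges with frequency above e.g. 0.65
```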

Results and discussion

Classification performance on simulated data sets

In our experiments, the score $q(D_{i,g} = 1)$ is used to determine whether gene g is deregulated in sample i. Performance is evaluated with Precision-Recall (PR) curves, which are known to be more informative than ROC curves or accuracy [22] for classification problems with very imbalanced data sets.
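Given the scores and the simulated ground truth, a PR curve can be computed, for instance, with scikit-learn; the following sketch assumes both are available as arrays over gene-sample pairs.

```python
from sklearn.metrics import precision_recall_curve, auc

def pr_curve(q_D, D_true):
    """PR curve from the posterior deregulation scores q(D_{i,g}=1) and the
    simulated ground-truth indicators, flattened over all gene-sample pairs."""
    precision, recall, _ = precision_recall_curve(D_true.ravel(), q_D.ravel())
    return precision, recall, auc(recall, precision)
```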

We generate expression data sets according to the model described earlier and feed them to the EM algorithm to evaluate its performance. To study the impact of each parameter, we try several values of this parameter while all others remain fixed to their default value. Ten data sets are generated and processed in each setting, resulting in 10 PR curves. We thus obtain clouds of curves, measuring both the variability for a given parameter set and the influence of the varying parameter.

We unsurprisingly note that σ has a dramatic effect (see Figure 3). As a rule of thumb, to distinguish two states from one another, the associated standard deviations must be smaller than the difference between their mean expressions.

Figure 3

Influence of σ. PR curves for simulations with varying σ, with means $(\mu_-, \mu_0, \mu_+) = (-1, 0, 1)$. Ten simulations are run for each value.

Meanwhile, large values of ε mechanically result in better PR curves: the more deregulated genes there are, the higher the proportion of true positives among all positives (Figure 4).

Figure 4

Influence of ε. PR curves for simulations with varying ε. Ten simulations are run for each value.

On the contrary, all other parameters have little effect on the performance; we thus postpone the associated PR curves to Additional File 1. Those parameters are µ, α, the number of passes in the Belief Propagation algorithm (as long as it is greater than five), the number of genes and the sample size (as long as their product reaches several hundred).

Managing the False Discovery Rate

Consider the pairs (i, g) whose deregulation score $q(D_{i,g} = 1)$ equals s: this score being a posterior probability, the expected proportion of true (respectively false) positives among them is s (respectively 1 − s). Similarly, if K pairs pass a threshold, the expected number of true positives among them is the sum of their scores, denoted by S. The false discovery rate (FDR) may thus be estimated by (K − S)/K. In practice, aiming for a particular FDR, one can start with a threshold of 1 and lower it gradually: as more pairs get selected, the ratio (K − S)/K gradually increases, and one simply stops when it reaches the intended FDR. The concordance between the intended FDR and the actual proportion of false positives is illustrated on simulated data sets in Additional File 1.
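This thresholding rule amounts to sorting the scores in decreasing order and stopping as soon as the estimated FDR exceeds the target; a minimal sketch follows.

```python
import numpy as np

def select_at_fdr(scores, target_fdr=0.1):
    """Select gene-sample pairs while controlling the estimated FDR.
    scores: posterior deregulation probabilities q(D_{i,g}=1), flattened.
    Returns the score threshold and the indices of the selected pairs."""
    order = np.argsort(scores)[::-1]        # decreasing scores
    s_sorted = scores[order]
    K = np.arange(1, len(scores) + 1)
    S = np.cumsum(s_sorted)                 # expected number of true positives
    fdr = (K - S) / K                       # estimated FDR after selecting K pairs
    ok = np.nonzero(fdr <= target_fdr)[0]
    if len(ok) == 0:
        return 1.0, np.array([], dtype=int)
    k = ok[-1]                              # largest K still below the target FDR
    return s_sorted[k], order[:k + 1]
```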

Tests on real data

We applied our method to the bladder cancer data set available in the R package CoRegNet [6]. Expression data from patients with different statuses were pooled to infer gene co-regulatory networks with two independent procedures, namely hLICORN and the hierarchical cooperative-Lasso. The inferred networks reflect the regulation trends over the whole set of 184 samples. Our EM algorithm is then run using the same expression data, but since samples are now treated individually, the results reflect how each sample violates the regulatory rules generally followed by the others.

On real data, the true deregulation status is unknown. Hence, we match our results with Copy Number Alteration (CNA) data collected from the same samples, in order to check that our method correctly identifies deregulated gene-sample pairs. We do not expect CNAs to coincide precisely with failures of the regulation network, so we do not hope to detect exactly those pairs that present a CNA. However, the number of gene copies influences the expression independently of the expression of the TFs [23]. We therefore expect to observe a link between CNAs and gene deregulations.

To this end, we use the CNA data provided by the CoRegNet package, which associate to each gene-sample pair a copy number state: 0 for the diploid state (two copies), 1 for a copy number gain, −1 for a copy number loss, and 2 for a copy number amplification. Figure 5 compares the distribution of the perturbation scores across copy number states by representing, for each copy number class, the empirical cumulative distribution function of the perturbation scores. For each value s of the perturbation score on the abscissa, the ordinate is the proportion of gene-sample pairs with a score of at most s. The fact that the curve corresponding to the diploid state lies above all the other curves indicates that gene-sample pairs carrying a CNA are given a higher perturbation score by our deregulation model than diploid gene-sample pairs. Although the difference seems slight, it is highly significant given the large number of scores, as indicated by the p-values of Student's tests for the pairwise differences between the diploid state and each of the altered states. As expected, the scores of the "amplification" state 2 are also higher than the scores of the "gain" state 1.

Figure 5

Empirical cumulative distribution of the scores, by copy-number status. Student's test is used to compare every altered state with the normal (diploid) state.
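The comparison of Figure 5 can be reproduced along the following lines; the sketch assumes the scores and copy-number states are available as flat arrays over gene-sample pairs and uses Student's t-test, as in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind

def compare_scores_by_cna(scores, cna_state):
    """scores: deregulation scores q(D_{i,g}=1) for all gene-sample pairs (flattened);
    cna_state: matching copy-number states (-1, 0, 1 or 2).
    Plots one empirical cumulative distribution per copy-number class and
    compares each altered class to the diploid one with Student's t-test."""
    scores, cna_state = np.asarray(scores), np.asarray(cna_state)
    grid = np.linspace(0.0, 1.0, 201)
    for state in sorted(set(cna_state.tolist())):
        s = scores[cna_state == state]
        plt.plot(grid, [(s <= t).mean() for t in grid], label=f"CNA state {state}")
    plt.xlabel("deregulation score")
    plt.ylabel("cumulative proportion of gene-sample pairs")
    plt.legend()
    diploid = scores[cna_state == 0]
    for state in (-1, 1, 2):
        if np.any(cna_state == state):
            _, p = ttest_ind(scores[cna_state == state], diploid)
            print(f"state {state} vs diploid: p = {p:.2e}")
```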

Conclusion

In the present article, we develop a statistical model for gene expression based on a hidden regulatory structure. Given a reference GRN, it allows us to determine which genes are misregulated in a sample, meaning that their expression does not match the network given the expression of their regulators. Numerical experiments validate the algorithmic procedure: when applied to bladder cancer data with known CNAs, the deregulation score is higher in samples in which genes have an altered number of copies.

We believe that our methodology will be useful to understand which regulation mechanisms are altered in different cancer subtypes. Indeed, the results of our methodology are sample-specific; characterizing the deregulations which are common to most of the individuals suffering from a given cancer subtype is therefore a promising perspective.

The integration of CNA data into the methodology, as already done in the context of differential expression [24], will also be considered in future work, as it would provide better power for detecting genes whose misregulation is due to a copy number alteration.

Availability of supporting data

The EM algorithm described in this article is available as a Java archive at http://www.math-info.univ-paris5.fr/~ebirmele/index.php?choix=6/

Bladder cancer data and hLicorn are available through the CoRegNet Bioconductor package.

Abbreviations

CNA: Copy Number Alteration

GRN: Gene Regulatory Network

PR curve: Precision-Recall curve

ROC curve: Receiver Operating Characteristic curve

TF: Transcription Factor

References

1. Khatri P, Draghici S, Ostermeier GC, Krawetz SA: Profiling gene expression using onto-express. Genomics. 2002, 79 (2): 266-270.

2. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, et al: Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences. 2005, 102 (43): 15545-15550.

3. Melton C, Reuter JA, Spacek DV, Snyder M: Recurrent somatic mutations in regulatory regions of human cancer genomes. Nature Genetics. 2015, 47: 710-716.

4. Elati M, Rouveirol C: Unsupervised learning for gene regulation network inference from expression data: a review. 2011, John Wiley and Sons, Inc, 955-978. doi:10.1002/9780470892107.ch41

5. Elati M, Neuvial P, Bolotin-Fukuhara M, Barillot E, Radvanyi F, Rouveirol C: LICORN: learning cooperative regulation networks from gene expression data. Bioinformatics. 2007, 23 (18): 2407-2414.

6. Nicolle R, Radvanyi F, Elati M: CoRegNet: reconstruction and integrated analysis of co-regulatory networks. Bioinformatics. 2015, 31 (18): 3066-3068.

7. Haury AC, Mordelet F, Vera-Licona P, Vert JP: TIGRESS: trustful inference of gene regulation using stability selection. BMC Systems Biology. 2012, 6 (1): 145.

8. Meinshausen N, Bühlmann P: High-dimensional graphs and variable selection with the lasso. Ann. Statist. 2006, 34 (3): 1436-1462.

9. Chiquet J, Grandvalet Y, Charbonnier C, et al: Sparsity with sign-coherent groups of variables via the cooperative-lasso. The Annals of Applied Statistics. 2012, 6 (2): 795-830.

10. Jenatton R, Audibert JY, Bach F: Structured variable selection with sparsity-inducing norms. The Journal of Machine Learning Research. 2011, 12: 2777-2824.

11. Kojima K, Imoto S, Yamaguchi R, Fujita A, Yamauchi M, Gotoh N, Miyano S: Identifying regulational alterations in gene regulatory networks by state space representation of vector autoregressive models and variational annealing. BMC Genomics. 2012, 13 Suppl 1: S6.

12. Chiquet J, Grandvalet Y, Ambroise C: Inferring multiple graphical structures. Statistics and Computing. 2011, 21 (4): 537-553.

13. Karlebach G, Shamir R: Constructing logical models of gene regulatory networks by integrating transcription factor-DNA interactions with expression data: an entropy-based approach. J Comput Biol. 2012, 19 (1): 30-41.

14. Guziolowski C, Bourde A, Moreews F, Siegel A: BioQuali Cytoscape plugin: analysing the global consistency of regulatory networks. BMC Genomics. 2009, 10 (1): 244.

15. Samaga R, Klamt S: Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks. Cell Commun Signal. 2013, 11 (1): 43.

16. Tarca AL, Draghici S, Khatri P, Hassan SS, Mittal P, Kim JS, et al: A novel signaling pathway impact analysis. Bioinformatics. 2009, 25 (1): 75-82.

17. Vaske CJ, Benz SC, Sanborn JZ, Earl D, Szeto C, Zhu J, et al: Inference of patient-specific pathway activities from multi-dimensional cancer genomics data using PARADIGM. Bioinformatics. 2010, 26 (12): i237-i245.

18. Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B. 1977, 39 (1): 1-38.

19. Yedidia JS, Freeman WT, Weiss Y: Understanding belief propagation and its generalizations. In: Exploring Artificial Intelligence in the New Millennium. 2003, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 239-269.

20. Hershey S, Bernstein J, Bradley B, Schweitzer A, Stein N, Weber T, Vigoda B: Accelerating inference: towards a full language, compiler and hardware stack. CoRR abs/1212.2991. 2012.

21. Meinshausen N, Bühlmann P: Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2010, 72 (4): 417-473.

22. Davis J, Goadrich M: The relationship between precision-recall and ROC curves. Proceedings of the 23rd International Conference on Machine Learning. 2006, ACM, 233-240.

23. Pollack JR, Sørlie T, Perou CM, Rees CA, Jeffrey SS, Lonning PE, et al: Microarray analysis reveals a major direct role of DNA copy number alteration in the transcriptional program of human breast tumors. Proc Natl Acad Sci U S A. 2002, 99 (20): 12963-12968.

24. Salari K, Tibshirani R, Pollack JR: DR-Integrator: a new analytic tool for integrating DNA copy number and gene expression data. Bioinformatics. 2010, 26 (3): 414-416.

Download references

Acknowledgements

The authors would like to thank François Radvanyi for helpful discussions.

Declarations

This work was partially supported by the CNRS (CREPE, PEPS BMI). Publication charges were funded by a CHIST-ERA grant (AdaLab, ANR 14-CHR2-0001-01).

This article has been published as part of BMC Systems Biology Volume 9 Supplement 6, 2015: Joint 26th Genome Informatics Workshop and 14th International Conference on Bioinformatics: Systems biology. The full contents of the supplement are available online at http://0-www-biomedcentral-com.brum.beds.ac.uk/bmcsystbiol/supplements/9/S6.

Author information


Corresponding author

Correspondence to Etienne Birmelé.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The work presented here was carried out in collaboration between all authors. ME and EB conceived the study. TP and EB designed it and wrote the manuscript. JC, PN and RN brought their expertise on inference and on the statistical interpretation of the real data. All authors provided valuable advice on developing the proposed method and revising the manuscript. All authors read and approved the final manuscript.

Electronic supplementary material


Additional File 1: File containing PR curves for varying α, µ, the number of genes/samples and the number of belief propagation iterations. It also contains figures illustrating the FDR estimation on simulated data. (PDF 607 KB)

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Picchetti, T., Chiquet, J., Elati, M. et al. A model for gene deregulation detection using expression data. BMC Syst Biol 9 (Suppl 6), S6 (2015). https://0-doi-org.brum.beds.ac.uk/10.1186/1752-0509-9-S6-S6
