
Estimation of covariance and precision matrix, network structure, and a view toward systems biology. (English) Zbl 07914939


MSC:

62-08 Computational methods for problems pertaining to statistics
Full Text: DOI

References:

[1] Mardia K, Kent J, Bibby J. Multivariate Analysis. London; New York: Academic Press; 1979. · Zbl 0432.62029
[2] McLachlan GJ. Discriminant Analysis and Statistical Pattern Recognition. New York: John Wiley & Sons; 1992. · Zbl 0850.62481
[3] Stein C. Estimation of a covariance matrix. Rietz Lecture, 1975.
[4] Drton M, Perlman MD. Multiple testing and error control in Gaussian graphical model selection. Stat Sci 2007, 22:430-449. · Zbl 1246.62143
[5] Drton M, Perlman MD. A SINful approach to Gaussian graphical model selection. J Stat Plan Inference 2008, 138:1179-1200. · Zbl 1130.62068
[6] Naul B, Taylor J. Sparse Steinian covariance estimation. J Comput Graph Stat 2017, 26:355-366.
[7] Meinshausen N, Bühlmann P. High dimensional graphs and variable selection with the LASSO. Ann Stat 2006, 34:1436-1462. · Zbl 1113.62082
[8] Zhang B, Horvath S. A general framework for weighted gene coexpression network analysis. Stat Appl Genet Mol Biol 2005, 4: Article 17. · Zbl 1077.92042
[9] Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 2008, 9:432-441. · Zbl 1143.62076
[10] Witten DM, Friedman JH, Simon N. New insights and faster computations for the graphical lasso. J Comput Graph Stat 2011, 20:892-900.
[11] Mazumder R, Hastie T. Exact covariance thresholding into connected components for large-scale graphical lasso. J Mach Learn Res 2012, 13:723-736.
[12] Hsieh C-J, Sustik MA, Dhillon IS, Ravikumar PK, Poldrack R. BIG & QUIC: sparse inverse covariance estimation for a million variables. In: Burges CJC, Bottou L, Welling M, Ghahramani Z, Weinberger KQ, eds. Advances in Neural Information Processing Systems, vol. 26. New York: Curran Associates, Inc.; 2013, 3165-3173.
[13] Hsieh C-J, Sustik MA, Dhillon IS, Ravikumar P. QUIC: quadratic approximation for sparse inverse covariance estimation. J Mach Learn Res 2014, 15:2911-2947. · Zbl 1319.65048
[14] Liu W, Luo X. Fast and adaptive sparse precision matrix estimation in high dimensions. J Multivar Anal 2015, 135:153-162. · Zbl 1307.62148
[15] Liu H, Wang L. TIGER: a tuning-insensitive approach for optimally estimating Gaussian graphical models. Electron J Stat 2017, 11:241-294. · Zbl 1395.62007
[16] Warton DI. Penalized normal likelihood and ridge regularization of correlation and covariance matrices. J Am Stat Assoc 2008, 103:340-349. · Zbl 1471.62362
[17] Ledoit O, Wolf M. Honey, I shrunk the sample covariance matrix. J Portfolio Manage 2004, 30:110-119.
[18] Ledoit O, Wolf M. A well-conditioned estimator for large-dimensional covariance matrices. J Multivar Anal 2004, 88:365-411. · Zbl 1032.62050
[19] Ledoit O, Wolf M. Spectrum estimation: a unified framework for covariance matrix estimation and PCA in large dimensions. J Multivar Anal 2015, 139:360-384. · Zbl 1328.62340
[20] Ledoit O, Wolf M. Numerical implementation of the QuEST function. Comput Stat Data Anal 2017, 115:199-223. · Zbl 1466.62127
[21] Liu H, Roeder K, Wasserman L. Stability approach to regularization selection (StARS) for high dimensional graphical models. In: Proceedings of the 23rd International Conference on Neural Information Processing Systems, NIPS'10, Vancouver, British Columbia, Canada. New York: Curran Associates Inc.; 2010, 1432-1440.
[22] Banerjee O, Ghaoui LE, d'Aspremont A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J Mach Learn Res 2008, 9:485-516. · Zbl 1225.68149
[23] Foygel R, Drton M. Extended Bayesian information criteria for Gaussian graphical models. Adv Neural Inf Process Syst 2010, 23:604-612.
[24] Tibshirani R. Regression shrinkage and selection via the LASSO. J R Stat Soc Series B 1996, 58:267-288. · Zbl 0850.62538
[25] Karypis G, Kumar V. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J Sci Comput 1998, 20:359-392. · Zbl 0915.68129
[26] van Wieringen W, Peeters C. Ridge estimation of inverse covariance matrices from high-dimensional data. Comput Stat Data Anal 2016, 103:284-303. · Zbl 1466.62204
[27] Kuismin M, Kemppainen JT, Sillanpää MJ. Precision matrix estimation with ROPE. J Comput Graph Stat 2017, 26:682-694.
[28] Kuismin M, Sillanpää MJ. Use of Wishart prior and simple extensions for sparse precision matrix estimation. PLoS One 2016, 11:e0148171.
[29] Cai T, Liu W, Luo X. A constrained \(\ell_1\) minimization approach to sparse precision matrix estimation. J Am Stat Assoc 2011, 106:594-607. · Zbl 1232.62087
[30] Candès E, Tao T. The Dantzig selector: statistical estimation when p is much larger than n. Ann Stat 2007, 35:2313-2351. · Zbl 1139.62019
[31] Belloni A, Chernozhukov V, Wang L. Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika 2011, 98:791-806. · Zbl 1228.62083
[32] Yuan M. High dimensional inverse covariance matrix estimation via linear programming. J Mach Learn Res 2010, 11:2261-2286. · Zbl 1242.62043
[33] Zhao T, Liu H, Roeder K, Lafferty J, Wasserman L. The \(\text{huge}\) package for high-dimensional undirected graph estimation in R. J Mach Learn Res 2012, 13:1059-1062. · Zbl 1283.68311
[34] Bien J, Tibshirani RJ. Sparse estimation of a covariance matrix. Biometrika 2011, 98:807-820. · Zbl 1228.62063
[35] Zou H. The adaptive lasso and its oracle properties. J Am Stat Assoc 2006, 101:1418-1429. · Zbl 1171.62326
[36] Deng X, Tsui K-W. Penalized covariance matrix estimation using a matrix-logarithm transformation. J Comput Graph Stat 2013, 22:494-512.
[37] Won J-H, Lim J, Kim S-J, Rajaratnam B. Condition-number-regularized covariance estimation. J R Stat Soc Series B Stat Methodol 2013, 75:427-450. · Zbl 1411.62146
[38] Fang Y, Wang B, Feng Y. Tuning parameter selection in regularized estimations of large covariance matrices. J Stat Comput Simul 2016, 86:494-509. · Zbl 1510.62238
[39] Meinshausen N, Bühlmann P. Stability selection. J R Stat Soc Series B Stat Methodol 2010, 72:417-473. · Zbl 1411.62142
[40] Whittaker J. Graphical Models. West Sussex: John Wiley & Sons; 1990.
[41] Edwards D. Introduction to Graphical Modelling. 2nd ed. New York: Springer-Verlag; 2000. · Zbl 0952.62003
[42] Ha MJ, Sun W. Partial correlation matrix estimation using ridge penalty followed by thresholding and re-estimation. Biometrics 2014, 70:765-773.
[43] Efron B. Large-scale simultaneous hypothesis testing: the choice of a null distribution. J Am Stat Assoc 2004, 99:96-104. · Zbl 1089.62502
[44] Dezeure R, Bühlmann P, Meier L, Meinshausen N. High-dimensional inference: confidence intervals, p-values and R-software \(\mathsf{hdi}\). Stat Sci 2015, 30:533-558. · Zbl 1426.62183
[45] Bühlmann P, Kalisch M, Meier L. High-dimensional statistics with a view toward applications in biology. Ann Rev Stat Appl 2014, 1:255-278.
[46] Krzakala F, Moore C, Mossel E, Neeman J, Sly A, Zdeborová L, Zhang P. Spectral redemption in clustering sparse networks. Proc Natl Acad Sci 2013, 110:20935-20940. · Zbl 1359.62252
[47] Ranola JM, Langfelder P, Lange K, Horvath S. Cluster and propensity based approximation of a network. BMC Syst Biol 2013, 7:21.
[48] Langfelder P, Mischel PS, Horvath S. When is hub gene selection better than standard meta-analysis? PLoS One 2013, 8:1-16.
[49] Äijö T, Bonneau R. Biophysically motivated regulatory network inference: progress and prospects. Hum Hered 2016, 81:62-77.
[50] Bickel PJ, Levina E. Covariance regularization by thresholding. Ann Stat 2008, 36:2577-2604. · Zbl 1196.62062
[51] Eisen MB, Spellman PT, Brown PO, Botstein D. Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci 1998, 95:14863-14868.
[52] Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics 2008, 9:559.
[53] Ghazalpour A, Doss S, Zhang B, Wang S, Plaisier C, Castellanos R, Brozell A, Schadt EE, Drake TA, Lusis AJ, et al. Integrating genetic and network analysis to characterize genes related to mouse weight. PLoS Genet 2006, 2:1-11.
[54] Li Y, Jackson SA. Gene network reconstruction by integration of prior biological knowledge. G3 2015, 5:1075-1079.
[55] Wille A, Zimmermann P, Vranová E, Fürholz A, Laule O, Bleuler S, Hennig L, Prelić A, von Rohr P, Thiele L, et al. Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biol 2004, 5:R92.
[56] Shimamura T, Imoto S, Yamaguchi R, Miyano S. Weighted lasso in graphical Gaussian modeling for large gene network estimation based on microarray data. Genome Inform 2007, 19:142-153.
[57] Jokipii-Lukkari S, Sundell D, Nilsson O, Hvidsten TR, Street NR, Tuominen H. Norwood: a gene expression resource for evo-devo studies of conifer wood development. New Phytol 2017, 216:482-494. https://doi.org/10.1111/nph.14458. · doi:10.1111/nph.14458
[58] Krämer N, Schäfer J, Boulesteix A-L. Regularized estimation of large-scale gene association networks using graphical Gaussian models. BMC Bioinformatics 2009, 10:384.
[59] Ruan J, Dean AK, Zhang W. A general co-expression network-based approach to gene expression analysis: comparison and applications. BMC Syst Biol 2010, 4:8.
[60] Khondker ZS, Zhu H, Chu H, Lin W, Ibrahim JG. The Bayesian covariance lasso. Stat Interface 2013, 6:243-259. · Zbl 1327.62138
[61] Wang H. Bayesian graphical lasso models and efficient posterior computation. Bayesian Anal 2012, 7:867-886. · Zbl 1330.62041
[62] Bhadra A, Mallick BK. Joint high-dimensional Bayesian variable and covariance selection with an application to eQTL analysis. Biometrics 2013, 69:447-457. · Zbl 1274.62722
[63] Kubokawa T, Srivastava MS. Estimation of the precision matrix of a singular Wishart distribution and its application in high-dimensional data. J Multivar Anal 2008, 99:1906-1928. · Zbl 1284.62092
[64] Bouriga M, Féron O. Estimation of covariance matrices based on hierarchical inverse-Wishart priors. J Stat Plan Inference 2013, 143:795-808. · Zbl 1428.62216
[65] Huang A, Wand MP. Simple marginally noninformative prior distributions for covariance matrices. Bayesian Anal 2013, 8:439-452. · Zbl 1329.62135
[66] Mohammadi A, Wit EC. \(\text{BDgraph}\): an R package for Bayesian structure learning in graphical models. ArXiv e-prints, January 2015.
[67] Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Berlin: Springer; 2009. · Zbl 1273.62005
[68] Hastie T, Tibshirani R, Wainwright M. Statistical Learning with Sparsity: The Lasso and Generalizations. Boca Raton, FL: Chapman & Hall/CRC; 2015. · Zbl 1319.68003
[69] Pourahmadi M. High-Dimensional Covariance Estimation. New York: John Wiley & Sons; 2013. · Zbl 1276.62031
[70] Tong T, Wang C, Wang Y. Estimation of variances and covariances for high-dimensional data: a selective review. Wiley Interdiscip Rev Comput Stat 2014, 6:255-264. · Zbl 07912728
[71] Fan J, Liao Y, Liu H. An overview of the estimation of large covariance and precision matrices. Econom J 2016, 19:C1-C32. · Zbl 1521.62083
This reference list is based on information provided by the publisher or obtained from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases these data have been complemented or enhanced with data from zbMATH Open. The list attempts to reflect the references in the original paper as accurately as possible, without claiming completeness or a perfect match.