
Optimal CUR matrix decompositions. (English) Zbl 1315.65042

Proceedings of the 46th annual ACM symposium on theory of computing, STOC ’14, New York, NY, USA, May 31 – June 3, 2014. New York, NY: Association for Computing Machinery (ACM) (ISBN 978-1-4503-2710-7). 353-362 (2014).
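The paper concerns CUR decompositions, which approximate a matrix A by a product C U R in which C consists of actual columns of A and R of actual rows of A. The sketch below is only a minimal illustration of the general CUR format, not the optimal algorithm of the paper: it uses uniform column/row sampling and the standard Frobenius-optimal middle factor U = C⁺ A R⁺; the sampling scheme and the parameters c, r are assumptions chosen purely for illustration.

    # Illustrative CUR approximation by uniform column/row sampling (NOT the paper's method).
    import numpy as np

    def naive_cur(A, c, r, seed=0):
        """Return (C, U, R): C = c sampled columns of A, R = r sampled rows of A,
        and U = pinv(C) @ A @ pinv(R), which minimizes ||A - C U R||_F for fixed C, R."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        cols = rng.choice(n, size=c, replace=False)   # uniform sampling: illustrative assumption
        rows = rng.choice(m, size=r, replace=False)   # uniform sampling: illustrative assumption
        C = A[:, cols]
        R = A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
        return C, U, R

    if __name__ == "__main__":
        # Low-rank test matrix (rank at most 100).
        A = np.random.default_rng(1).standard_normal((200, 100)) @ \
            np.random.default_rng(2).standard_normal((100, 300))
        C, U, R = naive_cur(A, c=60, r=60)
        err = np.linalg.norm(A - C @ U @ R, "fro") / np.linalg.norm(A, "fro")
        print(f"relative Frobenius error: {err:.3e}")

Careful (non-uniform) column and row selection, which is the subject of the paper, yields far stronger relative-error guarantees with an optimal number of sampled columns and rows.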

MSC:

65F30 Other matrix algorithms (MSC2010)
65F50 Computational methods for sparse matrices
65Y20 Complexity and performance of numerical algorithms

Software:

SuLQ

References:

[1] “List of open problems in sublinear algorithms: Problem 12: Deterministic CUR-type decompositions,” http://sublinear.info/12.
[2] D. Achlioptas, “Database-friendly random projections: Johnson-Lindenstrauss with binary coins,” J. Comput. Syst. Sci., vol. 66, no. 4, pp. 671-687, 2003. 10.1016/S0022-0000(03)00025-4 · Zbl 1054.68040
[3] J. Batson, D. Spielman, and N. Srivastava, “Twice-Ramanujan sparsifiers,” in Proceedings of the 41st annual ACM symposium on Theory of computing. ACM, 2009, pp. 255-262. 10.1145/1536414.1536451 · Zbl 1304.05130
[4] M. W. Berry, S. A. Pulatova, and G. Stewart, “Algorithm 844: Computing sparse reduced-rank approximations to sparse matrices,” ACM Transactions on Mathematical Software (TOMS), vol. 31, no. 2, pp. 252-269, 2005. 10.1145/1067967.1067972 · Zbl 1070.65539
[5] J. Bien, Y. Xu, and M. W. Mahoney, “CUR from a sparse optimization viewpoint,” arXiv preprint arXiv:1011.0413, 2010.
[6] C. Boutsidis, P. Drineas, and M. Magdon-Ismail, “Near optimal column based matrix reconstruction,” SIAM Journal on Computing (SICOMP), 2013.
[7] C. Boutsidis and A. Gittens, “Improved matrix algorithms via the subsampled randomized Hadamard transform,” SIAM Journal on Matrix Analysis and Applications (SIMAX), 2013. · Zbl 1286.65054
[8] C. Boutsidis, M. W. Mahoney, and P. Drineas, “An improved approximation algorithm for the column subset selection problem,” in Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2009, pp. 968-977. · Zbl 1420.68235
[9] K. L. Clarkson and D. P. Woodruff, “Low rank approximation and regression in input sparsity time,” in STOC, 2013. 10.1145/2488608.2488620
[10] K. L. Clarkson and D. P. Woodruff, “Low rank approximation and regression in input sparsity time,” arXiv report: http://arxiv.org/pdf/1207.6365v4.pdf, 2013.
[11] K. Clarkson and D. Woodruff, “Numerical linear algebra in the streaming model,” in Proceedings of the 41st annual ACM symposium on Theory of computing (STOC), 2009. 10.1145/1536414.1536445 · Zbl 1304.65138
[12] A. Deshpande, L. Rademacher, S. Vempala, and G. Wang, “Matrix approximation and projective clustering via volume sampling,” in Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, 2006, pp. 1117-1126. · Zbl 1192.68889
[13] A. Deshpande and S. Vempala, “Adaptive sampling and fast low-rank matrix approximation,” in RANDOM-APPROX, 2006. 10.1007/11830924_28 · Zbl 1155.68575
[14] P. Drineas and R. Kannan, “Fast Monte-Carlo algorithms for approximate matrix multiplication,” in Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science, 2001, pp. 452-459.
[15] P. Drineas and R. Kannan, “Pass efficient algorithms for approximating large matrices,” in SODA, 2003, pp. 223-232. · Zbl 1095.68748
[16] P. Drineas, R. Kannan, and M. Mahoney, “Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication,” SIAM Journal of Computing, vol. 36, no. 1, pp. 132-157, 2006. 10.1137/S0097539704442684 · Zbl 1111.68147
[17] P. Drineas, R. Kannan, and M. Mahoney, “Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix,” SIAM Journal of Computing, vol. 36, no. 1, pp. 158-183, 2006. 10.1137/S0097539704442696 · Zbl 1111.68148
[18] P. Drineas, R. Kannan, and M. Mahoney, “Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition,” SIAM Journal of Computing, vol. 36, no. 1, pp. 184-206, 2006. 10.1137/S0097539704442702 · Zbl 1111.68149
[19] P. Drineas, M. W. Mahoney, and S. Muthukrishnan, “Relative-error CUR matrix decompositions,” SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 2, pp. 844-881, 2008. 10.1137/07070471X · Zbl 1183.68738
[20] P. Drineas, M. Mahoney, and S. Muthukrishnan, “Polynomial time algorithm for column-row based relative-error low-rank matrix approximation,” DIMACS, Tech. Rep. 2006-04, March 2006.
[21] P. Drineas and M. W. Mahoney, “On the Nyström method for approximating a Gram matrix for improved kernel-based learning,” The Journal of Machine Learning Research, vol. 6, pp. 2153-2175, 2005. · Zbl 1222.68186
[22] S. Friedland and A. Torokhti, “Generalized rank-constrained matrix approximations,” SIAM Journal on Matrix Analysis and Applications, vol. 29, no. 2, pp. 656-659, 2007. 10.1137/06065551 · Zbl 1144.15004
[23] A. Frieze, R. Kannan, and S. Vempala, “Fast Monte-Carlo algorithms for finding low-rank approximations,” in Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science, 1998, pp. 370-378.
[24] A. Gittens and M. W. Mahoney, “Revisiting the Nyström method for improved large-scale machine learning,” arXiv preprint arXiv:1303.1849, 2013.
[25] S. Goreinov, E. Tyrtyshnikov, and N. Zamarashkin, “A theory of pseudoskeleton approximations,” Linear Algebra and its Applications, vol. 261, no. 1-3, pp. 1-21, 1997. · Zbl 0877.65021
[26] S. Goreinov, N. Zamarashkin, and E. Tyrtyshnikov, “Pseudo-skeleton approximations by matrices of maximal volume,” Mathematical Notes, vol. 62, no. 4, pp. 515-519, 1997. · Zbl 0916.65040
[27] M. Gu and L. Miranian, “Strong rank revealing Cholesky factorization,” Electronic Transactions on Numerical Analysis, vol. 17, pp. 76-92, 2004. · Zbl 1069.65023
[28] V. Guruswami and A. K. Sinop, “Optimal column-based low-rank matrix reconstruction,” in Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2012, pp. 1207-1214. · Zbl 1421.68227
[29] N. Halko, P. Martinsson, and J. Tropp, “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,” SIAM Review, vol. 53, no. 2, pp. 217-288, 2011. 10.1137/090771806 · Zbl 1269.65043
[30] T. Hwang, W. Lin, and D. Pierce, “Improved bound for rank revealing LU factorizations,” Linear algebra and its applications, vol. 261, no. 1-3, pp. 173-186, 1997. · Zbl 0883.65020
[31] R. Kannan, S. Vempala, and D. P. Woodruff, “Nimble algorithms for cloud computing,” arXiv preprint arXiv:1304.3162, v3, 2013.
[32] M. Magdon-Ismail, “Row sampling for matrix algorithms via a non-commutative Bernstein bound,” arXiv preprint arXiv:1008.0587, 2010.
[33] M. W. Mahoney and P. Drineas, “CUR matrix decompositions for improved data analysis,” Proceedings of the National Academy of Sciences, vol. 106, no. 3, pp. 697-702, 2009. · Zbl 1202.68480
[34] A. McGregor, “Open problems in data streams and related topics,” in IITK Workshop on Algorithms For Data Streams, 2006.
[35] X. Meng and M. W. Mahoney, “Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression,” in STOC. ACM, 2013, pp. 91-100. 10.1145/2488608.2488621 · Zbl 1293.68150
[36] L. Miranian and M. Gu, “Strong rank revealing LU factorizations,” Linear algebra and its applications, vol. 367, pp. 1-16, 2003. · Zbl 1020.65016
[37] J. Nelson and H. L. Nguyên, “OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings,” in FOCS, 2013. 10.1109/FOCS.2013.21
[38] C. Pan, “On the existence and computation of rank-revealing LU factorizations,” Linear Algebra and its Applications, vol. 316, no. 1-3, pp. 199-222, 2000. · Zbl 0962.65023
[39] V. Rokhlin and M. Tygert, “A fast randomized algorithm for overdetermined linear least-squares regression,” Proceedings of the National Academy of Sciences, vol. 105, no. 36, p. 13212, 2008. · Zbl 1513.62144
[40] M. Rudelson and R. Vershynin, “Sampling from large matrices: An approach through geometric functional analysis,” JACM: Journal of the ACM, vol. 54, 2007. 10.1145/1255443.1255449 · Zbl 1326.68333
[41] T. Sarlos, “Improved approximation algorithms for large matrices via random projections,” in IEEE Symposium on Foundations of Computer Science (FOCS), 2006. 10.1109/FOCS.2006.37
[42] K. C. Sou and A. Rantzer, “On generalized matrix approximation problem in the spectral norm,” Linear Algebra and its Applications, vol. 436, no. 7, pp. 2331-2341, 2012. · Zbl 1236.15051
[43] G. Stewart, “Four algorithms for the efficient computation of truncated QR approximations to a sparse matrix,” Numerische Mathematik, vol. 83, pp. 313-323, 1999. · Zbl 0957.65031
[44] E. Tyrtyshnikov, “Mosaic-skeleton approximations,” Calcolo, vol. 33, no. 1, pp. 47-57, 1996. · Zbl 0906.65048
[45] E. Tyrtyshnikov, “Incomplete cross approximation in the mosaic-skeleton method,” Computing, vol. 64, no. 4, pp. 367-380, 2000. 10.1007/s006070070031 · Zbl 0964.65048
[46] S. Wang and Z. Zhang, “Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling,” Journal of Machine Learning Research, 2013. · Zbl 1318.65023
[47] A. Zouzias and N. Freris, “Randomized extended Kaczmarz for solving least squares,” SIAM Journal on Matrix Analysis and Applications, 2013. · Zbl 1273.65053