
Quantum machine learning: a classical perspective. (English) Zbl 1402.68154

Summary: Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.
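The closing point about uploading classical data into quantum form refers to what is commonly called amplitude encoding (see, e.g., Grover and Rudolph [54] in the reference list below). As a purely illustrative, hedged sketch in Python (not taken from the paper), the following snippet shows only the classical preprocessing step: a data vector is padded to a power-of-two dimension and normalized so that its entries can serve as the amplitudes of an n-qubit state; the Born-rule measurement probabilities are then the squared amplitudes. An actual state-preparation circuit is assumed to be available and is not shown.

    import numpy as np

    def amplitude_encode(x):
        # Pad x to the next power of two (so it fits n qubits) and normalize it
        # to unit Euclidean norm. This is only the classical bookkeeping; the
        # quantum circuit that prepares the state (e.g. Grover-Rudolph) is assumed.
        x = np.asarray(x, dtype=float)
        dim = 1 << int(np.ceil(np.log2(len(x))))   # next power of two
        padded = np.zeros(dim)
        padded[:len(x)] = x
        norm = np.linalg.norm(padded)
        if norm == 0:
            raise ValueError("cannot encode the zero vector")
        return padded / norm                       # amplitudes of an n-qubit state

    # Example: a 3-dimensional data point becomes a 2-qubit state.
    amps = amplitude_encode([3.0, 1.0, 2.0])
    probs = amps ** 2                              # Born-rule probabilities
    print(amps, probs.sum())                       # probabilities sum to 1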

MSC:

68T05 Learning and adaptive systems in artificial intelligence
68-03 History of computer science
01A61 History of mathematics in the 21st century
81P68 Quantum computation

References:

[1] Krizhevsky, A.; Sutskever, I.; Hinton, GE, ImageNet classification with deep convolutional neural networks. In \(Advances in Neural Information Processing Systems\) (eds F Pereira, CJC Burges, L Bottou, KQ Weinberger), pp. 1097-1105. Red Hook, NY: Curran Associates, Inc, (2012)
[2] Silver, D., Mastering the game of Go with deep neural networks and tree search, Nature, 529, 484-489, (2016) · doi:10.1038/nature16961
[3] Markov, IL, Limits on fundamental limits to computation, Nature, 512, 147-154, (2014) · doi:10.1038/nature13570
[4] Shor, PW, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput., 26, 1484-1509, (1997) · Zbl 1005.11065 · doi:10.1137/S0097539795293172
[5] Van Dam, W.; Hallgren, S.; Ip, L., Quantum algorithms for some hidden shift problems, SIAM J. Comput., 36, 763-778, (2006) · Zbl 1126.81019 · doi:10.1137/S009753970343141X
[6] Hallgren, S., Polynomial-time quantum algorithms for Pell’s equation and the principal ideal problem, J. ACM, 54, 4, (2007) · Zbl 1311.11114 · doi:10.1145/1206035.1206039
[7] Adcock, J., Advances in quantum machine learning. (http://arxiv.org/abs/1512.02900), (2015)
[8] Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S., Quantum machine learning, Nature, 549, 195-202, (2017) · doi:10.1038/nature23474
[9] Montanaro, A., Quantum algorithms: an overview, NPJ Quantum Inf., 2, 15023, (2016) · doi:10.1038/npjqi.2015.23
[10] Bacon, D.; Van Dam, W., Recent progress in quantum algorithms, Commun. ACM., 53, 84-93, (2010) · doi:10.1145/1646353.1646375
[11] Bishop, CM, \(Pattern recognition and machine learning\), vol. 1, (2006), Springer · Zbl 1107.68072
[12] Murphy, KP, \(Machine learning: a probabilistic perspective\), (2012), The MIT Press · Zbl 1295.68003
[13] de Touzalin, A.; Heijman, F.; Cirac, I.; Murray, R.; Calarco, T., The quantum manifesto. See http://qurope.eu/manifesto, (2016)
[14] Nielsen, MA; Chuang, IL, \(Quantum computation and quantum information\), (2010), Cambridge University Press · Zbl 1288.81001
[15] Jozsa, R.; Linden, N., On the role of entanglement in quantum-computational speed-up, Proc. R. Soc. Lond. A, 459, 2011-2032, (2003) · Zbl 1092.81518 · doi:10.1098/rspa.2002.1097
[16] Aaronson, S.; Ambainis, A., The need for structure in quantum speedups. (http://arxiv.org/abs/0911.0996), (2009)
[17] Preskill, J., Fault-tolerant quantum computation, (1998)
[18] Papadimitriou, CH, \(Computational complexity\), (2003), John Wiley and Sons Ltd
[19] Arora, S.; Barak, B., \(Computational complexity: a modern approach\), (2009), Cambridge University Press · Zbl 1193.68112
[20] Valiant, LG, A theory of the learnable, Commun. ACM., 27, 1134-1142, (1984) · Zbl 0587.68077 · doi:10.1145/1968.1972
[21] Vapnik, VN, \(Statistical learning theory\), vol. 1, (1998), Wiley · Zbl 0935.62007
[22] Bauer, F.; Pereverzev, S.; Rosasco, L., On regularization algorithms in learning theory, J. Complex., 23, 52-72, (2007) · Zbl 1109.68088 · doi:10.1016/j.jco.2006.07.001
[23] Cucker, F.; Smale, S., On the mathematical foundations of learning, Bull. Amer. Math. Soc., 39, 1-49, (2002) · Zbl 0983.68162 · doi:10.1090/S0273-0979-01-00923-5
[24] Rasmussen, CE; Williams, CKI, \(Gaussian processes for machine learning\), (2006), The MIT Press · Zbl 1177.68165
[25] Zhang, Y.; Duchi, J.; Wainwright, M., Divide and conquer kernel ridge regression. In \(Conf. on Learning Theory, Princeton, NJ\), pp. 592-617. Brookline, MA: Microtome Publishing, (2013)
[26] Rahimi, A.; Recht, B., Random features for large-scale kernel machines. In \(Advances Neural Information Processing Systems\) (eds JC Platt, D Koller, Y Singer, ST Roweis), vol. 3, pp. 1177-1184. Red Hook, NY: Curran Associates, Inc, (2007)
[27] Smola, AJ; Schölkopf, B., Sparse greedy matrix approximation for machine learning. In \(Proc. of the Int. Conf. on Machine Learning\), pp. 911-918. San Francisco, CA: Morgan Kaufmann Publishers, (2000)
[28] Williams, CK; Seeger, M., Using the Nyström method to speed up kernel machines. In \(Proc. of the 13th Int. Conf. on Neural Information Processing Systems, Vancouver, Canada\), pp. 661-667. Cambridge, MA: The MIT Press, (2000)
[29] Rudi, A.; Camoriano, R.; Rosasco, L., Less is more: Nyström computational regularization. In \(Advances in Neural Information Processing Systems\) (eds C Cortes, ND Lawrence, DD Lee, M Sugiyama, R Garnett), pp. 1657-1665. Red Hook, NY: Curran Associates, Inc, (2015)
[30] Arunachalam, S.; de Wolf, R., Guest column: a survey of quantum learning theory, SIGACT News, 48, 41-67, (2017) · doi:10.1145/3106700.3106710
[31] Bshouty, NH; Jackson, JC, Learning DNF over the uniform distribution using a quantum example oracle, SIAM J. Comput., 28, 1136-1153, (1998) · Zbl 0918.68033 · doi:10.1137/S0097539795293123
[32] Servedio, RA; Gortler, SJ, Equivalences and separations between quantum and classical learnability, SIAM J. Comput., 33, 1067-1092, (2004) · Zbl 1101.68623 · doi:10.1137/S0097539704412910
[33] Atici, A.; Servedio, RA, Improved bounds on quantum learning algorithms, Quantum Inf. Process., 4, 355-386, (2005) · Zbl 1130.81018 · doi:10.1007/s11128-005-0001-2
[34] Zhang, C., An improved lower bound on query complexity for quantum PAC learning, Inf. Process. Lett., 111, 40-45, (2010) · Zbl 1260.68192 · doi:10.1016/j.ipl.2010.10.007
[35] Arunachalam, S.; de Wolf, R., Optimal quantum sample complexity of learning algorithms. In \(Proc. 32nd Computational Complexity Conference, CCC 2017, Riga, Latvia, 6-9 July 2017\). LIPIcs 79. Wadern, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, (2017) · Zbl 1440.68096
[36] Angluin, D., Queries and concept learning, Mach. Learn., 2, 319-342, (1988) · Zbl 1470.68050 · doi:10.1023/A:1022821128753
[37] Bshouty, NH; Cleve, R.; Kannan, S.; Tamon, C., Oracles and queries that are sufficient for exact learning. In \(Proc. of the 7th Annu. Conf. on Computational Learning Theory, New Brunswick, NJ\), pp. 130-139. New York, NY: ACM, (1994)
[38] Klivans, AR; Servedio, R., Learning DNF in time \(2^{\tilde{O}(n^{1/3})}\). In \(Proc. of the 33rd Annu. ACM Symp. on Theory of Computing, Crete, Greece\), pp. 258-265. New York, NY: ACM, (2001) · Zbl 1323.68328
[39] Verbeurgt, KA, Learning DNF under the uniform distribution in quasi-polynomial time. In \(COLT ’90: Proc. of the 3rd Annu. Workshop on Computational Learning Theory\), pp. 314-326. Cambridge, MA: The MIT Press, (1990)
[40] Ben-David, S.; Eiron, N.; Simon, HU, Limitations of learning via embeddings in Euclidean half spaces, J. Mach. Learn. Res., 3, 441-461, (2002) · Zbl 1084.68551
[41] Khardon, R.; Servedio, RA, Maximum margin algorithms with Boolean kernels, J. Mach. Learn. Res., 6, 1405-1429, (2005) · Zbl 1222.68098
[42] Bernstein, E.; Vazirani, U., Quantum complexity theory, SIAM J. Comput., 26, 1411-1473, (1997) · Zbl 0895.68042 · doi:10.1137/S0097539796300921
[43] Kearns, M.; Valiant, L., Cryptographic limitations on learning Boolean formulae and finite automata, J. ACM, 41, 67-95, (1994) · Zbl 0807.68073 · doi:10.1145/174644.174647
[44] Aaronson, S., \(Quantum computing since Democritus\), (2013), Cambridge University Press · Zbl 1353.68003
[45] Giovannetti, V.; Lloyd, S.; Maccone, L., Quantum random access memory, Phys. Rev. Lett., 100, 160501, (2008) · Zbl 1228.81125 · doi:10.1103/PhysRevLett.100.160501
[46] Giovannetti, V.; Lloyd, S.; Maccone, L., Architectures for a quantum random access memory, Phys. Rev. A, 78, 052310, (2008) · doi:10.1103/PhysRevA.78.052310
[47] Aaronson, S., Read the fine print, Nat. Phys., 11, 291-293, (2015) · doi:10.1038/nphys3272
[48] Arunachalam, S.; Gheorghiu, V.; Jochym-O’Connor, T.; Mosca, M.; Srinivasan, PV, On the robustness of bucket brigade quantum RAM, New. J. Phys., 17, 123010, (2015) · Zbl 1452.81061 · doi:10.1088/1367-2630/17/12/123010
[49] Grover, LK, A fast quantum mechanical algorithm for database search. In \(Proc. of the 28th Annu. ACM Symp. on Theory of Computing, Philadelphia, PA\), pp. 212-219. New York, NY: ACM, (1996) · Zbl 0922.68044
[50] Steiger, DS; Troyer, M., \(Racing in parallel: quantum versus classical\). Quantum Machine Learning Workshop. Waterloo, Canada: Perimeter Institute for Theoretical Physics. See http://pirsa.org/displayFlash.php?id=16080019, (2016)
[51] Csanky, L., Fast parallel matrix inversion algorithms, SIAM J. Comput., 5, 618-623, (1976) · Zbl 0353.68063 · doi:10.1137/0205040
[52] Prakash, A., Quantum algorithms for linear algebra and machine learning. PhD thesis, University of California, Berkeley, CA, USA, (2014)
[53] Bennett, CH; Bernstein, E.; Brassard, G.; Vazirani, U., Strengths and weaknesses of quantum computing, SIAM J. Comput., 26, 1510-1523, (1997) · Zbl 0895.68044 · doi:10.1137/S0097539796300933
[54] Grover, L.; Rudolph, T., Creating superpositions that correspond to efficiently integrable probability distributions. (http://arxiv.org/abs/quant-ph/0208112), (2002)
[55] Shewchuk, JR, An introduction to the conjugate gradient method without the agonizing pain. Technical Report no. ICG:865018. Carnegie-Mellon University, Department of Computer Science, Pittsburgh, PA, USA, (1994)
[56] Frieze, A.; Kannan, R.; Vempala, S., Fast Monte-Carlo algorithms for finding low-rank approximations, J. ACM, 51, 1025-1041, (2004) · Zbl 1125.65005 · doi:10.1145/1039488.1039494
[57] Harrow, AW; Hassidim, A.; Lloyd, S., Quantum algorithm for linear systems of equations, Phys. Rev. Lett., 103, 150502, (2009) · doi:10.1103/PhysRevLett.103.150502
[58] Lloyd, S.; Mohseni, M.; Rebentrost, P., Quantum principal component analysis, Nat. Phys., 10, 631-633, (2014) · doi:10.1038/nphys3029
[59] Rebentrost, P.; Mohseni, M.; Lloyd, S., Quantum support vector machine for big data classification, Phys. Rev. Lett., 113, 130503, (2014) · doi:10.1103/PhysRevLett.113.130503
[60] Kerenidis, I.; Prakash, A., Quantum recommendation systems. In \(Proc. 8th Innovations in Theoretical Computer Science Conf., ITCS 2017, Berkeley, CA, 9-11 January 2017\). LIPIcs 67. Wadern, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, (2017) · Zbl 1402.68189
[61] Zhao, Z.; Fitzsimons, JK; Fitzsimons, JF, Quantum assisted Gaussian process regression. (http://arxiv.org/abs/1512.03929), (2015)
[62] Schuld, M.; Sinayskiy, I.; Petruccione, F., Prediction by linear regression on a quantum computer, Phys. Rev. A, 94, 022342, (2016) · doi:10.1103/PhysRevA.94.022342
[63] Cosnard, M.; Robert, Y., Complexity of parallel QR factorization, J. ACM, 33, 712-723, (1986) · doi:10.1145/6490.214102
[64] Li, S., \(Fast algorithms for sparse matrix inverse computations. PhD Thesis, Stanford University, Stanford, CA, USA\), (2009)
[65] Li, S.; Darve, E., Extension and optimization of the FIND algorithm: computing Green’s and less-than Green’s functions, J. Comput. Phys., 231, 1121-1139, (2012) · Zbl 1239.65027 · doi:10.1016/j.jcp.2011.05.027
[66] Halko, N.; Martinsson, P-G; Tropp, JA, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53, 217-288, (2011) · Zbl 1269.65043 · doi:10.1137/090771806
[67] Coppersmith, D.; Winograd, S., Matrix multiplication via arithmetic progressions, J. Symb. Comput., 9, 251-280, (1990) · Zbl 0702.65046 · doi:10.1016/S0747-7171(08)80013-2
[68] Golub, GH; Van Loan, CF, \(Matrix computations\), vol. 3, (2012), JHU Press
[69] Ambainis, A., Variable time amplitude amplification and quantum algorithms for linear algebra problems. In \(Proc. 29th Int. Symp. on Theoretical Aspects of Computer Science, STACS 2012, Paris, France, 29 February-3 March 2012\). LIPIcs 14. Wadern, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, (2012) · Zbl 1245.68084
[70] Childs, AM; Kothari, R.; Somma, RD, Quantum linear systems algorithm with exponentially improved dependence on precision. (http://arxiv.org/abs/1511.02306), (2015)
[71] Clader, BD; Jacobs, BC; Sprouse, CR, Preconditioned quantum linear system algorithm, Phys. Rev. Lett., 110, 250504, (2013) · doi:10.1103/PhysRevLett.110.250504
[72] Wossnig, L.; Zhao, Z.; Prakash, A., A quantum linear system algorithm for dense matrices. (http://arxiv.org/abs/1704.06174), (2017)
[73] Childs, AM, Quantum algorithms: equation solving by simulation, Nat. Phys., 5, (2009) · doi:10.1038/nphys1473
[74] Spielman, DA; Teng, S-H, Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time, J. ACM, 51, 385-463, (2004) · Zbl 1192.90120 · doi:10.1145/990308.990310
[75] Spielman, DA; Teng, S-H, Smoothed analysis: an attempt to explain the behavior of algorithms in practice, Commun. ACM, 52, 76-84, (2009) · doi:10.1145/1562764.1562785
[76] Duchi, JC; Mackey, LW; Jordan, MI, On the consistency of ranking algorithms. In \(Proc. of the 27th Int. Conf. on Machine Learning (ICML-10), Haifa, Israel\), pp. 327-334. Brookline, MA: Microtome Publishing, (2010)
[77] Caponnetto, A.; De Vito, E., Optimal rates for the regularized least-squares algorithm, Found. Comput. Math., 7, 331-368, (2007) · Zbl 1129.68058 · doi:10.1007/s10208-006-0196-8
[78] Heller, D., A survey of parallel algorithms in numerical linear algebra, SIAM Rev., 20, 740-777, (1978) · Zbl 0408.68033 · doi:10.1137/1020096
[79] Jolliffe, IT, \(Principal component analysis\), pp. 115-128. New York, NY: Springer, (1986)
[80] Trefethen, LN; Bau, D., \(Numerical linear algebra\), vol. 50, (1997), SIAM · Zbl 0874.65013
[81] Szegedy, M., Quantum speed-up of Markov chain based algorithms. In \(Proc. 45th Annu. IEEE Symp. on Foundations of Computer Science, Rome, Italy\), pp. 32-41. New York, NY: IEEE, (2004)
[82] Neal, RM, Probabilistic inference using Markov chain Monte Carlo methods. Technical Report no. CRG-TR-93-1. Department of Computer Science, University of Toronto, Toronto, Canada, (1993)
[83] Neal, RM, Annealed importance sampling, Stat. Comput., 11, 125-139, (2001) · doi:10.1023/A:1008923215028
[84] Gilks, WR; Wild, P., Adaptive rejection sampling for Gibbs sampling, Appl. Stat., 41, 337-348, (1992) · Zbl 0825.62407 · doi:10.2307/2347565
[85] Propp, JG; Wilson, DB, Exact sampling with coupled Markov chains and applications to statistical mechanics, Random Struct. Algorithms, 9, 223-252, (1996) · Zbl 0859.60067 · doi:10.1002/(SICI)1098-2418(199608/09)9:1/2<223::AID-RSA14>3.0.CO;2-O
[86] Doucet, A.; De Freitas, N.; Gordon, N., \(Sequential Monte Carlo methods in practice\), pp. 3-14. New York, NY: Springer, (2001) · Zbl 0967.00022
[87] Del Moral, P.; Doucet, A.; Jasra, A., Sequential Monte Carlo samplers, J. R. Stat. Soc., 68, 411-436, (2006) · Zbl 1105.62034 · doi:10.1111/j.1467-9868.2006.00553.x
[88] Sinclair, A., Markov chains and rapid mixing. In \(Algorithms for random generation and counting: a Markov chain approach\), pp. 42-62. Berlin, Germany: Springer, (1993) · Zbl 0780.68096
[89] Wocjan, P.; Chiang, C-F; Nagaj, D.; Abeyesinghe, A., Quantum algorithm for approximating partition functions, Phys. Rev. A, 80, 022340, (2009) · doi:10.1103/PhysRevA.80.022340
[90] Somma, R.; Boixo, S.; Barnum, H.; Knill, E., Quantum simulations of classical annealing processes, Phys. Rev. Lett., 101, 130504, (2008) · doi:10.1103/PhysRevLett.101.130504
[91] Poulin, D.; Wocjan, P., Sampling from the thermal quantum Gibbs state and evaluating partition functions with a quantum computer, Phys. Rev. Lett., 103, 220502, (2009) · doi:10.1103/PhysRevLett.103.220502
[92] Chiang, C-F; Wocjan, P., Quantum algorithm for preparing thermal Gibbs states—detailed analysis, Quantum Cryptography Computing, 26, 138-147, (2010) · Zbl 1206.81032
[93] Bilgin, E.; Boixo, S., Preparing thermal states of quantum systems by dimension reduction, Phys. Rev. Lett., 105, 170405, (2010) · doi:10.1103/PhysRevLett.105.170405
[94] Temme, K.; Osborne, TJ; Vollbrecht, KG; Poulin, D.; Verstraete, F., Quantum metropolis sampling, Nature, 471, 87-90, (2011) · doi:10.1038/nature09770
[95] Schwarz, M.; Temme, K.; Verstraete, F., Preparing projected entangled pair states on a quantum computer, Phys. Rev. Lett., 108, 110502, (2012) · doi:10.1103/PhysRevLett.108.110502
[96] Chowdhury, AN; Somma, RD, Quantum algorithms for Gibbs sampling and hitting-time estimation. (http://arxiv.org/abs/1603.02940), (2016)
[97] Ambainis, A.; Kempe, J.; Rivosh, A., Coins make quantum walks faster. In \(Proc. of the 16th Annu. ACM-SIAM Symp. on Discrete algorithms, Stockholm, Sweden\), pp. 1099-1108. Philadelphia, PA: Society for Industrial and Applied Mathematics, (2005) · Zbl 1297.68076
[98] Krovi, H.; Brun, TA, Hitting time for quantum walks on the hypercube, Phys. Rev. A, 73, 032341, (2006) · doi:10.1103/PhysRevA.73.032341
[99] Krovi, H.; Ozols, M.; Roland, J., Adiabatic condition and the quantum hitting time of Markov chains, Phys. Rev. A, 82, 022333, (2010) · doi:10.1103/PhysRevA.82.022333
[100] Magniez, F.; Nayak, A.; Roland, J.; Santha, M., Search via quantum walk, SIAM J. Comput., 40, 142-164, (2011) · Zbl 1223.05289 · doi:10.1137/090745854
[101] Knill, E.; Ortiz, G.; Somma, RD, Optimal quantum measurements of expectation values of observables, Phys. Rev. A, 75, 012328, (2007) · doi:10.1103/PhysRevA.75.012328
[102] Vandenberghe, L.; Boyd, S., Semidefinite programming, SIAM Rev., 38, 49-95, (1996) · Zbl 0845.65023 · doi:10.1137/1038003
[103] Grötschel, M.; Lovász, L.; Schrijver, A., \(Geometric algorithms and combinatorial optimization\), vol. 2, (2012), Springer Science & Business Media
[104] Lanckriet, GR; Cristianini, N.; Bartlett, P.; Ghaoui, LE; Jordan, MI, Learning the kernel matrix with semidefinite programming, J. Mach. Learn. Res., 5, 27-72, (2004) · Zbl 1222.68241
[105] Weinberger, KQ; Sha, F.; Zhu, Q.; Saul, LK, Graph Laplacian regularization for large-scale semidefinite programming. In \(Advances in Neural Information Processing Systems\) (eds JC Platt, D Koller, Y Singer, ST Roweis), pp. 1489-1496. Red Hook, NY: Curran Associates, Inc, (2007)
[106] Jacob, L.; Obozinski, G.; Vert, J-P, Group Lasso with overlap and graph Lasso. In \(Proc. of the 26th Annu. Int. Conf. on Machine Learning, Montreal, Canada\), pp. 433-440. New York, NY: ACM, (2009)
[107] Lee, YT; Sidford, A.; Wong, SC-W, A faster cutting plane method and its implications for combinatorial and convex optimization. In \(Proc. 2015 IEEE 56th Annu. Symp. on Foundations of Computer Science (FOCS), Berkeley, CA\), pp. 1049-1065. New York, NY: IEEE, (2015)
[108] Arora, S.; Kale, S., A combinatorial, primal-dual approach to semidefinite programs. In \(Proc. of the 39th Annu. ACM Symp. on Theory of Computing, San Diego, CA\), pp. 227-236. New York, NY: ACM, (2007) · Zbl 1232.68177
[109] Brandão, FGSL; Svore, K., Quantum speed-ups for semidefinite programming. (http://arxiv.org/abs/1609.05537), (2016)
[110] van Apeldoorn, J.; Gilyén, A.; Gribling, S.; de Wolf, R., Quantum SDP-solvers: better upper and lower bounds. (http://arxiv.org/abs/1705.01843), (2017)
[111] Creignou, N.; Khanna, S.; Sudan, M., \(Complexity classifications of Boolean constraint satisfaction problems\), (2001), SIAM · Zbl 0981.68058
[112] Farhi, E.; Goldstone, J.; Gutmann, S., A quantum approximate optimization algorithm. (http://arxiv.org/abs/1411.4028), (2014)
[113] Farhi, E.; Goldstone, J.; Gutmann, S.; Neven, H., Quantum algorithms for fixed qubit architectures. (http://arxiv.org/abs/1703.06199), (2017)
[114] Farhi, E.; Goldstone, J.; Gutmann, S., A quantum approximate optimization algorithm applied to a bounded occurrence constraint problem. (http://arxiv.org/abs/1412.6062), (2014)
[115] Barak, B., Beating the random assignment on constraint satisfaction problems of bounded degree. (http://arxiv.org/abs/1505.03424), (2015)
[116] Farhi, E.; Goldstone, J.; Gutmann, S.; Sipser, M., Quantum computation by adiabatic evolution. (http://arxiv.org/abs/quant-ph/0001106), (2000)
[117] Kirkpatrick, S., Optimization by simulated annealing: quantitative studies, J. Stat. Phys., 34, 975-986, (1984) · doi:10.1007/BF01009452
[118] Messiah, A., \(Quantum mechanics\), vol. 1, (1958), Interscience
[119] Reichardt, BW, The quantum adiabatic optimization algorithm and local minima. In \(Proc. of the 36th Annu. ACM Symp. on Theory of Computing, Chicago, IL\), pp. 502-510. New York, NY: ACM, (2004) · Zbl 1192.81098
[120] Crosson, E.; Harrow, AW, Simulated quantum annealing can be exponentially faster than classical simulated annealing. In \(Proc. 2016 IEEE 57th Annu. Symp. on Foundations of Computer Science (FOCS), New Brunswick, NJ\), pp. 714-723. New York, NY: IEEE, (2016)
[121] Aharonov, D.; Van Dam, W.; Kempe, J.; Landau, Z.; Lloyd, S.; Regev, O., Adiabatic quantum computation is equivalent to standard quantum computation, SIAM Rev., 50, 755-787, (2008) · Zbl 1152.81008 · doi:10.1137/080734479
[122] Barahona, F., On the computational complexity of Ising spin glass models, J. Phys. A, 15, 3241-3253, (1982) · doi:10.1088/0305-4470/15/10/028
[123] Farhi, E.; Goldstone, J.; Gutmann, S., A numerical study of the performance of a quantum adiabatic evolution algorithm for satisfiability. (http://arxiv.org/abs/quant-ph/0007071), (2000)
[124] Farhi, E.; Goldstone, J.; Gutmann, S.; Lapan, J.; Lundgren, A.; Preda, D., A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem, Science, 292, 472-475, (2001) · Zbl 1226.81046 · doi:10.1126/science.1057726
[125] Farhi, E.; Gosset, D.; Hen, I.; Sandvik, A.; Shor, P.; Young, A.; Zamponi, F., Performance of the quantum adiabatic algorithm on random instances of two optimization problems on regular hypergraphs, Phys. Rev. A, 86, 052334, (2012) · doi:10.1103/PhysRevA.86.052334
[126] Hen, I.; Young, A., Exponential complexity of the quantum adiabatic algorithm for certain satisfiability problems, Phys. Rev. E, 84, 061152, (2011) · doi:10.1103/PhysRevE.84.061152
[127] Young, AP; Knysh, S.; Smelyanskiy, VN, Size dependence of the minimum excitation gap in the quantum adiabatic algorithm, Phys. Rev. Lett., 101, 170503, (2008) · doi:10.1103/PhysRevLett.101.170503
[128] Young, A.; Knysh, S.; Smelyanskiy, V., First-order phase transition in the quantum adiabatic algorithm, Phys. Rev. Lett., 104, 020502, (2010) · doi:10.1103/PhysRevLett.104.020502
[129] Kak, S., On quantum neural computing, Inf. Sci. (NY), 83, 143-160, (1995) · doi:10.1016/0020-0255(94)00095-S
[130] Smolensky, P., Information processing in dynamical systems: foundations of harmony theory. Technical Report no. CU-CS-321-86. University of Colorado Boulder, Department of Computer Science, Boulder, CO, USA, (1986)
[131] Long, PM; Servedio, R., Restricted Boltzmann machines are hard to approximately evaluate or simulate. In \(Proc. of the 27th Int. Conf. on Machine Learning (ICML-10), Haifa, Israel\), pp. 703-710. Brookline, MA: Microtome Publishing, (2010)
[132] Dumoulin, V.; Goodfellow, IJ; Courville, A.; Bengio, Y., On the challenges of physical implementations of RBMs. In \(Proc. 28th AAAI Conf. on Artificial Intelligence, Quebec City, Canada, 27-31 July 2014\). Palo Alto, CA: The AAAI Press, (2014)
[133] Wiebe, N.; Kapoor, A.; Svore, KM, Quantum deep learning. (http://arxiv.org/abs/1412.3489), (2014)
[134] Brassard, G.; Hoyer, P.; Mosca, M.; Tapp, A., Quantum amplitude amplification and estimation, Contemp. Math., 305, 53-74, (2002) · Zbl 1063.81024 · doi:10.1090/conm/305/05215
[135] Hinton, GE, Training products of experts by minimizing contrastive divergence, Neural. Comput., 14, 1771-1800, (2002) · Zbl 1010.68111 · doi:10.1162/089976602760128018
[136] Wiebe, N.; Kapoor, A.; Granade, C.; Svore, KM, Quantum inspired training for Boltzmann machines. (http://arxiv.org/abs/1507.02642), (2015)
[137] Adachi, SH; Henderson, MP, Application of quantum annealing to training of deep neural networks. (http://arxiv.org/abs/1510.06356), (2015)
[138] Denil, M.; De Freitas, N., Toward the implementation of a quantum RBM. Paper presented at the Deep Learning and Unsupervised Feature Learning Workshop of the 25th Annu. Conf. on Neural Information Processing Systems (NIPS), Granada, Spain, 2011, (2011)
[139] Benedetti, M.; Realpe-Gómez, J.; Biswas, R.; Perdomo-Ortiz, A., Estimation of effective temperatures in quantum annealers for sampling applications: a case study with possible applications in deep learning, Phys. Rev. A, 94, 022308, (2016) · doi:10.1103/PhysRevA.94.022308
[140] Amin, MH; Andriyash, E.; Rolfe, J.; Kulchytskyy, B.; Melko, R., Quantum Boltzmann machine. (http://arxiv.org/abs/1601.02036), (2016)
[141] Kieferova, M.; Wiebe, N., Tomography and generative data modeling via quantum Boltzmann training. (http://arxiv.org/abs/1612.05204), (2016)
[142] Schuld, M.; Sinayskiy, I.; Petruccione, F., The quest for a quantum neural network, Quantum Inf. Process., 13, 2567-2586, (2014) · Zbl 1305.81055 · doi:10.1007/s11128-014-0809-8
[143] Cybenko, G., Approximation by superpositions of a sigmoidal function, Math. Control Signals Syst., 2, 303-314, (1989) · Zbl 0679.94019 · doi:10.1007/BF02551274
[144] Wan, KH; Dahlsten, O.; Kristjánsson, H.; Gardner, R.; Kim, M., Quantum generalisation of feedforward neural networks. (http://arxiv.org/abs/1612.01045), (2016)
[145] Romero, J.; Olson, J.; Aspuru-Guzik, A., Quantum autoencoders for efficient compression of quantum data, Quantum Sci. Technol., 2, 045001, (2017) · doi:10.1088/2058-9565/aa8072
[146] Bishop, CM, Training with noise is equivalent to Tikhonov regularization, Neural. Comput., 7, 108-116, (1995) · doi:10.1162/neco.1995.7.1.108
[147] An, G., The effects of adding noise during backpropagation training on a generalization performance, Neural. Comput., 8, 643-674, (1996) · doi:10.1162/neco.1996.8.3.643
[148] Neelakantan, A.; Vilnis, L.; Le, QV; Sutskever, I.; Kaiser, L.; Kurach, K.; Martens, J., Adding gradient noise improves learning for very deep networks. (http://arxiv.org/abs/1511.06807), (2015)
[149] Bottou, L., Stochastic gradient learning in neural networks, Proc. Neuro-Nîmes 91, (1991), EC2
[150] Welling, M.; Teh, YW, Bayesian learning via stochastic gradient Langevin dynamics. In \(Proc. of the 28th Int. Conf. on Machine Learning (ICML-11), Bellevue, WA\), pp. 681-688. Brookline, MA: Microtome Publishing, (2011)
[151] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y., Generative adversarial nets. In \(Advances in Neural Information Processing Systems\) (eds Z Ghahramani, M Welling, C Cortes, ND Lawrence, KQ Weinberger), pp. 2672-2680. Red Hook, NY: Curran Associates, Inc, (2014)
[152] Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z., Rethinking the inception architecture for computer vision. In \(Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV\), pp. 2818-2826. New York, NY: IEEE, (2016)
[153] Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X., Improved techniques for training GANs. In \(Advances in Neural Information Processing Systems\) (eds DD Lee, M Sugiyama, UV Luxburg, I Guyon, R Garnett), pp. 2234-2242. Red Hook, NY: Curran Associates, Inc, (2016)
[154] Warde-Farley, D.; Goodfellow, I., Adversarial perturbations of deep neural networks. In \(Perturbation, optimization, and statistics\) (eds T Hazan, G Papandreou, D Tarlow), pp. 311-342. Cambridge, MA: The MIT Press, (2016) · Zbl 1436.68322
[155] Breuer, H-P; Petruccione, F., \(The theory of open quantum systems\), (2002), Oxford University Press · Zbl 1053.81001
[156] Blum, A.; Furst, M.; Jackson, J.; Kearns, M.; Mansour, Y.; Rudich, S., Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In \(Proc. of the 26th Annu. ACM Symp. on Theory of Computing, Santa Fe, NM\), pp. 253-262. New York, NY: ACM, (1994) · Zbl 1345.68186
[157] Cross, AW; Smith, G.; Smolin, JA, Quantum learning robust against noise, Phys. Rev. A, 92, 012327, (2015) · doi:10.1103/PhysRevA.92.012327
[158] Lyubashevsky, V., The parity problem in the presence of noise, decoding random linear codes, and the subset sum problem. In \(Approximation, randomization and combinatorial optimization. Algorithms and techniques\) (eds M Goemans, K Jansen, JDP Rolim, L Trevisan), pp. 378-389. Berlin, Germany: Springer, (2005) · Zbl 1142.68399
[159] Berlekamp, E.; McEliece, R.; Van Tilborg, H., On the inherent intractability of certain coding problems (corresp.), IEEE Trans. Inf. Theory, 24, 384-386, (1978) · Zbl 0377.94018 · doi:10.1109/TIT.1978.1055873
[160] Håstad, J., Some optimal inapproximability results, J. ACM, 48, 798-859, (2001) · Zbl 1127.68405 · doi:10.1145/502090.502098
[161] Grilo, AB; Kerenidis, I., Learning with errors is easy with quantum samples. (http://arxiv.org/abs/1702.08255), (2017)
[162] Fortnow, L., The status of the P versus NP problem, Commun. ACM, 52, 78-86, (2009) · doi:10.1145/1562164.1562186
[163] Suchanek, FM; Kasneci, G.; Weikum, G., Yago: a core of semantic knowledge. In \(Proc. of the 16th Int. Conf. on World Wide Web, Banff, Canada\), pp. 697-706. New York, NY: ACM, (2007)
[164] Dong, X.; Gabrilovich, E.; Heitz, G.; Horn, W.; Lao, N.; Murphy, K.; Strohmann, T.; Sun, S.; Zhang, W., Knowledge vault: a web-scale approach to probabilistic knowledge fusion. In \(Proc. of the 20th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, New York, NY\), pp. 601-610. New York, NY: ACM, (2014)
[165] Carlson, A.; Betteridge, J.; Kisiel, B.; Settles, B.; Hruschka, ER; Mitchell, TM, Toward an architecture for never-ending language learning. In \(Proc. of the 24th AAAI Conf. on Artificial Intelligence, Atlanta, GA, 11-15 July 2010\), vol. 5, p. 3. Palo Alto, CA: Association for the Advancement of Artificial Intelligence, (2010)
[166] Getoor, L.; Taskar, B., \(Introduction to statistical relational learning\), (2007), The MIT Press · Zbl 1141.68054
[167] Kolda, TG; Bader, BW, Tensor decompositions and applications, SIAM Rev., 51, 455-500, (2009) · Zbl 1173.65029 · doi:10.1137/07070111X
[168] Hillar, CJ; Lim, L-H, Most tensor problems are NP-hard, J. ACM, 60, 45, (2013) · Zbl 1281.68126 · doi:10.1145/2512329
[169] Mu, C.; Huang, B.; Wright, J.; Goldfarb, D., Square deal: lower bounds and improved relaxations for tensor recovery. In \(Proc. Int. Conf. on Machine Learning, Beijing, China\), pp. 73-81. Brookline, MA: Microtome Publishing, (2014)
[170] Richard, E.; Montanari, A., A statistical model for tensor PCA. In \(Advances in Neural Information Processing Systems\) (eds Z Ghahramani, M Welling, C Cortes, ND Lawrence, KQ Weinberger), pp. 2897-2905. Red Hook, NY: Curran Associates, Inc, (2014)
[171] Romera-Paredes, B.; Pontil, M., A new convex relaxation for tensor completion. In \(Advances in Neural Information Processing Systems\) (eds CJC Burges, L Bottou, M Welling, Z Ghahramani, KQ Weinberger), pp. 2967-2975. Red Hook, NY: Curran Associates, Inc, (2013)
[172] Romera-Paredes, B.; Aung, H.; Bianchi-Berthouze, N.; Pontil, M., Multilinear multitask learning. In \(Proc. Int. Conf. on Machine Learning, Atlanta, GA\), pp. 1444-1452. Brookline, MA: Microtome Publishing, (2013)
[173] Signoretto, M.; Dinh, QT; De Lathauwer, L.; Suykens, JA, Learning with tensors: a framework based on convex optimization and spectral regularization, Mach. Learn., 94, 303-351, (2014) · Zbl 1319.68191 · doi:10.1007/s10994-013-5366-3
[174] Gandy, S.; Recht, B.; Yamada, I., Tensor completion and low-n-rank tensor recovery via convex optimization, Inverse. Probl., 27, 025010, (2011) · Zbl 1211.15036 · doi:10.1088/0266-5611/27/2/025010
[175] Lin, H.; Bilmes, J., A class of submodular functions for document summarization. In \(Proc. of the 49th Annu. Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, Association for Computational Linguistics, Portland, OR\), pp. 510-520. Cambridge, MA: The MIT Press, (2011)
[176] Kempe, D.; Kleinberg, J.; Tardos, É, Maximizing the spread of influence through a social network. In \(Proc. of the 9th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, Washington, DC\), pp. 137-146. New York, NY: ACM, (2003)
[177] Narasimhan, M.; Bilmes, JA, Local search for balanced submodular clusterings. In \(IJCAI’07: Proc. of the 20th Int. Joint Conf. on Artificial Intelligence, Hyderabad, India, 6-12 January 2007\), pp. 981-986. San Francisco, CA: Morgan Kaufmann Publishers, (2007)
[178] Bach, F., Submodular functions: from discrete to continuous domains. (http://arxiv.org/abs/1511.00394), (2015)
[179] Lovász, L., Submodular functions and convexity. In \(Mathematical programming: the state of the art\) (eds A Bachem, B Korte, M Grötschel), pp. 235-257. Berlin, Germany: Springer, (1983) · Zbl 0566.90060
[180] Schrijver, A., A combinatorial algorithm minimizing submodular functions in strongly polynomial time, J. Comb. Theory, 80, 346-355, (2000) · Zbl 1052.90067 · doi:10.1006/jctb.2000.1989
[181] Iwata, S.; Fleischer, L.; Fujishige, S., A combinatorial strongly polynomial algorithm for minimizing submodular functions, J. ACM, 48, 761-777, (2001) · Zbl 1127.90402 · doi:10.1145/502090.502096
[182] Orlin, JB, A faster strongly polynomial time algorithm for submodular function minimization, Math. Program., 118, 237-251, (2009) · Zbl 1179.90290 · doi:10.1007/s10107-007-0189-2
[183] Pearl, J., Bayesian networks: a model of self-activated memory for evidential reasoning. In \(Proc. of the 7th Conf. of the Cognitive Science Society, Irvine, CA, 15-17 August 1985\), pp. 329-334. Austin, TX: Cognitive Science Society, (1985)
[184] Kindermann, R.; Snell, L., \(Markov random fields and their applications, vol. 1. Providence, RI: American Mathematical Society\), (1980) · Zbl 1229.60003
[185] Cooper, GF, The computational complexity of probabilistic inference using Bayesian belief networks, Artif. Intell., 42, 393-405, (1990) · Zbl 0717.68080 · doi:10.1016/0004-3702(90)90060-D
[186] Lloyd, S.; Garnerone, S.; Zanardi, P., Quantum algorithms for topological and geometric analysis of big data. (http://arxiv.org/abs/1408.3106), (2014)
[187] Schuld, M.; Sinayskiy, I.; Petruccione, F., An introduction to quantum machine learning, Contemp. Phys., 56, 172-185, (2014) · doi:10.1080/00107514.2014.964942
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases these data have been complemented or enhanced with data from zbMATH Open. The list attempts to reflect the references in the original paper as accurately as possible, without claiming completeness or perfect matching.