Abstract
Computational methods are proposed for solving a convex quadratic program (QP). Active-set methods are defined for a particular primal and dual formulation of a QP with general equality constraints and simple lower bounds on the variables. In the first part of the paper, two methods are proposed, one primal and one dual. These methods generate a sequence of iterates that are feasible with respect to the equality constraints associated with the optimality conditions of the primal–dual form. The primal method maintains feasibility of the primal inequalities while driving the infeasibilities of the dual inequalities to zero. The dual method maintains feasibility of the dual inequalities while moving to satisfy the primal inequalities. In each of these methods, the search directions satisfy a KKT system of equations formed from Hessian and constraint components associated with an appropriate column basis. The composition of the basis is specified by an active-set strategy that guarantees the nonsingularity of each set of KKT equations. Each of the proposed methods is a conventional active-set method in the sense that an initial primal- or dual-feasible point is required. In the second part of the paper, it is shown how the quadratic program may be solved as a coupled pair of primal and dual quadratic programs created from the original by simultaneously shifting the simple-bound constraints and adding a penalty term to the objective function. Any conventional column basis may be made optimal for such a primal–dual pair of shifted-penalized problems. The shifts are then updated using the solution of either the primal or the dual shifted problem. An obvious application of this approach is to solve a shifted dual QP to define an initial feasible point for the primal (or vice versa). The computational performance of each of the proposed methods is evaluated on a set of convex problems from the CUTEst test collection.
References
Altman, A., Gondzio, J.: Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization. Optim. Methods Softw. 11/12(1–4), 275–302 (1999)
Bartlett, R.A., Biegler, L.T.: QPSchur: a dual, active-set, Schur-complement method for large-scale and structured convex quadratic programming. Optim. Eng. 7(1), 5–32 (2006)
Beale, E.M.L.: An introduction to Beale’s method of quadratic programming. In: Abadie, J. (ed.) Nonlinear Programming, pp. 143–153. North Holland, Amsterdam (1967)
Beale, E.M.L., Benveniste, R.: Quadratic programming. In: Greenberg, H.J. (ed.) Design and Implementation of Optimization Software, pp. 249–258. Sijthoff and Noordhoff, The Netherlands (1978)
Best, M.J.: An algorithm for the solution of the parametric quadratic programming problem. CORR 82-14, Department of Combinatorics and Optimization, University of Waterloo, Canada (1982)
Best, M.J.: An algorithm for the solution of the parametric quadratic programming problem. In: Fischer, H., Riedmüller, B., Schäffler, S. (eds.) Applied Mathematics and Parallel Computing: Festschrift for Klaus Ritter, pp. 57–76. Physica, Heidelberg (1996)
Bland, R.G.: New finite pivoting rules for the simplex method. Math. Oper. Res. 2(2), 103–107 (1977)
Boland, N.L.: A dual-active-set algorithm for positive semi-definite quadratic programming. Math. Program. Ser. A 78(1), 1–27 (1997)
Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, Ph.L.: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. 21(1), 123–160 (1995)
Bunch, J.R., Parlett, B.N.: Direct methods for solving symmetric indefinite systems of linear equations. SIAM J. Numer. Anal. 8, 639–655 (1971)
Bunch, J.R., Kaufman, L.: Some stable methods for calculating inertia and solving symmetric linear systems. Math. Comput. 31, 163–179 (1977)
Bunch, J.R., Kaufman, L.: A computational method for the indefinite quadratic programming problem. Linear Algebra Appl. 34, 341–370 (1980)
Cartis, C.C., Gould, N.I.M.: Finding a point in the relative interior of a polyhedron. Report RAL-TR-2006-016, Rutherford Appleton Laboratory, Oxon, UK, December (2006)
Charnes, A.: Optimality and degeneracy in linear programming. Econometrica 20(2), 160–170 (1952)
Chiche, A., Gilbert, J.C.: How the augmented Lagrangian algorithm can deal with an infeasible convex quadratic optimization problem. J. Convex Anal. 22(4) (2016)
Curtis, F.E., Han, Z., Robinson, D.P.: A globally convergent primal–dual active-set framework for large-scale convex quadratic optimization. Comput. Optim. Appl. (2014). doi:10.1007/s10589-014-9681-9
Dantzig, G.B., Orden, A., Wolfe, P.: The generalized simplex method for minimizing a linear form under linear inequality constraints. Pac. J. Math. 5, 183–195 (1955)
Delbos, F., Gilbert, J.C.: Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems. J. Convex Anal. 12(1), 45–69 (2005)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. Ser. A 91(2), 201–213 (2002)
Duff, I.S.: MA57—a code for the solution of sparse symmetric definite and indefinite systems. ACM Trans. Math. Softw. 30(2), 118–144 (2004)
Duff, I.S., Reid, J.K.: The multifrontal solution of indefinite sparse symmetric linear equations. ACM Trans. Math. Softw. 9, 302–325 (1983)
Ferreau, H.J., Bock, H.G., Diehl, M.: An online active set strategy to overcome the limitations of explicit MPC. Int. J. Robust Nonlinear Control 18(8), 816–830 (2008)
Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Prog. Comp. 6(4), 327–363 (2014)
Fletcher, R.: A general quadratic programming algorithm. J. Inst. Math. Appl. 7, 76–91 (1971)
Fletcher, R.: Factorizing symmetric indefinite matrices. Linear Algebra Appl. 14, 257–272 (1976)
Fletcher, R.: Resolving degeneracy in quadratic programming. Ann. Oper. Res. 46/47(2), 307–334 (1993)
Fletcher, R.: Stable reduced Hessian updates for indefinite quadratic programming. Math. Program. Ser. B 87(2), 251–264 (2000)
Forsgren, A.: Inertia-controlling factorizations for optimization algorithms. Appl. Numer. Math. 43, 91–107 (2002)
Forsgren, A., Murray, W.: Newton methods for large-scale linear equality-constrained minimization. SIAM J. Matrix Anal. Appl. 14, 560–587 (1993)
Gilbert, J.C., Joannopoulos, É.: OQLA/QPALM—convex quadratic optimization solvers using the augmented Lagrangian approach, with an appropriate behavior on infeasible or unbounded problems. Technical report, INRIA, BP 105, 78153 Le Chesnay, France (2015)
Gill, P.E., Murray, W.: Numerically stable methods for quadratic programming. Math. Program. 14, 349–372 (1978)
Gill, P.E., Robinson, D.P.: A globally convergent stabilized SQP method. SIAM J. Optim. 23(4), 1983–2010 (2013)
Gill, P.E., Wong, E.: Methods for convex and general quadratic programming. Math. Program. Comput. 7(1), 71–112 (2015)
Gill, P.E., Gould, N.I.M., Murray, W., Saunders, M.A., Wright, M.H.: A weighted Gram–Schmidt method for convex quadratic programming. Math. Program. 30, 176–195 (1984)
Gill, P.E., Murray, W., Saunders, M.A., Wright, M.H.: A practical anti-cycling procedure for linearly constrained optimization. Math. Program. 45, 437–474 (1989)
Gill, P.E., Murray, W., Saunders, M.A., Wright, M.H.: A Schur-complement method for sparse quadratic programming. In: Cox, M.G., Hammarling, S.J. (eds.) Reliable Numerical Computation, pp. 113–138. Oxford University Press, Oxford (1990)
Gill, P.E., Murray, W., Saunders, M.A., Wright, M.H.: Inertia-controlling methods for general quadratic programming. SIAM Rev. 33(1), 1–36 (1991)
Gill, P.E., Murray, W., Saunders, M.A.: User’s guide for QPOPT 1.0: a Fortran package for quadratic programming. Report SOL 95-4, Department of Operations Research, Stanford University, Stanford, CA (1995)
Gill, P.E., Murray, W., Saunders, M.A.: SNOPT: an SQP algorithm for large-scale constrained optimization. SIAM Rev. 47, 99–131 (2005)
Gill, P.E., Murray, W., Saunders, M.A.: User’s guide for SQOPT Version 7: software for large-scale linear and quadratic programming. Numerical Analysis Report 06-1, Department of Mathematics, University of California, San Diego, La Jolla, CA (2006)
Goldfarb, D., Idnani, A.: A numerically stable dual method for solving strictly convex quadratic programs. Math. Program. 27(1), 1–33 (1983)
Gould, N.I.M.: An algorithm for large-scale quadratic programming. IMA J. Numer. Anal. 11(3), 299–324 (1991)
Gould, N.I.M., Toint, P.L.: An iterative working-set method for large-scale nonconvex quadratic programming. Appl. Numer. Math. 43(1–2), 109–128 (2002)
Gould, N.I.M., Toint, P.L.: Numerical methods for large-scale non-convex quadratic programming. In: Trends in industrial and applied mathematics (Amritsar, 2001), volume 72 of Appl. Optim., pp. 149–179. Kluwer, Dordrecht (2002)
Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)
Gould, N.I.M., Orban, D., Toint, P.L.: GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Softw. 29(4), 353–372 (2003)
Gould, N.I.M., Orban, D., Toint, P.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60(3), 545–557 (2015)
Hall, J.A.J., McKinnon, K.I.M.: The simplest examples where the simplex method cycles and conditions where EXPAND fails to prevent cycling. Technical Report MS 96-010, Department of Mathematics and Statistics, University of Edinburgh (1996)
Harris, P.M.J.: Pivot selection methods of the Devex LP code. Math. Program. 5, 1–28 (1973). Reprinted in Math. Prog. Study, 4, 30–57 (1975)
Hoyle, S.C.: A single-phase method for quadratic programming. PhD thesis, Report SOL 86-9, Department of Operations Research, Stanford University, Stanford, CA (1986)
Huynh, H.M.: A large-scale quadratic programming solver based on block-LU updates of the KKT system. PhD thesis, Program in Scientific Computing and Computational Mathematics, Stanford University, Stanford, CA (2008)
Maes, C.M.: A regularized active-set method for sparse convex quadratic programming. PhD thesis, Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA, August (2010)
Morales, J.L.: A numerical study of limited memory BFGS methods. Appl. Math. Lett. 15(4), 481–487 (2002)
Potschka, A., Kirches, C., Bock, H., Schlöder, J.: Reliable solution of convex quadratic programs with parametric active set methods. Technical report, Interdisciplinary Center for Scientific Computing, Heidelberg University, Im Neuenheimer Feld 368, 69120 Heidelberg, Germany, November (2010)
Powell, M.J.D.: On the quadratic programming algorithm of Goldfarb and Idnani. Math. Program. Stud. 25, 46–61 (1985)
Ritter, K.: A method for solving nonlinear maximum problems depending on parameters. Nav. Res. Logist. Q. 14, 147–162 (1967)
Ritter, K.: On parametric linear and quadratic programming. MRC Technical report 2197, University of Wisconsin at Madison, Wisconsin, USA (1981)
Stoer, J.: On the realization of the quadratic programming algorithm of Goldfarb and Idnani. In: Vistas in applied mathematics, Transl. Ser. Math. Engrg., pp. 167–180. Optimization Software, New York (1986)
Wong, E.: Active-set methods for quadratic programming. PhD thesis, Department of Mathematics, University of California San Diego, La Jolla, CA (2011)
Wright, S.J.: Superlinear convergence of a stabilized SQP method to a degenerate solution. Comput. Optim. Appl. 11(3), 253–275 (1998)
Acknowledgments
The authors would like to thank two referees for constructive comments that significantly improved the presentation.
A. Forsgren—Research supported by the Swedish Research Council (VR).
P. E. Gill, E. Wong—Research supported in part by Northrop Grumman Aerospace Systems, and National Science Foundation Grants DMS-1318480 and DMS-1361421.
Appendix
The appendix concerns some basic results used in previous sections. The first result shows that the nonsingularity of a KKT matrix may be established by checking that the two row blocks \(\big (H \ \ A^T \big )\) and \(\big (A \ \ -M \big )\) have full row rank.
Proposition 5
Assume that H and M are symmetric, positive semidefinite matrices. The vectors u and v satisfy
$$\begin{aligned} H u - A^T\!v = 0 \quad \text {and}\quad A u + M v = 0, \end{aligned}$$
(25)
if and only if
$$\begin{aligned} H u = 0, \quad A^T\!v = 0, \quad A u = 0, \quad \text {and}\quad M v = 0. \end{aligned}$$
(26)
Proof
If (26) holds, then (25) holds, which establishes the “if” direction. Now assume that u and v satisfy (25). Then, \(u^T\!H u - u^T\!A^T\!v = 0\) and \(v^T\!A u + v^T\!M v = 0\).
Adding these equations gives the identity \(u^T\!H u + v^T\!M v = 0\). The symmetry and positive semidefiniteness of H and M then imply \(u^T\!H u=0\) and \(v^T\!M v = 0\), which can hold only if \(H u = 0\) and \(M v=0\). With \(Hu=0\) and \(M v=0\), (25) gives \(A^T\!v=0\) and \(Au=0\), so that (26) holds, completing the proof. \(\square \)
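Proposition 5, combined with the remark preceding it, says that for symmetric positive semidefinite H and M the KKT matrix is nonsingular exactly when the row blocks \(\big (H \ \ A^T \big )\) and \(\big (A \ \ -M \big )\) both have full row rank. The following NumPy sketch illustrates this on small toy matrices (the matrices below are invented for illustration, not data from the paper):

```python
import numpy as np

def kkt_is_nonsingular(H, A, M):
    """Check nonsingularity of K = [[H, A^T], [A, -M]] via its rank."""
    K = np.block([[H, A.T], [A, -M]])
    return np.linalg.matrix_rank(K) == K.shape[0]

def blocks_have_full_row_rank(H, A, M):
    """Check the two row-block rank conditions of Proposition 5."""
    top = np.hstack([H, A.T])   # the block (H  A^T)
    bot = np.hstack([A, -M])    # the block (A  -M)
    return (np.linalg.matrix_rank(top) == top.shape[0]
            and np.linalg.matrix_rank(bot) == bot.shape[0])

# A singular but positive semidefinite H, a full-row-rank A, and M = 0:
H = np.array([[2.0, 0.0], [0.0, 0.0]])   # psd, rank 1
A = np.array([[1.0, 1.0]])               # full row rank
M = np.zeros((1, 1))                     # psd

assert blocks_have_full_row_rank(H, A, M)
assert kkt_is_nonsingular(H, A, M)

# Destroying the row rank of (H  A^T) destroys nonsingularity of K:
H2 = np.zeros((2, 2))                    # now (H2  A^T) has rank 1 < 2
assert not blocks_have_full_row_rank(H2, A, M)
assert not kkt_is_nonsingular(H2, A, M)
```

Note that neither H nor A alone need have full rank; it is only the two combined row blocks that matter.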
The next result shows that when checking a subset of the columns of a symmetric positive semidefinite matrix for linear dependence, it is only the diagonal block that is of importance. The off-diagonal block may be ignored.
Proposition 6
Let H be a symmetric, positive semidefinite matrix partitioned as
$$\begin{aligned} H = \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^T & H_{22} \end{pmatrix}. \end{aligned}$$
Then,
$$\begin{aligned} u^T\!\begin{pmatrix} H_{11}&H_{12} \end{pmatrix} = 0 \quad \text {if and only if}\quad H_{11} u = 0. \end{aligned}$$
Proof
If H is positive semidefinite, then \(H_{11}\) is positive semidefinite, and it holds that
\(u^T\!\begin{pmatrix} H_{11}&H_{12} \end{pmatrix} = 0\)
if and only if
\(u^T\!H_{11} u = 0\),
if and only if \(H_{11} u = 0\), as required. \(\square \)
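The property behind Proposition 6 is that for a positive semidefinite matrix, a null vector of a diagonal block automatically annihilates the corresponding off-diagonal block. A small numerical sketch (random toy matrix, sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive semidefinite H = B^T B whose leading
# 2x2 block H11 is singular (columns 0 and 1 of B are made equal).
B = rng.standard_normal((3, 5))
B[:, 0] = B[:, 1]
H = B.T @ B                      # symmetric psd, 5x5
H11, H12 = H[:2, :2], H[:2, 2:]  # leading block and off-diagonal block

def null_space(Mtx, tol=1e-10):
    """Orthonormal basis for the null space of Mtx via the SVD."""
    _, s, Vt = np.linalg.svd(Mtx)
    rank = int((s > tol).sum())
    return Vt[rank:].T

# Every null vector of H11 also annihilates the rows (H11  H12):
N = null_space(H11)
assert N.shape[1] >= 1                                # H11 is singular
assert np.allclose(N.T @ np.hstack([H11, H12]), 0.0)  # off-diag block ignored
```

This is why, when checking rows of \(\big (H_{11} \ \ H_{12}\big )\) for linear dependence, only the diagonal block \(H_{11}\) needs to be examined.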
In the following propositions, the distinct integers k and l, together with the integers in the index sets \(\mathcal {B}\) and \(\mathcal {N}\), define a partition of \(\mathcal {I}= \{1, 2, \dots , n\}\), i.e., \(\mathcal {I}= \mathcal {B}\cup \{k\} \cup \{l\} \cup \mathcal {N}\). If w is any n-vector, the \(n_{ B}\)-vector \(w_{ B}\) and the \(n_{ N}\)-vector \(w_{ N}\) denote the vectors of components of w associated with \(\mathcal {B}\) and \(\mathcal {N}\), respectively. For the symmetric Hessian H, the matrices \(H_{ BB }\) and \(H_{ NN }\) denote the subsets of rows and columns of H associated with the sets \(\mathcal {B}\) and \(\mathcal {N}\), respectively. The unsymmetric matrix of components \(h_{ij}\) with \(i\in \mathcal {B}\) and \(j\in \mathcal {N}\) is denoted by \(H_{ BN }\). Similarly, \(A_{ B}\) and \(A_{ N}\) denote the matrices of columns of A associated with \(\mathcal {B}\) and \(\mathcal {N}\).
The next result concerns the row rank of the \(\big (A \ \ -M \big )\) block of the KKT matrix.
Proposition 7
If the matrix \(\big ( a_l \ \ a_k \ \ A_{ B} \ \ -M \big )\) has full row rank, and there exist \(\Delta x_l\), \(\Delta x_k\), \(\Delta x_{ B}\), and \(\Delta y\) such that \(a_l \Delta x_l + a_k \Delta x_k + A_{ B}\Delta x_{ B}+ M \Delta y=0\) with \(\Delta x_k\ne 0\), then \(\big ( a_l \ \ A_{ B} \ \ -M \big )\) has full row rank.
Proof
It must be established that \(u^T\!\begin{pmatrix} a_l&A_{ B}&-M \end{pmatrix}=0\) implies that \(u=0\). For a given such u, let \(\gamma = -u^T\!a_k\), so that
$$\begin{aligned} u^T\!\begin{pmatrix} a_l&a_k&A_{ B}&-M \end{pmatrix} = \begin{pmatrix} 0&-\gamma&0&0 \end{pmatrix}. \end{aligned}$$
Then,
$$\begin{aligned} 0 = u^T\!\bigl ( a_l \Delta x_l + a_k \Delta x_k + A_{ B}\Delta x_{ B}+ M \Delta y \bigr ) = -\gamma \Delta x_k. \end{aligned}$$
As \(\Delta x_k\ne 0\), it must hold that \(\gamma = 0\), in which case \(u^T\!\begin{pmatrix} a_l&a_k&A_{ B}&-M \end{pmatrix} = 0\). As \(\big ( a_l \ \ a_k \ \ A_{ B} \ \ -M \big )\) has full row rank by assumption, it follows that \(u=0\), and \(\big ( a_l \ \ A_{ B} \ \ -M \big )\) must have full row rank. \(\square \)
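A concrete numerical instance of Proposition 7 (toy vectors invented for illustration): the column \(a_k\) is chosen as a combination of \(a_l\) and \(A_{ B}\) with a nonzero coefficient on \(a_k\) itself, and both the wide and the narrow blocks retain full row rank.

```python
import numpy as np

# Two rows; a_l and the single column of A_B span R^2, M = 0.
a_l = np.array([[1.0], [0.0]])
A_B = np.array([[0.0], [1.0]])
M   = np.zeros((2, 2))

# Choose a_k so that 2*a_l + 1*a_k + 3*A_B + M*dy = 0 with dx_k = 1 != 0.
a_k = -(2.0 * a_l + 3.0 * A_B)

wide   = np.hstack([a_l, a_k, A_B, -M])   # (a_l  a_k  A_B  -M)
narrow = np.hstack([a_l, A_B, -M])        # (a_l  A_B  -M)

assert np.linalg.matrix_rank(wide) == 2   # wide block: full row rank
assert np.linalg.matrix_rank(narrow) == 2 # narrow block keeps full row rank
```

The point of the proposition is that dropping \(a_k\) cannot reduce the row rank, because any row null vector of the narrow block would contradict \(\Delta x_k\ne 0\).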
An analogous result holds concerning the \(\big (H \ \ A^T \big )\) block of the KKT matrix.
Proposition 8
If \(\big (H_{ BB }^{} \ \ A_{ B}^T \big )\) has full row rank, and there exist quantities \(\Delta x_{ N}\), \(\Delta x_{ B}\), \(\Delta y\), and \(\Delta z_k\) such that
with \(\Delta z_k \ne 0\), then the matrix
has full row rank.
Proof
Let \(\big (\mu \ \ v^T \big )\) be any vector such that
The assumed identity (27) gives
As \(\Delta z_k\ne 0\) by assumption, it must hold that \(\mu =0\). The full row rank of \(\big (H_{ BB }^{} \ \ A_{ B}^T \big )\) then gives \(v = 0\) and
must have full row rank. Proposition 5 implies that this is equivalent to
having full row rank. \(\square \)
The next proposition concerns the primal subiterations when the constraint index k is moved from \(\mathcal {B}\) to \(\mathcal {N}\). In particular, it is shown that the \(K_l\) matrix is nonsingular after a subiteration.
Proposition 9
Assume that (\(\Delta x_l\), \(\Delta x_k\), \(\Delta x_{ B}\), \(-\Delta y\), \(-\Delta z_l\)) is the unique solution of the equations
and that \(\Delta x_k\ne 0\). Then, the matrices \(K_l\) and \(K_k\) are nonsingular, where
Proof
By assumption, the equations (28) have a unique solution with \(\Delta x_k\ne 0\). This implies that there is no solution of the overdetermined equations
Given an arbitrary matrix D and a nonzero vector f, the fundamental theorem of linear algebra implies that if \(D w = f\) has no solution, then there exists a vector v such that \(D^T\!v = 0\) and \(v^T\!f \ne 0\). Applying this result to (29) implies the existence of a nontrivial vector (\(\Delta \widetilde{x}_l\), \(\Delta \widetilde{x}_k\), \(\Delta \widetilde{x}_{ B}\), \(-\Delta \widetilde{y}\), \(-\Delta \widetilde{z}_l\), \(-\Delta \widetilde{z}_k\)) such that
with \(\Delta \widetilde{z}_l\ne 0\). The last equation of (30) gives \(\Delta \widetilde{x}_l + \Delta \widetilde{z}_l=0\), in which case \(\Delta \widetilde{x}_l \Delta \widetilde{z}_l = -\Delta \widetilde{z}_l^2 <0\) because \(\Delta \widetilde{z}_l\ne 0\). Any solution of (30) may be viewed as a solution of the equations \(H \Delta \widetilde{x}- A^T\!\Delta \widetilde{y}- \Delta \widetilde{z}= 0\), \(A \Delta \widetilde{x}+ M \Delta \widetilde{y}= 0\), \(\Delta \widetilde{z}_{ B}=0\), and \(\Delta \widetilde{x}_i = 0\) for \(i\in \{1, 2, \dots , n\}{\setminus }\{ l \}{\setminus }\{ k \}\). An argument similar to that used to establish Proposition 2 gives
which implies that \(\Delta \widetilde{x}_k \Delta \widetilde{z}_k > 0\), with \(\Delta \widetilde{x}_k\ne 0\) and \(\Delta \widetilde{z}_k\ne 0\).
As the search direction is unique, it follows from (28) that \(\big ( h_{{ Bl }}^{} \ \ H_{{ Bk }}^{} \ \ H_{ BB }^{} \ \ A_{ B}^T \big )\) has full row rank, and Proposition 6 implies that \(\big (H_{ BB }^{} \ \ A_{ B}^T \big )\) has full row rank. Hence, as \(\Delta \widetilde{z}_l\ne 0\), it follows from (30) and Proposition 8 that the matrix
has full row rank, which is equivalent to the matrix
having full row rank by Proposition 6.
Again, the search direction is unique and (28) implies that \(\big ( a_l \ \ a_k \ \ A_{ B} \ \ -M \big )\) has full row rank. As \(\Delta \widetilde{x}_k\ne 0\), Proposition 7 implies that \(\big ( a_l \ \ A_{ B} \ \ -M \big )\) must have full row rank. Consequently, Proposition 5 implies that \(K_l\) is nonsingular.
As \(\Delta \widetilde{x}_k\), \(\Delta \widetilde{x}_l\), \(\Delta \widetilde{z}_k\) and \(\Delta \widetilde{z}_l\) are all nonzero, the roles of k and l may be reversed to give the result that \(K_k\) is nonsingular. \(\square \)
The next proposition concerns the situation when a constraint index k is moved from \(\mathcal {N}\) to \(\mathcal {B}\) in a dual subiteration. In particular, it is shown that the resulting matrix \(K_{ B}\) defined after a subiteration is nonsingular.
Proposition 10
Assume that there is a unique solution of the equations
with \(\Delta z_k\ne 0\). Then, the matrices \(K_l\) and \(K_k\) are nonsingular, where
Proof
As (31) has a unique solution with \(\Delta z_k\ne 0\), there is no solution of
The fundamental theorem of linear algebra applied to (32) implies the existence of a solution of
with \(\Delta \widetilde{z}_l\ne 0\). It follows from (33) that \(\Delta \widetilde{x}_l+\Delta \widetilde{z}_l=0\). As \(\Delta \widetilde{z}_l\ne 0\), this implies \(\Delta \widetilde{x}_l \Delta \widetilde{z}_l<0\). The solution of (33) may be regarded as a solution of the homogeneous equations \(H \Delta x- A^T\!\Delta y- \Delta z= 0\), \(A \Delta x+ M \Delta y= 0\), with \(\Delta z_i = 0\), for \(i\in \mathcal {B}\), and \(\Delta x_i = 0\), for \(i\in \{1,\dots ,n\}{\setminus }\{k\}{\setminus }\{l\}\). Hence, Proposition 2 gives
so that \(\Delta \widetilde{x}_k \Delta \widetilde{z}_k > 0\). Hence, it must hold that \(\Delta \widetilde{x}_k\ne 0\) and \(\Delta \widetilde{z}_k\ne 0\).
As \(\Delta \widetilde{x}_k\ne 0\), \(\Delta \widetilde{x}_l\ne 0\), \(\Delta \widetilde{z}_k\ne 0\) and \(\Delta \widetilde{z}_l\ne 0\), the remainder of the proof is analogous to that of Proposition 9. \(\square \)
The next result gives expressions for the primal and dual objective functions in terms of the computed search directions.
Proposition 11
Assume that (x, y, z) satisfies the primal and dual equality constraints
Consider the partition \(\{1, 2, \dots , n\}= \mathcal {B}\cup \{l\}\cup \mathcal {N}\) such that \(x_{ N}+q_{ N}=0\) and \(z_{ B}+r_{ B}=0\). If the components of the direction (\(\Delta x\), \(\Delta y\), \(\Delta z\)) satisfy (9), then the primal and dual objective functions for (\(\hbox {PQP}_{q,r}\)) and (\(\hbox {DQP}_{q,r}\)), i.e.,
satisfy the identities
Proof
The directional derivative of the primal objective function is given by
where the identity \(Hx+c = A^T\!y +z\) has been used in (34a) and the identities \(A\Delta x+ M\Delta y=0\), \(\Delta x_N=0\) and \(z_{ B}+r_{ B}=0\) have been used in (34b).
The curvature in the direction (\(\Delta x,\Delta y\)) is given by
where the last step follows from Proposition 2.
The directional derivative of the dual objective function is given by
where the identity \(H \Delta x-A^T\!\Delta y-\Delta z=0\) has been used in (36c) and the identities \(Ax + My - b = 0\), \(x_{ N}+q_{ N}=0\) and \(\Delta z_{ B}=0\) have been used in (36e).
As z appears only linearly in the dual objective function, it follows from the structure of the Hessian matrices of \(f_{ P}(x,y)\) and \(f_{ D}(x,y,z)\), in combination with (35), that
\(\square \)
The final result shows that there is no loss of generality in assuming that \(\big (A \ \ M \big )\) has full row rank in (\(\hbox {PQP}_{q,r}\)).
Proposition 12
There is no loss of generality in assuming that \(\big (A \ \ M \big )\) has full row rank in (\(\hbox {PQP}_{q,r}\)).
Proof
Let (x, y, z) be any vector satisfying (2a) and (2b). Assume that \(\big (A \ \ M \big )\) has linearly dependent rows, and that \(\big (A \ \ M \big )\) and b may be partitioned conformally such that
with \(\big ( A_{1} \ \ M_{11} \ \ M_{12} \big )\) having full row rank, and
with \(A_1\in \mathbb {R}^{m_1\times n}\) and \(A_2\in \mathbb {R}^{m_2\times n}\) for some matrix \(N\in \mathbb {R}^{m_2\times m_1}\). From the linear dependence of the rows of \(\big (A \ \ M \big )\), it follows that x, y and z satisfy (2a) and (2b) if and only if
It follows from (37) that \(M_{12} = M_{11} N^T\) and \(A_2^T=A_1^T N^T\), so that x, y and z satisfy (2a) and (2b) if and only if
We may now define \(\widetilde{y}_1=y_1+N^T\!y_2\) and replace (2b) and (2a) by the system
By assumption, \(\big ( A_{1} \ \ M_{11} \ \ M_{12} \big )\) has full row rank. Proposition 6 implies that \(\big (A_{1} \ \ M_{11} \big )\) has full row rank. This gives an equivalent problem for which \(\big (A_{1} \ \ M_{11} \big )\) has full row rank. \(\square \)
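The reduction argued in Proposition 12 can be mimicked numerically: linearly dependent rows of the block \(\big (A \ \ M \big )\) are detected and removed, leaving an equivalent system with full row rank. A sketch with an invented example (the row-selection loop is an illustrative device, not the paper's construction):

```python
import numpy as np

# Toy data: the third row of (A  M) is the sum of the first two.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
M = np.zeros((3, 3))
AM = np.hstack([A, M])          # the block (A  M)

# Greedily select a maximal linearly independent subset of rows:
# keep row i only if it increases the rank of the rows kept so far.
keep, r = [], 0
for i in range(AM.shape[0]):
    if np.linalg.matrix_rank(AM[keep + [i]]) > r:
        keep.append(i)
        r += 1

assert keep == [0, 1]                                 # row 3 is redundant
assert np.linalg.matrix_rank(AM[keep]) == len(keep)   # reduced block: full row rank
```

For a consistent right-hand side b, dropping the redundant rows of \(A x + M y = b\) (together with the matching entries of b) leaves the solution set unchanged, which is the sense in which full row rank can be assumed without loss of generality.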
Forsgren, A., Gill, P.E. & Wong, E. Primal and dual active-set methods for convex quadratic programming. Math. Program. 159, 469–508 (2016). https://doi.org/10.1007/s10107-015-0966-2
Keywords
- Quadratic programming
- Active-set methods
- Convex quadratic programming
- Primal active-set methods
- Dual active-set methods