
Rate of convergence for constrained stochastic approximation algorithms. (English) Zbl 1011.62082

The authors consider stochastic approximation-type algorithms of the form (1) \( \theta_{n+1} = \theta_{n} + \varepsilon_n Y_n\), with decreasing step sizes \(\varepsilon_n \rightarrow 0\) (extensions of this basic form are also stated and referenced). They treat the case where the stochastic approximation algorithm is constrained, that is, the iterates are required not to leave a given set \(H\). The constraint is enforced by orthogonal projection back onto \(H\), which adds a correction term \(\varepsilon_n Z_n\) to the right-hand side of (1). They assume that there is a point \(\overline \theta\) such that \(\theta_n\) converges to \(\overline \theta\) with probability one. Convergence of such algorithms (with probability one or in the weak sense) is well established for both the unconstrained and the constrained case. Rate-of-convergence results have been obtained for the unconstrained case and, under the restriction that the limit point \({\overline \theta}\) lies in the interior of \(H\), for the constrained case.
The authors prove rate-of-convergence theorems for a large class of systems in the case where the limit \({\overline \theta}\) lies on the boundary of \(H\). Under the assumption that the noise is a martingale difference, they prove tightness of \(U_n\), the centered and normalized iterate sequence, and its weak convergence to an appropriate stationary reflected linear diffusion, whose variance then plays the usual role as a measure of the rate of convergence. Extensions and applications are also provided.
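The projected scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper's construction: it uses a hypothetical quadratic objective on the box \(H = [0,1]^2\), a target chosen so that the limit point lies on the boundary of \(H\), and Gaussian martingale-difference noise; all names and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(theta, lo, hi):
    # Orthogonal projection onto the box H = [lo, hi]^d
    return np.clip(theta, lo, hi)

# Hypothetical example: noisy gradient descent on f(theta) = |theta - target|^2 / 2,
# constrained to H = [0, 1]^2. Since target[0] = 1.5 lies outside H,
# the limit point (1.0, 0.5) sits on the boundary of H.
target = np.array([1.5, 0.5])
theta = np.array([0.2, 0.2])
for n in range(1, 20001):
    eps_n = 1.0 / n                        # decreasing step size, eps_n -> 0
    noise = rng.normal(scale=0.1, size=2)  # martingale-difference noise
    Y_n = (target - theta) + noise         # noisy observation of the drift
    unconstrained = theta + eps_n * Y_n    # the update in (1)
    theta = project(unconstrained, 0.0, 1.0)
    # The correction term eps_n * Z_n equals theta - unconstrained here.

print(theta)  # ends near [1.0, 0.5], on the boundary of H
```

With probability one the iterate is driven toward the boundary point, and it is the fluctuations of the centered, normalized iterate around that point whose limiting (reflected) diffusion the paper characterizes.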

MSC:

62L20 Stochastic approximation
60F17 Functional limit theorems; invariance principles
60F05 Central limit and other weak theorems
93E15 Stochastic stability in control theory
93E20 Optimal stochastic control
Full Text: DOI