Acceleration of stochastic approximation by averaging. (English) Zbl 0762.62022
A new recursive algorithm of stochastic approximation type, based on the idea of averaging the trajectories, is investigated. In contrast to classical optimal algorithms, this type of algorithm does not require a large amount of a priori information (e.g. the matrix of second derivatives at the zero point).
The asymptotic normality of the estimates is established and almost sure convergence is proved. It is also shown that the proposed algorithm achieves the highest possible rate of convergence. Essentially new proof techniques make it possible to substantially weaken the conditions of the theorems.
The linear case is discussed first, since the formulation of the results and the proofs are clearest there. A variety of classical optimization and identification problems is then treated, such as the nonlinear problem, stochastic optimization and, mainly, the estimation of regression parameters.
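The averaging idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact scheme: the step-size exponent, gain constant, and the quadratic test problem are all assumptions chosen for the example; the general form (a "slow" Robbins–Monro iteration whose trajectory is averaged) follows the review's description.

```python
import random

def averaged_sa(oracle, x0, n_steps, a=1.0, gamma=0.7):
    """Stochastic approximation with trajectory averaging.

    Runs a Robbins-Monro iteration with a slowly decreasing step
    a_k = a / k**gamma, gamma in (1/2, 1), and returns the running
    average of the iterates.  The averaged estimate attains the
    optimal asymptotic rate without requiring second-derivative
    (Hessian) information at the zero point.
    """
    x = x0
    avg = 0.0
    for k in range(1, n_steps + 1):
        x -= (a / k ** gamma) * oracle(x)   # noisy gradient/regression step
        avg += (x - avg) / k                # running mean of the trajectory
    return avg

# Hypothetical test problem: noisy observations g(x) = (x - 3) + noise,
# i.e. the gradient of f(x) = (x - 3)**2 / 2; the zero point is x* = 3.
random.seed(0)
noisy_grad = lambda x: (x - 3.0) + random.gauss(0.0, 1.0)
estimate = averaged_sa(noisy_grad, x0=0.0, n_steps=20000)
```

Note the design point the review emphasizes: the raw iterate uses a step size that decays slower than the classical 1/k, which alone would be suboptimal; averaging the trajectory then recovers the best achievable rate.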
Reviewer: T.Fiala (Praha)
MSC:
62L20 | Stochastic approximation |
60F15 | Strong limit theorems |
93E10 | Estimation and detection in stochastic control theory |
93E12 | Identification in stochastic control theory |