Distributed Hybrid Gradient Algorithm with Application to Cooperative Adaptive Estimation
Abstract
We address a classical identification problem that consists in estimating a vector of constant unknown parameters from a given linear input/output relationship. The proposed method relies on a network of gradient-descent-based estimators, each of which exploits only a portion of the input-output data. A key feature of the method is that the input-output signals are hybrid, so they may evolve in continuous time (i.e., they may flow), or they may change at isolated time instants (i.e., they may jump). The estimators are interconnected over a weakly-connected directed graph, and the alternation of flows and jumps, combined with the distributed character of the algorithm, introduces a rich behavior that is impossible to obtain using continuous- or discrete-time estimators. A condition of persistence of excitation in hybrid form ensures exponential convergence of the estimation errors. The proposed approach generalizes the existing centralized gradient-descent algorithms and yields relaxed sufficient conditions for (uniform-exponential) parameter estimation. In addition, we address the observation/identification problem for a class of hybrid systems with unknown parameters using a distributed network of adaptive observers/identifiers.
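The following is a minimal, purely illustrative sketch of the distributed gradient-descent idea described above, in a simplified discrete-time setting (the paper's algorithm is a hybrid flow/jump scheme over a directed graph, which this sketch does not reproduce). All names, gains, and the graph topology are assumptions made for illustration: each agent observes a scalar output `y_i = phi_i^T theta` through a regressor that excites only one coordinate, so no agent can identify `theta` alone, and the estimates converge only through the combination of local gradient steps and consensus with neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0, 0.5])     # unknown constant parameter vector
n_agents, n_params, steps = 4, 3, 3000
gamma, mu = 0.05, 0.2                  # gradient gain and consensus gain (chosen ad hoc)

# Weakly connected directed ring: agent i receives only agent (i-1)'s estimate.
neighbors = {i: [(i - 1) % n_agents] for i in range(n_agents)}

est = np.zeros((n_agents, n_params))   # theta_hat_i for each agent i
for _ in range(steps):
    new_est = est.copy()
    for i in range(n_agents):
        # Agent i's regressor excites a single coordinate: excitation is
        # only persistent *collectively*, across the whole network.
        phi = np.zeros(n_params)
        phi[i % n_params] = rng.standard_normal()
        y = phi @ theta                            # local scalar measurement
        grad = phi * (phi @ est[i] - y)            # gradient of 0.5*(phi^T th_i - y)^2
        consensus = sum(est[j] - est[i] for j in neighbors[i])
        new_est[i] = est[i] - gamma * grad + mu * consensus
    est = new_est

print(np.max(np.abs(est - theta)))     # maximum estimation error across agents
```

With the gradient term alone each agent could recover only one coordinate of `theta`; the consensus term propagates the remaining information around the ring, so all four estimates converge to the true parameter vector, mirroring the cooperative excitation idea in the abstract.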
Origin: Files produced by the author(s)