Decentralized approximate dynamic programming for dynamic networks of agents

H Lakshmanan, DP de Farias - 2006 American Control Conference, 2006 - ieeexplore.ieee.org
We consider control systems consisting of teams of agents operating in stochastic environments and communicating through a network with dynamic topology. An optimal centralized control policy can be derived from the Q-function associated with the problem. However, computing and storing the Q-function is intractable for systems of practical scale, and having a centralized policy may lead to prohibitive requirements on communication between agents. On the other hand, it has been shown that decentralized optimal control is NP-hard even in the case of small systems. Here we propose a general approach for decentralized control based on approximate dynamic programming. We consider approximations to the Q-function via local approximation architectures, which lead to decentralization of the task of choosing control actions and can be computed and stored efficiently. We propose and analyze an approximate dynamic programming approach for fitting the Q-function based on linear programming. We show that error bounds previously developed for cost-to-go function approximation via linear programming can be extended to the case of Q-function approximation. We then consider the problem of decentralizing the task of approximating the Q-function and show that it can be viewed as a resource allocation problem. Motivated by this observation, we propose a decentralized gradient-based algorithm for solving a class of resource allocation problems. Convergence of the algorithm is established, and its convergence rate, measured in terms of the number of iterations required for the magnitude of the gradient to approach zero, is shown to be O(n^2.5), where n is the number of agents in the network.
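To make the resource-allocation viewpoint concrete, the following is a minimal, self-contained sketch of a decentralized gradient scheme for a toy separable allocation problem: n agents minimize a sum of local quadratic costs subject to a shared budget constraint, using only pairwise exchanges along network edges. The cost coefficients, step size, and exchange rule here are illustrative assumptions for a toy instance, not the specific algorithm or the O(n^2.5) analysis of the paper.

```python
import numpy as np

def decentralized_resource_allocation(a, c, edges, budget, eta=0.05, iters=2000):
    """Toy decentralized gradient scheme (illustrative, not the paper's algorithm):
    minimize sum_i a_i * (x_i - c_i)^2  subject to  sum_i x_i = budget,
    where agent i knows only (a_i, c_i) and talks to its network neighbors."""
    n = len(a)
    x = np.full(n, budget / n)  # feasible start: split the budget equally
    for _ in range(iters):
        grad = 2.0 * a * (x - c)  # each agent's local gradient
        for i, j in edges:
            # Shift resource from the agent with the larger marginal cost to the
            # one with the smaller; each transfer keeps sum(x) exactly constant.
            d = eta * (grad[i] - grad[j])
            x[i] -= d
            x[j] += d
            grad = 2.0 * a * (x - c)  # refresh the two affected gradients
    return x

# Line network of 4 agents with hypothetical cost coefficients.
a = np.array([1.0, 2.0, 1.0, 4.0])
c = np.array([1.0, 2.0, 3.0, 4.0])
edges = [(0, 1), (1, 2), (2, 3)]
x = decentralized_resource_allocation(a, c, edges, budget=8.0)
```

At a stationary point the pairwise transfers vanish, so all local gradients are equal: this is exactly the optimality condition for a separable convex objective under a single coupling constraint, which is what lets each agent act on purely local and neighbor information.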