
A leader-follower partially observed, multiobjective Markov game. (English) Zbl 1358.91016

Summary: The intent of this research is to generate a set of non-dominated finite-memory policies from which one of two agents (the leader) can select a most preferred policy to control a dynamic system that is also affected by the control decisions of the other agent (the follower). The problem is described by an infinite horizon total discounted reward, partially observed Markov game (POMG). For each candidate finite-memory leader policy, we assume the follower, fully aware of the leader policy, determines a (perfect memory) policy that optimizes the follower’s (scalar) criterion. The leader-follower assumption allows the POMG to be transformed into a specially structured, partially observed Markov decision process that we use to determine the follower’s best response policy for a given leader policy. We then approximate the follower’s policy by a finite-memory policy. Each agent’s policy assumes that the agent knows its current and recent state values, its recent actions, and the current and recent possibly inaccurate observations of the other agent’s state. For each leader/follower policy pair, we determine the values of the leader’s criteria. We use a multi-objective genetic algorithm to create the next generation of leader policies based on the values of the leader criteria for each leader/follower policy pair in the current generation. Based on this information for the final generation of policies, we determine the set of non-dominated leader policies. We present an example that illustrates how these results can be used to support a manager of a liquid egg production process (the leader) in selecting a sequence of actions to maximize expected process productivity while mitigating the risk due to an attacker (the follower) who seeks to contaminate the process with a chemical or biological toxin.
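The final step described above — determining the set of non-dominated leader policies from the values of the leader's criteria for each leader/follower policy pair — can be sketched as a simple Pareto filter. This is an illustrative sketch only, not the paper's implementation: the policy labels and criterion values below are invented, and every criterion is assumed to be maximized.

```python
# Minimal Pareto-filter sketch (illustrative, not from the paper):
# given each candidate leader policy's vector of criterion values,
# keep only the non-dominated ones. All criteria are maximized.

def dominates(u, v):
    """u dominates v if u >= v in every criterion and > in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def non_dominated(policies):
    """policies: dict mapping policy id -> tuple of criterion values."""
    return {
        p: vals
        for p, vals in policies.items()
        if not any(dominates(other, vals)
                   for q, other in policies.items() if q != p)
    }

# Hypothetical example in the spirit of the liquid-egg application:
# criterion 1 = expected productivity, criterion 2 = negated contamination risk.
candidates = {
    "pi_1": (10.0, -0.30),
    "pi_2": (8.0, -0.10),
    "pi_3": (7.0, -0.40),   # dominated by pi_1 on both criteria
}
frontier = non_dominated(candidates)
# pi_1 and pi_2 remain on the frontier; pi_3 is dominated by pi_1.
```

In the paper this filter is applied to the final generation produced by the multi-objective genetic algorithm; NSGA-II-style solvers additionally use such non-dominated sorting inside each generation to rank candidates.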

MSC:

91A15 Stochastic games, stochastic differential games
90C40 Markov and semi-Markov decision processes
68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)

Software:

IRIS; NSGA-II

References:

[1] Aberdeen, D. A. (2003). A (revised) survey of approximate methods for solving partially observable Markov decision processes. Technical report, Research School of Information Science and Engineering, Australian National University.
[2] Bakir, N. O. (2011). A Stackelberg game model for resource allocation in cargo container security. Annals of Operations Research, 187, 5-22. · Zbl 1233.90057 · doi:10.1007/s10479-010-0793-z
[3] Bakir, N. O., & Kardes, K. (2009). A stochastic game model on overseas cargo container security. Non-published research reports, CREATE center, Paper 6.
[4] Basilico, N., Gatti, N., & Amigoni, F. (2009). Developing a deterministic patrolling strategy for security agents. In Proceedings of the IEEE/WIC/ACM international conference on intelligent agent technology (IAT) (pp. 565-572).
[5] Bean, J. C. (1994). Genetic algorithms and random keys for sequencing and optimization. ORSA Journal on Computing, 6, 154-160. · Zbl 0807.90060 · doi:10.1287/ijoc.6.2.154
[6] Becker, R., Zilberstein, S., Lesser, V., & Goldman, C. V. (2004). Solving transition independent decentralized Markov decision processes. Journal of Artificial Intelligence Research (JAIR), 22, 423-455. · Zbl 1080.68655
[7] Bernstein, D. S., Givan, R., Immerman, N., & Zilberstein, S. (2002). The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4), 819-840. · Zbl 1082.90593 · doi:10.1287/moor.27.4.819.297
[8] Bernstein, D. S., Hansen, E. A., & Zilberstein, S. (2005). Bounded policy iteration for decentralized POMDPs. In Proceedings of the nineteenth international joint conference on artificial intelligence (IJCAI) (pp. 1287-1292), Edinburgh.
[9] Bier, V. M., Oliveros, S., & Samuelson, L. (2007). Choosing what to protect: Strategic defensive allocation against an unknown attacker. Journal of Public Economic Theory, 9(4), 563-587. · doi:10.1111/j.1467-9779.2007.00320.x
[10] Bier, V. M., Haphuriwat, N., Menoyo, J., Zimmerman, R., & Culpen, A. M. (2008). Optimal resource allocation for defense of targets based on differing measures of attractiveness. Risk Analysis, 28(3), 763-770. · doi:10.1111/j.1539-6924.2008.01053.x
[11] Bopardikar, S. D., & Hespanha, J. P. (2011). Randomized solutions to partial information dynamic zero-sum games. In American control conference (ACC), San Francisco, CA.
[12] Bowman, M., Briand, L. C., & Labiche, Y. (2010). Solving the class responsibility assignment problem in object-oriented analysis with multi-objective genetic algorithms. IEEE Transactions on Software Engineering (TSE), 36(6), 817-837. · doi:10.1109/TSE.2010.70
[13] Cardoso, J. M. P., & Diniz, P. C. (2009). Game theory models of intelligent actors in reliability analysis: An overview of the state of the art. Game Theoretic Risk Analysis of Security Threats, International Series in Operations Research & Management Science, 128, 1-19. · Zbl 1173.91012
[14] Canu, A., & Mouaddib, A. I. (2011). Collective decision-theoretic planning for planet exploration. In Proceedings of international conference on tools with artificial intelligence.
[15] Cassandra, A. R., Kaelbling, L. P., & Littman, M. L. (1994). Acting optimally in partially observable stochastic domains. In Proceedings twelfth national conference on artificial intelligence (AAAI-94), Seattle, WA (pp. 1023-1028).
[16] Cassandra, A. R. (1994). Optimal policies for partially observable Markov decision processes. Technical report (CS-94-14). Providence, RI: Department of Computer Science, Brown University.
[17] Cassandra, A. R., Littman, M. L., & Zhang, N. L. (1997). Incremental pruning: A simple, fast, exact method for partially observable Markov decision processes. In Proceedings thirteenth annual conference on uncertainty in artificial intelligence (UAI-97), Morgan Kaufmann, San Francisco, CA (pp. 54-61). · Zbl 1131.90447
[18] Cavusoglu, H., & Kwark, Y. (2013). Passenger profiling and screening for aviation security in the presence of strategic attackers. Decision Analysis, 10(1), 63-81. · Zbl 1355.91024 · doi:10.1287/deca.1120.0258
[19] Chang, Y. (2015). A leader-follower partially observed Markov game. Ph.D. thesis. Atlanta: Georgia Institute of Technology (in preparation).
[20] Cheng, H. T. (1988). Algorithms for partially observable Markov decision processes. Ph.D. thesis. Vancouver, BC: University of British Columbia.
[21] Coello, C. A. C. (2000). An updated survey of GA-based multiobjective optimization techniques. ACM Computing Survey, 32(2), 109-143. · doi:10.1145/358923.358929
[22] Corne, D. W., Knowles, J. D., & Oates, M. J. (2000). The Pareto envelope-based selection algorithm for multiobjective optimization. In Proceedings of the parallel problem solving from nature VI conference (Vol. 1917, pp. 839-848).
[23] Deb, K. (2001). Nonlinear goal programming using multi-objective genetic algorithms. Journal of the Operational Research Society, 52(3), 291-302. · Zbl 1131.90447 · doi:10.1057/palgrave.jors.2601089
[24] Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197. · doi:10.1109/4235.996017
[25] Delle Fave, F. M., Jiang, A. X., Yin, Z., Zhang, C., Tambe, M., Kraus, S., & Sullivan, J. P. (2014). Game-theoretic security patrolling with dynamic execution uncertainty and a case study on a real transit system. Journal of Artificial Intelligence Research, 50, 321-367. · Zbl 1364.93013
[26] Doshi, P. (2012). Decision making in complex multiagent contexts: A tale of two frameworks. AI Magazine, 33(4), 82-95.
[27] Eagle, J. N. (1984). The optimal search for a moving target when the search path is constrained. Operations Research, 32(5), 1107-1115. · Zbl 0562.90035 · doi:10.1287/opre.32.5.1107
[28] Emery-Montemerlo, R., Gordon, G., Schneider, J., & Thrun, S. (2004). Approximate solutions for partially observable stochastic games with common payoffs. In Proceedings of the third international joint conference on autonomous agents and multi-agent systems (AAMAS) (pp. 136-143).
[29] Feng, Z., & Zilberstein, S. (2004). Region-based incremental pruning for POMDPs. In Proceedings of the twentieth conference on uncertainty in artificial intelligence (UAI-04). San Francisco: Morgan Kaufmann (pp. 146-153).
[30] Filar, J., & Vrieze, K. (1997). Competitive Markov decision processes. Heidelberg: Springer. · Zbl 0934.91002
[31] Forrest, S. (1993). Genetic algorithms: Principles of natural selection applied to computation. Science, 261, 872-878. · doi:10.1126/science.8346439
[32] Ghosh, M. K., McDonald, D., & Sinha, S. (2004). Zero-sum stochastic games with partial information. Journal of Optimization Theory and Applications, 121(1), 99-118. · Zbl 1140.91305 · doi:10.1023/B:JOTA.0000026133.56615.cf
[33] Gmytrasiewicz, P. J., & Doshi, P. (2005). A framework for sequential planning in multi-agent settings. Journal of Artificial Intelligence Research, 24, 49-79. · Zbl 1080.68664
[34] Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley. · Zbl 0721.68056
[35] Hansen, E. A. (1998a). An improved policy iteration algorithm for partially observable MDPs. In Advances in neural information processing systems (NIPS-97) (Vol. 10, pp. 1015-1021). Cambridge, MA: MIT Press.
[36] Hansen, E. A. (1998b). Solving POMDPs by searching in policy space. In Proceedings of uncertainty in artificial intelligence (Vol. 10, pp. 211-219).
[37] Hansen, E. A., Bernstein, D. S., & Zilberstein, S. (2004). Dynamic programming for partially observable stochastic games. In Proceedings of the nineteenth national conference on artificial intelligence (pp. 709-715), San Jose, CA.
[38] Hausken, K., & Zhuang, J. (2011). Governments’ and terrorists’ defense and attack in a T-period game. Decision Analysis, 8(1), 46-70. · Zbl 1398.91368 · doi:10.1287/deca.1100.0194
[39] Hauskrecht, M. (1997). Planning and control in stochastic domains with imperfect information. Ph.D. thesis. Massachusetts Institute of Technology.
[40] Hauskrecht, M. (2000). Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13, 33-94. · Zbl 0946.68131
[41] Hespanha, J. P., & Prandini, M. (2001). Nash equilibria in partial-information games on Markov chains. In IEEE conference on decision and control (pp. 2102-2107), Orlando, FL.
[42] Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press. Reprinted in 1992 by MIT Press, Cambridge, MA.
[43] Holloway, H., & White, C. C. (2008). Question selection and resolvability for imprecise multi-attribute alternative selection. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 38(1), 162-169. · doi:10.1109/TSMCA.2007.909547
[44] Horn, J., Nafpliotis, N., & Goldberg, D. E. (1994). A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the 1st IEEE conference on evolutionary computation, IEEE World Congress on Computational Intelligence (Vol. 1, pp. 82-87), Orlando, FL.
[45] Kandori, M., & Obara, I. (2010). Towards a belief-based theory of repeated games with private monitoring: An application of POMDP. Manuscript.
[46] Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value trade-offs. Cambridge: Cambridge University Press. · Zbl 0488.90001 · doi:10.1017/CBO9781139174084
[47] Konak, A., Coit, D. W., & Smith, A. E. (2006). Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering and System Safety, 91, 992-1007. · doi:10.1016/j.ress.2005.11.018
[48] Kumar, A., & Zilberstein, S. (2009). Dynamic programming approximations for partially observable stochastic games. In Proceedings of the twenty-second international FLAIRS conference (pp. 547-552), Sanibel Island, FL.
[49] Letchford, J., Macdermed, L., Conitzer, V., Parr, R., & Isbell, C. L. (2012). Computing Stackelberg strategies in stochastic games. ACM SIGecom Exchanges, 11(2), 36-40. · doi:10.1145/2509002.2509011
[50] Lin, A. Z.-Z., Bean, J., & White, C. C. (1998). Genetic algorithm heuristics for finite horizon partially observed Markov decision problems. Technical report. Ann Arbor: University of Michigan.
[51] Lin, A. Z.-Z., Bean, J., & White, C. C. (2004). A hybrid genetic/optimization algorithm for finite horizon partially observed Markov decision processes. INFORMS Journal on Computing, 16(1), 27-38. · Zbl 1239.90102
[52] Lin, C. M., & Gen, M. (2008). Multi-criteria human resource allocation for solving multistage combinatorial optimization problems using multiobjective hybrid genetic algorithm. Expert Systems with Applications, 34(4), 2480-2490. · doi:10.1016/j.eswa.2007.04.016
[53] Littman, M. L. (1994a). The witness algorithm: Solving partially observable Markov decision processes. Technical report CS-94-40. Department of Computer Science, Brown University.
[54] Littman, M. L. (1994b). Memoryless policies: Theoretical limitations and practical results. In Proceedings of the third international conference on simulation of adaptive behavior: From animals to animats (pp. 238-245).
[55] Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observed Markov decision processes. Annals of Operations Research, 28(1), 47-65. · Zbl 0717.90086 · doi:10.1007/BF02055574
[56] Manning, L., Baines, R., & Chadd, S. (2005). Deliberate contamination of the food supply chain. British Food Journal, 107(4), 225-245. · doi:10.1108/00070700510589512
[57] McEneaney, W. M. (2004). Some classes of imperfect information finite state-space stochastic games with finite-dimensional solutions. Applied Mathematics and Optimization, 50(2), 87-118. · Zbl 1101.91005 · doi:10.1007/s00245-004-0793-y
[58] Monahan, G. E. (1982). A survey of partially observable Markov decision processes: Theory, models, and algorithms. Management Science, 28(1), 1-16. · Zbl 0486.90084 · doi:10.1287/mnsc.28.1.1
[59] Mohtadi, H., & Murshid, A. P. (2009). Risk analysis of chemical, biological, or radionuclear threats: Implications for food security. Risk Analysis, 29, 1317-1335. · doi:10.1111/j.1539-6924.2009.01260.x
[60] Naser-Moghadasi, M. (2012). Evaluating effects of two alternative filters for the incremental pruning algorithm on quality of POMDP exact solutions. International Journal of Intelligence Science, 2(1), 1-8. · doi:10.4236/ijis.2012.21001
[61] Oliehoek, F. A., Spaan, M. T. J., & Vlassis, N. (2005). Best-response play in partially observable card game. In Proceedings of the 14th annual machine learning conference of Belgium and the Netherlands (pp. 45-50).
[62] Oliehoek, F. A., Spaan, M. T. J., & Vlassis, N. (2008). Optimal and approximate Q-value functions for decentralized POMDPs. Journal of Artificial Intelligence Research, 32, 289-353. · Zbl 1182.68261
[63] Oliehoek, F. A. (2012). Decentralized POMDPs. In M. Wiering & M. van Otterlo (Eds.), Reinforcement learning: State-of-the-art (pp. 471-503). Berlin: Springer. · doi:10.1007/978-3-642-27645-3_15
[64] Ombuki, B., Ross, B. J., & Hanshar, F. (2006). Multi-objective genetic algorithms for vehicle routing problem with time windows. Applied Intelligence, 24, 17-30. · doi:10.1007/s10489-006-6926-z
[65] O’Ryan, M., Djuretic, T., Wall, P., Nichols, G., Hennessy, T., Slutsker, L., et al. (1996). An outbreak of salmonella infection from ice cream. New England Journal of Medicine, 335(11), 824-825. · doi:10.1056/NEJM199609123351118
[66] Paruchuri, P., Tambe, M., Ordonez, F., & Kraus, S. (2004). Towards a formalization of teamwork with resource constraints. In International joint conference on autonomous agents and multiagent systems (pp. 596-603).
[67] Pineau, J., Gordon, G. J., & Thrun, S. (2003). Point-based value iteration: An anytime algorithm for POMDPs. In International joint conference on artificial intelligence (pp. 1025-1032).
[68] Pita, J., Jain, M., Ordonez, F., Portway, C., Tambe, M., Western, C., et al. (2009). Using game theory for Los Angeles airport security. AI Magazine, 43-57.
[69] Platzman, L. K. (1977). Finite memory estimation and control of finite probabilistic systems. Ph.D. thesis. Cambridge, MA: Massachusetts Institute of Technology.
[70] Platzman, L. K. (1980). Optimal infinite-horizon undiscounted control of finite probabilistic systems. SIAM Journal on Control and Optimization, 18, 362-380. · Zbl 0447.49012 · doi:10.1137/0318028
[71] Ponnambalam, S. G., Ramkumar, V., & Jawahar, N. (2001). A multiobjective genetic algorithm for job shop scheduling. Production Planning & Control: The Management of Operations, 12(8), 764-774. · doi:10.1080/09537280110040424
[72] Poupart, P., & Boutilier, C. (2004). Bounded finite state controllers. In Advances in neural information processing systems (NIPS) 16: Proceedings of the 2003 conference. MIT Press.
[73] Poupart, P. (2005). Exploiting structure to efficiently solve large scale partially observable Markov decision processes. Ph.D. thesis. Department of Computer Science, University of Toronto.
[74] Puterman, M. L. (1994). Markov decision processes: Discrete stochastic dynamic programming. New York: Wiley. · Zbl 0829.90134 · doi:10.1002/9780470316887
[75] Rabinovich, Z., Goldman, C. V., & Rosenschein, J. S. (2003). The complexity of multiagent systems: The price of silence. In Proceedings of the second international joint conference on autonomous agents and multi-agent systems (AAMAS) (pp. 1102-1103), Melbourne.
[76] Raghavan, T. E. S., & Filar, J. A. (1991). Algorithms for stochastic games—A survey. Methods and Models of Operations Research, 35, 437-472. · Zbl 0736.90082 · doi:10.1007/BF01415989
[77] Rothschild, C., McLay, L., & Guikema, S. (2012). Adversarial risk analysis with incomplete information: A level-k approach. Risk Analysis, 32(7), 1219-1231. · doi:10.1111/j.1539-6924.2011.01701.x
[78] Schaffer, J. D. (1985). Multiple objective optimisation with vector evaluated genetic algorithm. In Proceedings of the 1st international conference on genetic algorithms (pp. 93-100). San Mateo: Morgan Kaufmann.
[79] Seuken, S., & Zilberstein, S. (2007). Improved memory-bounded dynamic programming for decentralized POMDPs. In Proceedings of the 23rd conference on uncertainty in artificial intelligence, Vancouver.
[80] Shani, G., Pineau, J., & Kaplow, R. (2013). A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems, 27(1), 1-51. · doi:10.1007/s10458-012-9200-2
[81] Shapley, L. S. (1953). Stochastic games. Proceedings of the National Academy of Sciences of the U.S.A., 39, 1095-1100. · Zbl 0051.35805 · doi:10.1073/pnas.39.10.1095
[82] Smallwood, R. D., & Sondik, E. J. (1973). The optimal control of partially observable Markov decision processes over a finite horizon. Operations Research, 21(5), 1071-1088. · Zbl 0275.93059 · doi:10.1287/opre.21.5.1071
[83] Sobel, J., Khan, A., & Swerdlow, D. (2002). Threat of a biological terrorist attack on the US food supply: The CDC perspective. The Lancet, 359(9309), 874-880. · doi:10.1016/S0140-6736(02)07947-3
[84] Sondik, E. J. (1978). The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs. Operations Research, 26(2), 282-304. · Zbl 0379.60067 · doi:10.1287/opre.26.2.282
[85] Srinivas, M., & Patnaik, L. M. (1994). Genetic algorithms: A survey. IEEE Computer, 27(6), 17-26. · doi:10.1109/2.294849
[86] Tsai, J., Rathi, S., Kiekintveld, C., Ordonez, F., & Tambe, M. (2009). IRIS: A tool for strategic security allocation in transportation networks. Non-published research report, Paper 71, CREATE Research Archive.
[87] Ummels, M. (2010). Stochastic multiplayer games: Theory and algorithms. Ph.D. thesis. RWTH Aachen University.
[88] Vorobeychik, Y., & Singh, S. (2012). Computing Stackelberg equilibria in discounted stochastic games. In Twenty-sixth national conference on artificial intelligence.
[89] Vorobeychik, Y., An, B., & Tambe, M. (2012). Adversarial patrolling games. In AAAI spring symposium on security, sustainability, and health.
[90] Vorobeychik, Y., An, B., Tambe, M., & Singh, S. (2014). Computing solutions in infinite-horizon discounted adversarial patrolling games. In International conference on automated planning and scheduling.
[91] Wang, C., & Bier, V. M. (2011). Target-hardening decisions based on uncertain multiattribute terrorist utility. Decision Analysis, 8(4), 286-302. · Zbl 1398.91068 · doi:10.1287/deca.1110.0218
[92] White, C. C. (1991). A survey of solution techniques for the partially observed Markov decision process. Annals of Operations Research, 32(1), 215-230. · Zbl 0727.90089 · doi:10.1007/BF02204836
[93] White, C. C., & Scherer, W. T. (1989). Solution procedures for partially observed Markov decision processes. Operations Research, 37(5), 791-797. · Zbl 0684.90099 · doi:10.1287/opre.37.5.791
[94] White, C. C., & Scherer, W. T. (1994). Finite-memory suboptimal design for partially observed Markov decision processes. Operations Research, 42(3), 439-455. · Zbl 0833.90123 · doi:10.1287/opre.42.3.439
[95] Yildirim, M. B., & Mouzon, G. (2012). Single-machine sustainable production planning to minimize total energy consumption and total completion time using a multiple objective genetic algorithm. IEEE Transactions on Engineering Management, 59(4), 585-597. · doi:10.1109/TEM.2011.2171055
[96] Yu, H. (2007). Approximation solution methods for partially observable Markov and semi-Markov decision processes. Ph.D. thesis. Cambridge, MA: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
[97] Zhang, H. (2010). Partially observable Markov decision processes: A geometric technique and analysis. Operations Research, 58(1), 214-228. · Zbl 1226.90131 · doi:10.1287/opre.1090.0697
[98] Zhang, Y. (2013). Contributions in supply chain risk assessment and mitigation. Ph.D. thesis. Georgia Institute of Technology.
[99] Zhuang, J., & Bier, V. M. (2007). Balancing terrorism and natural disasters defensive strategy with endogenous attacker effort. Operations Research, 55(5), 976-991. · Zbl 1167.91331 · doi:10.1287/opre.1070.0434