Abstract
Message passing provides a powerful communication abstraction in both distributed and shared-memory environments. It is particularly effective at preventing problems arising from shared state, such as data races, because it avoids sharing altogether. Message passing is less effective when concurrent access to large amounts of data is needed, as the overhead of messaging may be prohibitive. In shared-memory environments, this issue can be alleviated by supporting direct access to shared data; but then ensuring proper synchronization again becomes the dominant problem. This paper proposes a safe and efficient approach to data sharing in message-passing concurrency models, based on the idea of distinguishing active and passive computational units. Passive units have no execution capabilities of their own but offer active units exclusive and direct access to the data they encapsulate. Access is transparent thanks to a single primitive for both data access and message passing. This distinction between active and passive units makes additional infrastructure for shared data unnecessary. The concept is applied to SCOOP, an object-oriented concurrency model, where it reduces execution time on data-intensive parallel programs by several orders of magnitude.
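The active/passive distinction can be illustrated with a minimal sketch. This is not SCOOP itself (SCOOP is an Eiffel-based model); Python threading stands in for the runtime, and the names `ActiveUnit` and `PassiveUnit` are hypothetical. An active unit owns a thread and processes messages from a queue; a passive unit owns no thread and instead grants an active unit exclusive, direct access to its data, avoiding the cost of copying large data into messages:

```python
import threading
import queue

class PassiveUnit:
    """Passive unit (sketch): no thread of its own; encapsulates data
    and grants exclusive, direct access to it via an internal lock."""
    def __init__(self, data):
        self._lock = threading.Lock()
        self._data = data

    def __enter__(self):
        self._lock.acquire()   # exclusive access for one active unit
        return self._data

    def __exit__(self, *exc):
        self._lock.release()

class ActiveUnit(threading.Thread):
    """Active unit (sketch): owns a thread and processes requests
    from its message queue, one at a time."""
    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()
        self.start()

    def run(self):
        while True:
            job = self.inbox.get()
            if job is None:      # sentinel: shut the unit down
                break
            job()

    def send(self, job):
        self.inbox.put(job)

# A passive unit holding a block of data; the active unit mutates it
# in place under exclusive access instead of messaging copies around.
shared = PassiveUnit(list(range(8)))
worker = ActiveUnit()
done = threading.Event()

def increment_all():
    with shared as data:         # exclusive, direct access
        for i in range(len(data)):
            data[i] += 1
    done.set()

worker.send(increment_all)
done.wait()
worker.send(None)

with shared as data:
    print(data)                  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

In the paper's model the analogue of the `with` block is handled by a single language primitive covering both message passing and data access, so the programmer does not manage locks explicitly; the sketch above only mimics the resulting exclusivity guarantee.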
Copyright information
© 2014 IFIP International Federation for Information Processing
Cite this paper
Morandi, B., Nanz, S., Meyer, B. (2014). Safe and Efficient Data Sharing for Message-Passing Concurrency. In: Kühn, E., Pugliese, R. (eds) Coordination Models and Languages. COORDINATION 2014. Lecture Notes in Computer Science(), vol 8459. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-43376-8_7
DOI: https://doi.org/10.1007/978-3-662-43376-8_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-43375-1
Online ISBN: 978-3-662-43376-8
eBook Packages: Computer Science, Computer Science (R0)