Extended Abstract (Public Access)

Embodied Human-Computer Interactions through Situated Grounding

Published: 19 October 2020

Abstract

In this paper, we introduce a simulation platform for modeling and building Embodied Human-Computer Interactions (EHCI). This system, VoxWorld, is a multimodal dialogue system enabling communication through language, gesture, action, facial expressions, and gaze tracking in the context of task-oriented interactions. A multimodal simulation is an embodied 3D virtual realization of both the situational environment and the co-situated agents, as well as the most salient content denoted by communicative acts in a discourse. It is built on the modeling language VoxML [7], which encodes objects with rich semantic typing and action affordances, and actions themselves as multimodal programs, enabling contextually salient inferences and decisions in the environment. VoxWorld enables embodied HCI by situating both human and computational agents within the same virtual simulation environment, where they share perceptual and epistemic common ground.
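
As a rough sketch of the kind of encoding the abstract describes (objects carrying rich semantic typing and affordances, and actions treated as executable multimodal programs), the hypothetical Python structures below illustrate the idea. The class and field names and the put stub are invented for this illustration and do not reproduce VoxML's actual markup; see [7] for the real specification.

```python
# Illustrative sketch only: hypothetical data structures approximating what a
# VoxML-style entry ("voxeme") encodes: semantic typing, habitats, and
# affordances for objects, and actions as programs the simulation can run.
# All names here are invented for illustration; see [7] for the actual VoxML spec.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ObjectEntry:
    """An object encoding: what the object is and what behaviors it affords."""
    lex: str                    # lexical anchor, e.g. "cup"
    semantic_type: str          # rich type, e.g. a concave, graspable physical object
    habitats: Dict[str, str]    # configurations that enable particular actions
    affordances: List[str]      # behaviors the object affords, e.g. "grasp", "fill"


@dataclass
class ActionEntry:
    """An action encoding treated as a program executed in the 3D environment."""
    lex: str                    # e.g. "put"
    args: List[str]             # typed argument slots
    body: Callable[..., None]   # the procedure that realizes the action


def put(agent: str, obj: ObjectEntry, location: str) -> None:
    # Stub standing in for the multimodal program that would drive the
    # simulated grasp-move-release sequence in the virtual environment.
    print(f"{agent} places {obj.lex} {location}")


cup = ObjectEntry(
    lex="cup",
    semantic_type="physical object, concave, graspable",
    habitats={"upright": "opening facing up (enables filling)"},
    affordances=["grasp", "fill", "put"],
)

put_action = ActionEntry(lex="put", args=["agent", "object", "location"], body=put)
put_action.body("agent1", cup, "on(table)")   # -> "agent1 places cup on(table)"
```

Encodings of this kind are what allow the human and computational agents in VoxWorld to reason over the same situated objects and actions, which underlies the shared perceptual and epistemic common ground described above.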

References

[1] Yiannis Gatsoulis, Muhannad Alomari, Chris Burbridge, Christian Dondrup, Paul Duckworth, Peter Lightbody, Marc Hanheide, Nick Hawes, D. C. Hogg, A. G. Cohn, et al. 2016. QSRlib: A Software Library for Online Acquisition of Qualitative Spatial Relations from Video.
[2] Nikhil Krishnaswamy, Scott Friedman, and James Pustejovsky. 2019. Combining Deep Learning and Qualitative Spatial Reasoning to Learn Complex Structures from Sparse Examples with Noise. In 33rd AAAI Conference on Artificial Intelligence. AAAI.
[3] Nikhil Krishnaswamy and James Pustejovsky. 2016. VoxSim: A Visual Platform for Modeling Motion Language. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. ACL.
[4] David G. McNeely-White, Francisco R. Ortega, J. Ross Beveridge, Bruce A. Draper, Rahul Bangar, Dhruva Patil, James Pustejovsky, Nikhil Krishnaswamy, Kyeongmin Rim, Jaime Ruiz, and Isaac Wang. 2019. User-Aware Shared Perception for Embodied Agents. In 2019 IEEE International Conference on Humanized Computing and Communication (HCC). IEEE, 46-51.
[5] Pradyumna Narayana, Nikhil Krishnaswamy, Isaac Wang, Rahul Bangar, Dhruva Patil, Gururaj Mulay, Kyeongmin Rim, Ross Beveridge, Jaime Ruiz, James Pustejovsky, and Bruce Draper. 2018. Cooperating with Avatars Through Gesture, Language and Action. In Intelligent Systems Conference (IntelliSys).
[6] James Pustejovsky. 1995. The Generative Lexicon. MIT Press, Cambridge, MA.
[7] James Pustejovsky and Nikhil Krishnaswamy. 2016. VoxML: A Visualization Modeling Language. In Proceedings of LREC 2016.



    Published In

    IVA '20: Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents
    October 2020
    394 pages
    ISBN:9781450375863
    DOI:10.1145/3383652
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. multimodal embodiment
    2. simulation
    3. situated grounding
    4. virtual agent

    Qualifiers

    • Extended-abstract
    • Research
    • Refereed limited

    Conference

IVA '20: ACM International Conference on Intelligent Virtual Agents
October 20-22, 2020
Virtual Event, Scotland, UK

    Acceptance Rates

    Overall Acceptance Rate 53 of 196 submissions, 27%


Cited By

    • (2024) Point Target Detection for Multimodal Communication. Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, 356-373. DOI: 10.1007/978-3-031-61060-8_25. Online publication date: 1 June 2024.
    • (2023) A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. Companion Publication of the 25th International Conference on Multimodal Interaction, 164-173. DOI: 10.1145/3610661.3616548. Online publication date: 9 October 2023.
    • (2021) Shifting from face-to-face learning to Zoom online teaching, research, and internship supervision in a technologically developing ‘female students’ university in Pakistan: A psychology teacher’s and students’ perspective. Psychology Teaching Review 27(1), 42-55. DOI: 10.53841/bpsptr.2021.27.1.42. Online publication date: 1 January 2021.
    • (2021) Embodied Human Computer Interaction. KI - Künstliche Intelligenz 35(3-4), 307-327. DOI: 10.1007/s13218-021-00727-5. Online publication date: 16 September 2021.
    • (2021) The Role of Embodiment and Simulation in Evaluating HCI: Theory and Framework. Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Human Body, Motion and Behavior, 288-303. DOI: 10.1007/978-3-030-77817-0_21. Online publication date: 24 July 2021.
