Guessing human intentions to avoid dangerous situations in caregiving robots

This paper is a preprint and has not been certified by peer review.

Authors

Noé Zapata, Gerardo Pérez, Lucas Bonilla, Pedro Núñez, Pilar Bachiller, Pablo Bustos

Abstract

For robots to interact socially, they must accurately interpret human intentions and anticipate their potential outcomes. This is particularly important for social robots designed for human care, which may face situations that are potentially dangerous for people, such as unseen obstacles in their path, and which should be avoided. This paper explores the Artificial Theory of Mind (ATM) approach to inferring and interpreting human intentions. We propose an algorithm that detects risky situations for humans and selects, in real time, a robot action that removes the danger. We use a simulation-based approach to ATM and adopt the 'like-me' policy to assign intentions and actions to people. Using this strategy, the robot can detect and act with a high success rate under time-constrained situations. The algorithm has been implemented as part of an existing robotics cognitive architecture and tested in three experiments that assess the implementation's robustness, precision and real-time response: a simulated scenario, a human-in-the-loop hybrid configuration and a real-world scenario.
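The core idea of the abstract, simulating a human's likely trajectory under a 'like-me' policy and checking it against hazards, can be illustrated with a minimal sketch. All names here (Agent, simulate_path, detect_danger, the step counts and radii) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: predict a human's path toward an inferred goal
# ('like-me' assumption: the human plans as the robot would), then scan
# the predicted path for collisions with obstacles the human cannot see.
from dataclasses import dataclass


@dataclass
class Agent:
    x: float
    y: float
    goal_x: float
    goal_y: float


def simulate_path(agent, steps=20):
    """'Like-me' policy: assume the human moves straight toward the goal."""
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        path.append((agent.x + t * (agent.goal_x - agent.x),
                     agent.y + t * (agent.goal_y - agent.y)))
    return path


def detect_danger(path, obstacles, radius=0.5):
    """Return the first predicted collision point, or None if the path is safe."""
    for px, py in path:
        for ox, oy in obstacles:
            if (px - ox) ** 2 + (py - oy) ** 2 < radius ** 2:
                return (px, py)
    return None


# Usage: a human walking toward a goal with an unseen obstacle en route.
human = Agent(x=0.0, y=0.0, goal_x=4.0, goal_y=0.0)
obstacles = [(2.0, 0.1)]
danger = detect_danger(simulate_path(human), obstacles)
if danger is not None:
    print(f"Danger predicted near {danger}; robot should intervene")
```

A real implementation would replace the straight-line predictor with the robot's own planner run from the human's pose, which is what makes the policy 'like-me'.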
