INTERACT: Intuitive Interaction for Robots among Humans
In this project, the interaction between mobile robots and humans is key. This interaction is considered at multiple spatio-temporal granularities, ranging from individual human-robot interactions to the macro-level interaction of a robot fleet with humans, and from short-term (local) to long-term (global) effects of the interaction.
INTERACT will lay the foundation for intuitive multi-robot interaction, making it possible for teams of
mobile robots to interact safely in human-centric environments and enabling a new level of automation in factories and cities.
People
Saray Bakker
Andreu Matoses Gimenez
Dr. Clarence Chen
Prof. Javier Alonso-Mora
Key collaborators: Prof. Wendelin Bohmer
Funding
This project is funded by the ERC Starting Grant "Intuitive Interaction for Robots among Humans (INTERACT)".
Partners
Amsterdam Institute for Advanced Metropolitan Solutions (AMS).
Publications
J21 W. Schwarting, A. Pierson, J. Alonso-Mora, S. Karaman, D. Rus; Social behavior for autonomous vehicles; Proceedings of the National Academy of Sciences USA (PNAS), Nov. 2019
Abstract: Deployment of autonomous vehicles on public roads promises increased efficiency and safety. It requires understanding the intent of human drivers and adapting to their driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. We integrate tools from social psychology into autonomous-vehicle decision making to quantify and predict the social behavior of other drivers and to behave in a socially compliant way. A key component is Social Value Orientation (SVO), which quantifies the degree of an agent’s selfishness or altruism, allowing us to better predict how the agent will interact and cooperate with others. We model interactions between agents as a best-response game wherein each agent negotiates to maximize their own utility. We solve the dynamic game by finding the Nash equilibrium, yielding an online method of predicting multiagent interactions given their SVOs. This approach allows autonomous vehicles to observe human drivers, estimate their SVOs, and generate an autonomous control policy in real time. We demonstrate the capabilities and performance of our algorithm in challenging traffic scenarios: merging lanes and unprotected left turns. We validate our results in simulation and on human driving data from the NGSIM dataset. Our results illustrate how the algorithm’s behavior adapts to social preferences of other drivers. By incorporating SVO, we improve autonomous performance and reduce errors in human trajectory predictions by 25%.
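The sketch below illustrates the core idea described in the abstract: each agent maximizes an SVO-weighted utility (its own reward weighted by the cosine of the SVO angle, the other agent's reward weighted by the sine), and alternating best responses are used to reach a Nash equilibrium of the resulting game. It is a minimal illustration, not the paper's implementation: the discrete action set, the reward function, and the toy merging scenario are assumptions made for this example.

# Illustrative sketch only (not the paper's implementation): shows how a
# Social Value Orientation (SVO) angle trades off an agent's own reward
# against another agent's reward, and how iterated best response reaches a
# pure-strategy Nash equilibrium of the resulting game. The action set and
# reward function below are invented for a toy merging scenario.
import math

# Hypothetical longitudinal accelerations [m/s^2] each agent can choose from.
ACTIONS = [-2.0, 0.0, 2.0]

def reward(a_self, a_other):
    # Invented reward: prefer making progress (accelerating), but penalize
    # the collision risk when both agents accelerate into the merge point.
    progress = a_self
    risk = 3.0 if (a_self > 0 and a_other > 0) else 0.0
    return progress - risk

def svo_utility(svo, r_self, r_other):
    # SVO-weighted utility: cos(svo) weights the agent's own reward,
    # sin(svo) weights the other agent's reward.
    # svo = 0 is egoistic; svo = pi/4 weights both rewards equally (prosocial).
    return math.cos(svo) * r_self + math.sin(svo) * r_other

def best_response(svo, a_other):
    # Pick the action maximizing the SVO-weighted utility while holding the
    # other agent's action fixed.
    return max(ACTIONS,
               key=lambda a: svo_utility(svo, reward(a, a_other), reward(a_other, a)))

def iterated_best_response(svo_1, svo_2, iters=20):
    # Alternate best responses; a fixed point of this loop is a pure-strategy
    # Nash equilibrium of the static two-player game defined above.
    a1, a2 = 0.0, 0.0
    for _ in range(iters):
        a1 = best_response(svo_1, a2)
        a2 = best_response(svo_2, a1)
    return a1, a2

if __name__ == "__main__":
    egoistic = 0.0            # cares only about its own reward
    prosocial = math.pi / 4   # weights own and other's reward equally
    # In this toy game the egoistic agent accelerates and the prosocial one yields.
    print(iterated_best_response(egoistic, prosocial))

In the full approach described in the abstract, the SVO angles of human drivers are estimated online from observed trajectories rather than fixed by hand, and the game is solved over vehicle dynamics instead of a single discrete choice.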