Perception and Planning for Mobile Manipulation in Changing Environments

IROS 2025 Workshop

8:50 - 13:00, Oct. 20, 2025

Room 309, Hangzhou International Expo Center, Hangzhou, China

About the Workshop

Human workers in factories, warehouses, and hospitality settings effortlessly perceive their surroundings, adapt to continuous changes, and adjust their actions based on new assignments or environmental modifications. In contrast, mobile robots often struggle with real-time perception and task and motion planning in these changing environments, limiting their ability to function effectively in real-world scenarios.

To advance mobile manipulation, robots must continuously perceive environmental changes (shifted objects, human and robot activity, unforeseen obstacles) and update their task and motion plans accordingly. However, most existing research assumes relatively static environments, which limits adaptability in practical applications.

Key challenges remain in efficiently modeling environments that change over time and in updating task and motion plans in real time as those changes occur.

This workshop will explore techniques for efficient environment modeling and real-time task and motion planning in dynamic environments. We will also discuss how perception and planning can be tightly integrated to improve the adaptability and robustness of mobile manipulation systems in real-world settings.

Schedule

Time          | Event                        | Comments
8:50 - 9:00   | Opening                      |
9:00 - 9:30   | Speaker 1: Abhinav Valada*   |
9:30 - 10:00  | Speaker 2: Stefan Leutenegger* |
10:00 - 10:15 | Spotlight Talks              | 1-min lightning talks
10:15 - 10:45 | Poster Session               |
10:45 - 11:00 | Coffee Break                 |
11:00 - 11:30 | Speaker 3: Niko Suenderhauf* |
11:30 - 12:00 | Speaker 4: Renaud Detry*     |
12:00 - 12:15 | Award and Closing Remarks    |
12:15 - 13:00 | Networking and Lunch         |

* The order of the speakers might change. The organizers reserve the right to make changes to the program as needed.

Invited Speakers

* Listed alphabetically by first name.

Speaker 1

Abhinav Valada

Professor, University of Freiburg

Abhinav Valada is a Full Professor (W3) at the University of Freiburg, where he directs the Robot Learning Lab. He is a member of the Department of Computer Science, the BrainLinks-BrainTools center, and a founding faculty member of the ELLIS unit Freiburg. Abhinav is a DFG Emmy Noether AI Fellow, a Scholar of the ELLIS Society, an IEEE Senior Member, and Chair of the IEEE Robotics and Automation Society Technical Committee on Robot Learning. He received his PhD (summa cum laude) working with Prof. Wolfram Burgard at the University of Freiburg in 2019, his MS in Robotics from Carnegie Mellon University in 2013, and his B.Tech. in Electronics and Instrumentation Engineering from VIT University in 2010. After his PhD, he worked as a postdoctoral researcher and subsequently as an Assistant Professor (W1) from 2020 to 2023. He co-founded Platypus LLC, a company developing autonomous robotic boats in Pittsburgh, and served as its Director of Operations from 2013 to 2015; he previously worked at the National Robotics Engineering Center and the Field Robotics Center of Carnegie Mellon University from 2011 to 2014. Abhinav's research lies at the intersection of robotics, machine learning, and computer vision, with a focus on tackling fundamental robot perception, state estimation, and planning problems to enable robots to operate reliably in complex and diverse domains. The overall goal of his research is to develop scalable lifelong robot learning systems that continuously learn multiple tasks from what they perceive and experience by interacting with the real world. For his research, he received the IEEE RAS Early Career Award in Robotics and Automation, the IROS Toshio Fukuda Young Professional Award, the NVIDIA Research Award, and the AutoSens Most Novel Research Award, among others. Many aspects of his research have been prominently featured in wider media such as the Discovery Channel, NBC News, Business Times, and The Economic Times.

Speaker 5

Lerrel Pinto (tentative)

Assistant Professor, New York University

I am an Assistant Professor of Computer Science at NYU Courant and part of the CILVR group. Before that, I was a postdoc at UC Berkeley, a PhD student at the CMU Robotics Institute, and an undergraduate at IIT Guwahati. I run the General-purpose Robotics and AI Lab (GRAIL) with the goal of getting robots to generalize and adapt in the messy world we live in. Our research focuses broadly on robot learning and decision making, with an emphasis on large-scale learning (both data and models), representation learning for sensory data, developing algorithms to model actions and behavior, reinforcement learning for adapting to new scenarios, and building open-sourced affordable robots.

Speaker 3

Niko Suenderhauf

Professor, Queensland University of Technology

I am a Professor at Queensland University of Technology (QUT) in Brisbane and Deputy Director of the QUT Centre for Robotics, where I lead the Visual Understanding and Learning Program. I am also Deputy Director (Research) for the ARC Research Hub in Intelligent Robotic Systems for Real-Time Asset Management (2022-2027) and was Chief Investigator of the Australian Centre for Robotic Vision 2017-2020. I conduct research in robotic vision and robot learning, at the intersection of robotics, computer vision, and machine learning. My research interests focus on robotic learning for manipulation, interaction and navigation, scene understanding, and the reliability of deep learning for open-set and open-world conditions.

Speaker 4

Renaud Detry

Associate Professor, KU Leuven

I am an Associate Professor of robot learning at KU Leuven in Belgium, where I hold a dual appointment in electrical and mechanical engineering (groups PSI and RAM). I sit on the steering committee of Leuven.AI, I am a member of the ELLIS Society, and I am a technical advisor for OpalAI. Formerly I was group lead for the Perception Systems group at NASA JPL, Pasadena, CA, and an Assistant Professor at UCLouvain, Belgium. I am a visiting researcher at ULiege and KTH. My research interests include robot learning and computer vision, and their application to manufacturing, agriculture, healthcare, on-orbit operations and planetary exploration. At JPL, I was machine-vision lead for the surface mission of the NASA/ESA Mars Sample Return campaign.

Speaker 2

Stefan Leutenegger

Associate Professor, ETH Zürich

Prof. Leutenegger's research is in mobile robotics, with a focus on robot navigation through potentially unknown environments. He develops algorithms and software that allow a robot (e.g. a drone) to use its sensors (e.g. video) to reconstruct 3D structure and to categorise it with the help of modern machine learning, including deep learning. This understanding enables safe navigation through challenging environments, as well as interaction with them, including with humans.

Accepted Papers

The following papers have been accepted for poster presentation and a spotlight talk at the workshop.

Authors should print and bring their own posters, and at least one author must be present during the Poster Session. Poster discussions can continue during the coffee break and lunch. Posters should adhere to the IROS poster guidelines (no larger than 36 inches wide and 48 inches tall). Authors of accepted papers can promote their work in a 1-minute spotlight talk; they are requested to submit a 1-slide presentation (PDF or PPT) before October 16 (23:59) by sending an email to irosworkshop.pm2ce@gmail.com. The organizers will display the slide during the spotlight talk.

Accepted papers will be published on the workshop website a few days before IROS 2025, unless otherwise specified by the authors.

Perception and Motion Planning for Mobile Manipulation with Synthetic Generated Data in Dynamic Environments

Artur J. Cordeiro, Pedro A. Dias, Luís F. Rocha, Frederico M. Borges, Manuel F. Silva, José Boaventura and João P.C. de Souza

A Benchmark for Multi-Modal Multi-Robot Multi-Goal Path Planning

Valentin N. Hartmann, Tirza Heinle, Yijiang Huang and Stelian Coros

Accelerated Multi-Modal Motion Planning Using Context-Conditioned Diffusion Models

Edward Sandra, Lander Vanroye, Dries Dirckx, Ruben Cartuyvels, Jan Swevers and Wilm Decré

Humanoid Occupancy: Enabling A Generalized Multimodal Occupancy Perception System on Humanoid Robots

Wei Cui, Haoyu Wang, Jiaru Zhong, Wenkang Qin, Yijie Guo, Gang Han, Wen Zhao, Jiahang Cao, Zhang Zhang, Jingkai SUN, Pihai Sun, Shuai Shi, Botuo Jiang, Jiahao Ma, Jiaxu Wang, Hao Cheng, Zhichao Liu, Yang Wang, Zheng Zhu, Guan Huang, Lingfeng Zhang, Jun Ma, Junwei Liang, Renjing Xu, Jian Tang and Qiang Zhang

Autonomous Robotic Manipulation with a Clutter-Aware Pushing Policy

Sanraj Lachhiramka, Pradeep J, Archanaa A. Chandaragi, Arjun Achar and Shikha Tripathi

Jacobian-Guided Active Learning for Gaussian Process-Based Inverse Kinematics

Shibao Yang, Pengcheng Liu and Nick Pears

Dynamic Objects Relocalization in Changing Environments with Flow Matching

Francesco Argenziano, Miguel Saavedra-Ruiz, Sacha Morin, Daniele Nardi and Liam Paull

Open-Vocabulary and Semantic-Aware Reasoning for Search and Retrieval of Objects in Dynamic and Concealed Spaces

Rohit Menon, Yasmin Schmiede, Maren Bennewitz and Hermann Blum

HANDO: Hierarchical Autonomous Navigation and Dexterous Omni-loco-manipulation

Jingyuan Sun, Chaoran Wang, Mingyu Zhang, Cui Miao, Hongyu Ji, Zihan Qu, Sun Han, Bing Wang and Qingyi Si

T-FunS3D: Task-Driven Hierarchical Open-Vocabulary 3D Functionality Segmentation via Vision-Language Models

Jingkun Feng and Reza Sabzevari

Air-ground Robotic Collaborative Framework for Autonomous Cramped-space Post-earthquake Search

Ruiyang Yang, Ming Xue, Yue Zeng, Zihao Yang, Yudong Fang, Jixing Yang, Yongqiang Liu, Xiaohui Huang and Jingya Liu

Perception-Driven Adaptive Mobile Manipulation for Autonomous Container Unloading

Maria S. Lopes, João Pedro C. de Souza, Pedro Costa, José A. Beça and Manuel F. Silva

U-LAG: Uncertainty-Aware, Lag-Adaptive Goal Retargeting for Robotic Manipulation

Anamika J. H. and Anujith Muraleedharan

Organizers

Organizer 1

Gang Chen (Clarence)

Postdoc, TU Delft

Organizer 2

Saray Bakker

PhD Candidate, TU Delft

Organizer 3

Liam Paull

Associate Professor, Université de Montréal

Organizer 4

Jia Zeng

Researcher, Shanghai AI Lab

Organizer 5

Javier Alonso-Mora

Associate Professor, TU Delft

Sponsor
