Perception and Planning for Mobile Manipulation in Changing Environments
IROS 2025 Workshop
Date: TBD. Place: Hangzhou, China
Human workers in factories, warehouses, and hospitality settings effortlessly perceive their surroundings, adapt to continuous changes, and adjust their actions based on new assignments or environmental modifications. In contrast, mobile robots often struggle with real-time perception and task and motion planning in these changing environments, limiting their ability to function effectively in real-world scenarios.
To advance mobile manipulation, robots must continuously perceive environmental changes (such as shifted objects, the activities of humans and other robots, and unforeseen obstacles) and update their task and motion plans accordingly. However, most existing research assumes relatively static environments, limiting adaptability in practical applications.
Key challenges remain in:
Efficient real-time perception and modeling of dynamic environments.
Task and motion planning strategies that can quickly adapt to new or changing environments while ensuring smooth and effective execution.
This workshop will explore techniques for efficient environment modeling and real-time task and motion planning in dynamic environments. We will also discuss how perception and planning can be tightly integrated to improve the adaptability and robustness of mobile manipulation systems in real-world settings.
Time | Event | Comments |
---|---|---|
8:30 - 8:40 | Opening | |
8:40 - 9:05 | Speaker 1 | |
9:05 - 9:30 | Speaker 2 | |
9:30 - 9:55 | Speaker 3 | |
9:55 - 10:10 | Spotlight Talk Videos | (Quick video rounds) |
10:10 - 10:30 | Poster Session | |
10:30 - 10:45 | Coffee Break | |
10:45 - 11:10 | Speaker 4 | |
11:10 - 11:35 | Speaker 5 | |
11:35 - 12:00 | Speaker 6 | |
12:00 - 12:30 | Discussion with Audience | |
* Speakers are listed alphabetically by first name.
Professor, University of Freiburg
Abhinav Valada is a Full Professor (W3) at the University of Freiburg, where he directs the Robot Learning Lab. He is a member of the Department of Computer Science, the BrainLinks-BrainTools center, and a founding faculty member of the ELLIS unit Freiburg. Abhinav is a DFG Emmy Noether AI Fellow, Scholar of the ELLIS Society, IEEE Senior Member, and Chair of the IEEE Robotics and Automation Society Technical Committee on Robot Learning. He received his PhD (summa cum laude) working with Prof. Wolfram Burgard at the University of Freiburg in 2019, his MS in Robotics from Carnegie Mellon University in 2013, and his B.Tech. in Electronics and Instrumentation Engineering from VIT University in 2010. After his PhD, he worked as a postdoctoral researcher and subsequently as an Assistant Professor (W1) from 2020 to 2023. He co-founded Platypus LLC, a company developing autonomous robotic boats in Pittsburgh, and served as its Director of Operations from 2013 to 2015; he previously worked at the National Robotics Engineering Center and the Field Robotics Center of Carnegie Mellon University from 2011 to 2014. Abhinav’s research lies at the intersection of robotics, machine learning, and computer vision, with a focus on tackling fundamental robot perception, state estimation, and planning problems to enable robots to operate reliably in complex and diverse domains. The overall goal of his research is to develop scalable lifelong robot learning systems that continuously learn multiple tasks from what they perceive and experience by interacting with the real world. For his research, he received the IEEE RAS Early Career Award in Robotics and Automation, the IROS Toshio Fukuda Young Professional Award, the NVIDIA Research Award, and the AutoSens Most Novel Research Award, among others. Many aspects of his research have been prominently featured in wider media such as the Discovery Channel, NBC News, Business Times, and The Economic Times.
Assistant Professor, University of Michigan
I am an Assistant Professor in the Robotics Department (primary) and the Computer Science and Engineering Department at the University of Michigan. My research interests lie at the intersection of robotics, computer vision, and machine learning. My research is on learning interpretable visual representations and estimating their uncertainty for use in downstream science and robotics tasks, particularly autonomous mobile manipulation. Previously, I have worked at the Boston Dynamics AI Institute, NVIDIA Research, and Lockheed Martin Corporation. I received my PhD in computer science in the GRASP Lab at the University of Pennsylvania, co-advised by Dr. Kostas Daniilidis and Dr. Nikolai Matni. I received an M.A. in Mathematics, an M.A. in Economics, and a B.S. in Mathematics and Economics from The University of Alabama in 2014.
Assistant Professor, Peking University
I am a tenure-track assistant professor in the Center on Frontiers of Computing Studies (CFCS) at Peking University. I founded and lead the Embodied Perception and InteraCtion (EPIC) Lab with the mission of developing generalizable skills and embodied multimodal large models for robots to facilitate embodied AGI. I am also the director of the PKU-Galbot joint lab of Embodied AI and the BAAI center of Embodied AI. I have published more than 50 papers in top conferences and journals on computer vision, robotics, and learning, including CVPR/ICCV/ECCV/TRO/ICRA/IROS/NeurIPS/ICLR/AAAI. My pioneering work on category-level 6D pose estimation, NOCS, received the 2022 World Artificial Intelligence Conference Youth Outstanding Paper (WAICYOP) Award, and my work has also been recognized as an ICCV 2023 best paper finalist, an ICRA 2023 outstanding manipulation paper award finalist, and an Eurographics 2019 best paper honorable mention. I serve as an associate editor of Image and Vision Computing and served as an area chair for CVPR 2022 and WACV 2022. Prior to joining Peking University, I received my Ph.D. from Stanford University in 2021 under the supervision of Prof. Leonidas J. Guibas and my Bachelor's degree from Tsinghua University in 2014.
Assistant Professor, New York University
I am an Assistant Professor of Computer Science at NYU Courant and part of the CILVR group. Before that, I was a postdoc at UC Berkeley, a PhD student at the CMU Robotics Institute, and an undergraduate at IIT Guwahati. I run the General-purpose Robotics and AI Lab (GRAIL) with the goal of getting robots to generalize and adapt in the messy world we live in. Our research focuses broadly on robot learning and decision making, with an emphasis on large-scale learning (both data and models), representation learning for sensory data, developing algorithms to model actions and behavior, reinforcement learning for adapting to new scenarios, and building open-source, affordable robots.
Professor, Queensland University of Technology
I am a Professor at the Queensland University of Technology (QUT) in Brisbane and Deputy Director of the QUT Centre for Robotics, where I lead the Visual Understanding and Learning Program. I am also Deputy Director (Research) of the ARC Research Hub in Intelligent Robotic Systems for Real-Time Asset Management (2022-2027) and was a Chief Investigator of the Australian Centre for Robotic Vision from 2017 to 2020. I conduct research in robotic vision and robot learning at the intersection of robotics, computer vision, and machine learning. My research interests focus on robot learning for manipulation, interaction, and navigation; scene understanding; and the reliability of deep learning in open-set and open-world conditions.
Assistant Professor, Columbia University
I am an Assistant Professor of Computer Science at Columbia University. Before joining Columbia, I was an Assistant Professor at UIUC CS. I also spent time as a postdoc at the Stanford Vision and Learning Lab (SVL), working with Fei-Fei Li and Jiajun Wu. I received my PhD from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, where I was advised by Antonio Torralba and Russ Tedrake, and I obtained my bachelor's degree from Peking University.
Details coming soon...