This project was funded by Disney Research Zurich.
In today’s digital media landscape we are constantly surrounded by displays, from the LCDs on the phones in our pockets to the ubiquitous screens that greet us in stores, airports, and educational institutions. This project aims to bring human gestures, movements, and abilities to these displays, giving them a kinetic form and breaking the stereotypical rectangular or oval frame of the digital display. Pixels become mobile entities whose positioning and motion produce a novel user experience.
This is achieved with a robotic display: a robot swarm in which each robot can be remotely controlled and acts as one mobile pixel, with an RGB LED providing controllable color. The display reproduces images or animations through a goal-generation algorithm that computes a goal position for each mobile pixel. A control loop then allocates a goal to each robot, computes a velocity, and performs collision avoidance so that the mobile pixels quickly reach their desired locations.
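The loop below is a minimal sketch of this pipeline, assuming 2D positions and equal numbers of robots and goals. Goal allocation uses the Hungarian method via SciPy, a proportional controller stands in for the velocity computation, and a simple pairwise repulsion stands in for the system's actual collision-avoidance method; the function names and gains are illustrative, not taken from the papers below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_goals(positions, goals):
    """Assign each robot one goal, minimizing the total squared travel distance."""
    cost = np.linalg.norm(positions[:, None, :] - goals[None, :, :], axis=2) ** 2
    robot_idx, goal_idx = linear_sum_assignment(cost)
    return goals[goal_idx[np.argsort(robot_idx)]]

def control_step(positions, goals, dt=0.05, k=1.0, v_max=0.3, r_safe=0.12):
    """One control-loop iteration: attract robots to goals, repel close neighbors."""
    targets = assign_goals(positions, goals)
    vel = k * (targets - positions)                      # proportional attraction
    for i in range(len(positions)):                      # naive O(n^2) repulsion
        for j in range(len(positions)):
            if i == j:
                continue
            offset = positions[i] - positions[j]
            dist = np.linalg.norm(offset)
            if 1e-9 < dist < r_safe:                     # too close: push apart
                vel[i] += 5.0 * (r_safe - dist) / dist * offset
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel *= np.minimum(1.0, v_max / np.maximum(speed, 1e-9))  # clamp speed to v_max
    return positions + vel * dt
```

Iterating control_step drives the swarm toward the target image; in a real system the assignment would be recomputed only occasionally, and a reciprocal collision-avoidance method would likely replace the repulsion heuristic.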
The next step in the project is to enable gesture-based human control over this robot swarm. Two modes of interaction are provided: free-form and shape-constrained. In free-form interaction, users select individual robots and reposition them or move them along a trajectory. In shape-constrained interaction, users control only a subset of the degrees of freedom, so that the formation's configuration is maintained. Beyond entertainment, this work can be extended to essential applications such as surveillance or search-and-rescue operations.
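As a sketch of the shape-constrained mode, the snippet below assumes the user's gesture has already been reduced to a similarity transform (translation, rotation angle, uniform scale), i.e., a small subset of the swarm's degrees of freedom. Applying the same transform to every goal position preserves the formation's internal configuration by construction; the function name and example formation are illustrative, not from the papers below.

```python
import numpy as np

def shape_constrained_goals(formation, translation=(0.0, 0.0), angle=0.0, scale=1.0):
    """Map a stored formation to new goal positions under a similarity transform."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    centroid = formation.mean(axis=0)
    # Rotate and scale about the formation centroid, then translate the whole shape.
    return (formation - centroid) @ (scale * R.T) + centroid + np.asarray(translation)

# Example: drag, twist, and enlarge the whole formation without distorting it.
formation = np.array([[0.0, 0.0], [0.0, 1.0], [0.5, 1.0], [0.7, 0.5], [0.5, 0.0]])
goals = shape_constrained_goals(formation, translation=(1.0, 0.2),
                                angle=np.pi / 6, scale=1.2)
```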
This project has received funding from the Netherlands Organisation for Scientific Research (NWO) Applied Sciences under project Veni 15916. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the NWO; neither the NWO nor the granting authority can be held responsible for them.
Pixelbots 2014
In Proc. ACM SIGGRAPH 2016 Art Gallery (SIGGRAPH '16), ACM, New York, NY, USA, pp. 366-367,
2016.
Gesture-Based Human-Robot Swarm Interaction Applied to an Interactive Display
In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA),
2015.
Customized Sensing for Robot Swarms
In Proc. of the Int. Symposium on Experimental Robotics,
2014.
Viewpoint and Trajectory Optimization for Animation Display with a Large Group of Aerial Vehicles
In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA),
2014.
Human-Robot Swarm Interaction for Entertainment: From Animation Display to Gesture-Based Control
In ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI),
2014.
Best Video Award, 2nd Prize.
Multi-robot Control and Interaction with a Hand-held Tablet
In Workshop Crossing the Reality Gap: Control, Human Interaction and Cloud Technology for Multi- and Many-Robot Systems at the IEEE Int. Conf. on Robotics and Automation (ICRA),
2014.
Design and Control of a Spherical Omnidirectional Blimp
In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS),
2013.
Image and Animation Display with Multiple Robots
In The International Journal of Robotics Research, vol. 31, no. 6, pp. 753-773,
2012.
Object and Animation Display with Multiple Aerial Vehicles
In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS),
2012.
Multi-Robot Formation Control via a Real-Time Drawing Interface
In Proc. of the 8th Int. Conf. on Field and Service Robots (FSR),
2012.
Human-Robot Shared Control in a Large Robot Swarm
In Workshop Many-Robot Systems: Crossing the Reality Gap at the IEEE Int. Conf. on Robotics and Automation (ICRA),
2012.
Multi-Robot System for Artistic Pattern Formation
In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA),
2011.
DisplaySwarm: A robot swarm displaying images
In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Symposium: Robot Demonstrations,
2011.