Pixelbots: Pixels with personality

People

Roland Siegwart - Autonomous Systems Lab, ETH Zurich
Paul Beardsley - Disney Research Zurich
Javier Alonso-Mora - Autonomous Systems Lab, ETH Zurich

Funding

This project was funded by Disney Research Zurich.

About the Project

In today's digital media landscape, we are constantly surrounded by displays, from the LCDs on the phones in our pockets to the ubiquitous screens that greet us whenever we enter a store, airport, or educational institution. This project aims to make such displays responsive to human gestures, movements, and abilities, bringing the display into a kinetic form and breaking out of the stereotypical rectangular or oval frame of digital displays. Pixels become mobile entities, and their positioning and motion are used to produce a novel user experience.

This is achieved with a robotic display: a swarm of remotely controlled robots in which each robot is one mobile pixel, equipped with an RGB LED for controllable color. The display reproduces images and animations through a goal-generation algorithm that computes a goal position for each mobile pixel. A control loop then assigns a goal to each robot, computes its velocity, and performs collision avoidance so that the mobile pixels reach their desired locations quickly.
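
The loop below is a minimal Python sketch of that pipeline: Hungarian goal assignment, a capped preferred velocity toward each assigned goal, and a crude pairwise repulsion standing in for the reciprocal collision avoidance used in the actual system. All function names, gains, and radii here are illustrative assumptions, not values from the project.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def control_step(positions, goals, v_max=0.3, r_robot=0.05, dt=0.1):
    """One iteration of the display control loop (illustrative sketch only).

    positions, goals: (N, 2) arrays of robot positions and pixel goals.
    """
    # 1. Goal assignment: minimize total squared travel (Hungarian method).
    cost = np.linalg.norm(positions[:, None, :] - goals[None, :, :], axis=2) ** 2
    _, cols = linear_sum_assignment(cost)
    assigned = goals[cols]

    # 2. Preferred velocity: head for the assigned goal, capped at v_max.
    to_goal = assigned - positions
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    v_pref = to_goal / np.maximum(dist, 1e-9) * np.minimum(dist / dt, v_max)

    # 3. Collision avoidance: crude pairwise repulsion; the actual system
    #    uses reciprocal velocity obstacles instead.
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            gap = positions[i] - positions[j]
            d = np.linalg.norm(gap)
            if d < 4 * r_robot:
                push = gap / max(d, 1e-9) * v_max * (1 - d / (4 * r_robot))
                v_pref[i] += push
                v_pref[j] -= push
    return v_pref
```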

The next step in the project is to enable gesture-based human control over the robot swarm. Two modes of interaction are provided: free-form and shape-constrained. In free-form interaction, users can select individual robots and change their positions or move them along a trajectory. In shape-constrained interaction, users control only a subset of the degrees of freedom, so that the overall configuration is maintained. Beyond entertainment, this work can be extended to applications such as surveillance or search-and-rescue operations.
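
The two modes can be summarized with a short sketch (hypothetical helper functions; the real interface couples these to gesture recognition):

```python
import numpy as np

def free_form_goal(goals, selected, target):
    """Free-form mode: move only the user-selected robots toward a pointed target."""
    goals = goals.copy()
    goals[selected] = target  # deictic selection overrides individual goals
    return goals

def shape_constrained_goal(goals, translation=(0.0, 0.0), rotation=0.0, scale=1.0):
    """Shape-constrained mode: the user drives a few global degrees of freedom
    (translation, rotation, scale) while the formation shape is preserved."""
    centroid = goals.mean(axis=0)
    c, s = np.cos(rotation), np.sin(rotation)
    R = np.array([[c, -s], [s, c]])
    return (goals - centroid) @ R.T * scale + centroid + np.asarray(translation)
```

The design point is that shape-constrained mode exposes only a handful of global parameters, so a user can steer the whole formation without breaking it.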

Funding & Partners

This project has received funding from the Netherlands Organisation for Scientific Research (NWO) Applied Sciences under project Veni 15916. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the NWO. Neither the NWO nor the granting authority can be held responsible for them.


Data Materialities Art Gallery: Introduction and Gallery
J. Brucker-Cohen, T. Bech, A. Rowe, G. Bushell, L. Birtles, C. Bennewith, O. Bown, D. Sun, P. Su, N. Roy, V. Jan, D. Morozov, T. Digumarti, J. Alonso-Mora, R. Siegwart, P. Beardsley, M. Jacobsen, D.A. Chanel, R. Constant, B. Grosser. In Leonardo, vol. 49, no. 4, pp. 352-374, MIT Press Journals, 2016.

Pixelbots 2014
T. Digumarti, J. Alonso-Mora, R. Siegwart, P. Beardsley. In Proc. ACM SIGGRAPH 2016 Art Gallery (SIGGRAPH '16), ACM, New York, NY, USA, 366-367, 2016.

In today's digital media landscape, we are constantly surrounded by displays, from the LCDs found on the phones in our pockets to the ubiquitous screens that greet us whenever we enter a store, airport, taxicab, doctor's office, or educational institution. This plethora of displays both allures us and contributes to the media's saturation of our lives. The truth remains that we are never far from the next form of information display. Disney Research's Pixelbots takes this truth as an inevitability and brings the display into a kinetic form, breaking the screen out of the confines of a rectangular or oval experience. Billed as "Pixels with Personality," the Pixelbots' enticing presence rests on our ability to project human qualities onto objects that move as we do in physical space.

Gesture based human - robot swarm interaction applied to an interactive display
J. Alonso-Mora, S. Haegeli Lohaus, P. Leemann, R. Siegwart, P. Beardsley. In Proc. of the IEEE Int. Conf. Robotics and Automation (ICRA), 2015.

A taxonomy for gesture-based interaction between a human and a group (swarm) of robots is described. Methods are classified into two categories. First, free-form interaction, where the robots are unconstrained in position and motion and the user can use deictic gestures to select subsets of robots and assign target goals and trajectories. Second, shape-constrained interaction, where the robots are in a configuration shape that can be modified by the user. In the latter, the user controls a subset of meaningful degrees of freedom defining the overall shape instead of each robot directly. A multi-robot interactive display is described where a depth sensor is used to recognize human gestures, determining the commands sent to a group comprising tens of robots. Experimental results and a preliminary user study show the usability of the system.
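
As a rough illustration of the taxonomy, a command dispatcher might route recognized gestures as follows; the gesture names and swarm methods are invented placeholders, not the paper's interface:

```python
from enum import Enum, auto

class Mode(Enum):
    FREE_FORM = auto()
    SHAPE_CONSTRAINED = auto()

def dispatch(mode, gesture, swarm):
    """Route a gesture recognized by the depth sensor to a swarm command."""
    if mode is Mode.FREE_FORM:
        if gesture.kind == "point":           # deictic selection of a subset
            swarm.select(gesture.ray)
        elif gesture.kind == "drag":          # assign a target goal or trajectory
            swarm.set_goal(gesture.path)
    else:  # Mode.SHAPE_CONSTRAINED: only global shape parameters are exposed
        if gesture.kind == "two_hand_spread":
            swarm.scale_formation(gesture.spread_ratio)
        elif gesture.kind == "sweep":
            swarm.translate_formation(gesture.displacement)
```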

Customized Sensing for Robot Swarms
D. Jud, J. Alonso-Mora, J. Rehder, R. Siegwart, P. Beardsley. In Proc. of the Int. Symposium on Experimental Robotics, 2014.

This paper describes a novel and compact design for an omni-directional stereo camera. A key goal of the work is to investigate the use of rapid prototyping to make the mirrors for the device, by 3D printing the mirror shape and chroming the surface. The target application is in robot swarms, and we discuss how the ability to create a customized omni-camera enables sensing to become an integrated part of system design, avoiding the constraints that arise when using commercial sensors.
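
For context, back-projection for one common catadioptric profile, an equiangular mirror in which the viewing elevation grows linearly with image radius, could look as follows. The parameters are generic calibration values and the profile is an assumption for illustration, not the paper's design:

```python
import numpy as np

def pixel_to_ray(u, v, cx, cy, gamma, pix_per_rad):
    """Back-project an image pixel to a unit viewing direction, assuming an
    equiangular omni-mirror: image radius is proportional to elevation angle.
    (cx, cy) is the image center; gamma and pix_per_rad come from calibration."""
    du, dv = u - cx, v - cy
    azimuth = np.arctan2(dv, du)
    elevation = gamma * np.hypot(du, dv) / pix_per_rad
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     -np.sin(elevation)])
```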

Viewpoint and Trajectory Optimization for Animation Display with a Large Group of Aerial Vehicles
M. Schoch, J. Alonso-Mora, R. Siegwart, P. Beardsley. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2014.

This paper presents a method to optimize the position and trajectory of each aerial vehicle within a large group that displays objects and animations in 3D space. The input is a single object or an animation created by an artist. In a first step, goal positions for the given number of vehicles and representing the object are optimized with respect to a known viewpoint. For displaying an animation, an optimal trajectory satisfying the dynamic constraints of each vehicle is computed using B-splines. Finally, a trajectory following controller is described, which provides the preferred velocity, later optimized to be collision-free with respect to all neighboring vehicles.
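
A minimal example of the B-spline step, using SciPy and made-up waypoints: fit a cubic spline through one vehicle's goal positions over time, then differentiate it to obtain the feed-forward velocity, capped at an assumed dynamic limit. The waypoints and the limit are illustrative, not values from the paper.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# One vehicle's animation goals over time (hypothetical values).
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
waypoints = np.array([[0, 0, 1], [1, 0, 2], [1, 1, 2], [0, 1, 1], [0, 0, 1]])

# Cubic B-spline through the waypoints; its derivative gives the preferred
# velocity that is later adjusted for collision avoidance.
spline = make_interp_spline(t, waypoints, k=3)
velocity = spline.derivative()

pos_now, vel_now = spline(3.0), velocity(3.0)
v_max = 2.0                        # assumed per-vehicle velocity limit
speed = np.linalg.norm(vel_now)
if speed > v_max:                  # respect the dynamic constraint
    vel_now *= v_max / speed
```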

Human - Robot Swarm Interaction for entertainment: From animation display to gesture based control
J. Alonso-Mora, R. Siegwart, P. Beardsley. In ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI), 2014. Best Video Award, 2nd Prize.

This work shows experimental results with three systems that take real-time user input to direct a robot swarm formed by tens of small robots: real-time drawing, gesture-based interaction with an RGB-D sensor, and control via a hand-held tablet computer.

Multi-robot Control and Interaction with a Hand-held Tablet
R. Grieder, J. Alonso-Mora, C. Bloeglinger, R. Siegwart, P. Beardsley. In Workshop Crossing the Reality Gap: Control, Human Interaction and Cloud Technology for Multi- and Many- Robot Systems at the IEEE Int. Conf. on Robotics and Automation (ICRA), 2014.

Real-time interaction with a group of robots is shown with a hand-held tablet, which tracks the robots and computes collision-free trajectories. Efficient algorithms are described and experiments are performed in scenarios with changing illumination. Augmented reality and a multi-player setup are also described.

Design and Control of a Spherical Omnidirectional Blimp
M. Burri, L. Gasser, M. Käch, M. Krebs, S. Laube, A. Ledergerber, D. Meier, R. Michaud, L. Mosimann, L. Muri, C. Ruch, A. Schaffner, N. Vuilliomenet, J. Weichart, K. Rudin, S. Leutenegger, J. Alonso-Mora, R. Siegwart, P. Beardsley. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2013.

This paper presents Skye, a novel blimp design. Skye is a helium-filled sphere of diameter 2.7m with a strong inelastic outer hull and an impermeable elastic inner hull. Four tetrahedrally-arranged actuation units (AU) are mounted on the hull for locomotion, with each AU having a thruster which can be rotated around a radial axis through the sphere center. This design provides redundant control in the six degrees of freedom of motion, and Skye is able to move omnidirectionally and to rotate around any axis. A multi-camera module is also mounted on the hull for capture of aerial imagery or live video stream according to an ‘eyeball’ concept — the camera module is not itself actuated, but the whole blimp is rotated in order to obtain a desired camera view. Skye is safe for use near people — the double hull minimizes the likelihood of rupture on an unwanted collision; the propellers are covered by grills to prevent accidental contact; and the blimp is near neutral buoyancy so that it makes only a light impact on contact and can be readily nudged away. The system is portable and deployable by a single operator — the electronics, AUs, and camera unit are mounted externally and are detachable from the hull during transport; operator control is via an intuitive touchpad interface. The motivating application is in entertainment robotics. Skye has a varied motion vocabulary such as swooping and bobbing, plus internal LEDs for visual effect. Computer vision enables interaction with an audience. Experimental results show dexterous maneuvers in indoor and outdoor environments, and non-dangerous impacts between the blimp and humans.
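
The redundant actuation can be appreciated with a standard least-squares thrust-allocation sketch. Because each thruster rotates about a radial axis, its force lies in a plane, so four units contribute eight scalar inputs mapped to a 6-D wrench. The geometry below is illustrative; the paper's actual controller is not reproduced here.

```python
import numpy as np

def allocation_matrix(positions, planes):
    """Build the 6x8 map from eight thrust components (two per actuation unit)
    to the body wrench [force; torque].
    positions: (4, 3) AU mount points relative to the sphere center.
    planes:    (4, 2, 3) two unit vectors spanning each AU's thrust plane."""
    cols = []
    for r, basis in zip(positions, planes):
        for d in basis:
            cols.append(np.concatenate([d, np.cross(r, d)]))
    return np.array(cols).T  # shape (6, 8)

def allocate(wrench, A):
    """Minimum-norm allocation: the redundant inputs realize any 6-D wrench."""
    return np.linalg.pinv(A) @ wrench
```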

Image and Animation Display with Multiple Robots
J. Alonso-Mora, A. Breitenmoser, M. Rufli, R. Siegwart, P. Beardsley. In International Journal of Robotics Research, vol. 31, no. 6, pp. 753-773, 2012.

In this article we present a novel display that is created using a group of mobile robots. In contrast to traditional displays that are based on a fixed grid of pixels, such as a screen or a projection, this work describes a display in which each pixel is a mobile robot of controllable color. Pixels become mobile entities, and their positioning and motion are used to produce a novel experience. The system input is a single image or an animation created by an artist. The first stage is to generate physical goal configurations and robot colors to optimally represent the input imagery with the available number of robots. The run-time system includes goal assignment, path planning and local reciprocal collision avoidance, to guarantee smooth, fast and oscillation-free motion between images. The algorithms scale to very large robot swarms and extend to a wide range of robot kinematics. Experimental evaluation is done for two different physical swarms of size 14 and 50 differentially driven robots, and for simulations with 1,000 robot pixels.
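
One simple stand-in for the goal-generation stage is weighted k-means over bright pixels, which concentrates the available robots where the image has content. This is an illustrative approximation of that stage, not the paper's optimization:

```python
import numpy as np

def goals_from_image(gray, n_robots, iters=20, rng=np.random.default_rng(0)):
    """Pick n_robots goal positions concentrated where the image is bright.
    gray: 2-D intensity array in [0, 1]."""
    ys, xs = np.nonzero(gray > 0.05)
    pts = np.column_stack([xs, ys]).astype(float)
    w = gray[ys, xs]
    centers = pts[rng.choice(len(pts), n_robots, replace=False, p=w / w.sum())]
    for _ in range(iters):
        # Assign every bright pixel to its nearest center ...
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        label = d.argmin(axis=1)
        # ... then move each center to the weighted mean of its pixels.
        for k in range(n_robots):
            m = label == k
            if m.any():
                centers[k] = np.average(pts[m], axis=0, weights=w[m])
    return centers  # robot colors could then be sampled at each goal
```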

Object and Animation Display with Multiple Aerial Vehicles
J. Alonso-Mora, M. Schoch, A. Breitenmoser, R. Siegwart, P. Beardsley. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2012.

This paper presents a fully automated method to display objects and animations in 3D with a group of aerial vehicles. The system input is a single object or an animation (sequence of objects) created by an artist. The first stage is to generate physical goal configurations and robot colors to represent the objects with the available number of robots. The run-time system includes algorithms for goal assignment, path planning and local reciprocal collision avoidance that guarantee smooth, fast and oscillation-free motion. The presented algorithms are tested in simulations and verified with real quadrotor helicopters and scale to large robot swarms.

Multi-Robot Formation Control via a Real-Time Drawing Interface
S. Hauri, J. Alonso-Mora, A. Breitenmoser, R. Siegwart, P. Beardsley. In Proc. of the 8th Int. Conf. on Field and Service Robots (FSR), 2012.

This paper describes a system that takes real-time user input to direct a robot swarm. The user interface is via drawing, and the user can create a single drawing or an animation to be represented by robots. For example, the drawn input could be a stick figure, with the robots automatically adopting a physical configuration to represent the figure. Or the input could be an animation of a walking stick figure, with the robots moving to represent the dynamic deforming figure. Each robot has a controllable RGB LED so that the swarm can represent color drawings. The computation of robot position, robot motion, and robot color is automatic, including scaling to the available number of robots. The work is in the field of entertainment robotics for play and making robot art, motivated by the fact that a swarm of mobile robots is now affordable as a consumer product. The technical contribution of the paper is three-fold. Firstly the paper presents shaped flocking, a novel algorithm to control multiple robots—this extends existing flocking methods so that robot behavior is driven by both flocking forces and forces arising from a target shape. Secondly the new work is compared with an alternative approach from the existing literature, and the experimental results include a comparative analysis of both algorithms with metrics to compare performance. Thirdly, the paper describes a working real-time system with results for a physical swarm of 60 differential-drive robots.
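
The shaped-flocking idea, flocking forces combined with a pull toward a target shape, can be sketched as a per-robot velocity update. The weights, neighbor radius, and per-robot shape goals below are invented, not the paper's tuned controller:

```python
import numpy as np

def shaped_flocking_step(pos, shape_goals, r_nbr=0.5,
                         w_sep=1.0, w_coh=0.3, w_shape=1.2):
    """Velocity update blending separation and cohesion with shape attraction.
    pos: (N, 2) robot positions; shape_goals: (N, 2) assigned shape points."""
    new_vel = np.zeros_like(pos)
    for i, p in enumerate(pos):
        gap = p - pos                          # vectors from all robots to robot i
        d = np.linalg.norm(gap, axis=1)
        nbr = (d > 0) & (d < r_nbr)            # neighbors, excluding self
        sep = (gap[nbr] / d[nbr, None] ** 2).sum(axis=0) if nbr.any() else 0.0
        coh = pos[nbr].mean(axis=0) - p if nbr.any() else 0.0
        shape = shape_goals[i] - p             # pull toward the assigned shape point
        new_vel[i] = w_sep * sep + w_coh * coh + w_shape * shape
    return new_vel
```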

Vision-Based Localization for Multiple Robots with Absolute and Relative Measurements
P. Gohl, J. Alonso-Mora, R. Siegwart, P. Beardsley. Tech report, 2012.

Human-Robot Shared Control in a Large Robot Swarm
J. Alonso-Mora, A. Breitenmoser, S. Wismer, R. Siegwart, P. Beardsley. In Workshop Many-Robot Systems: Crossing the Reality Gap at the IEEE Int. Conf. on Robotics and Automation (ICRA), 2012.

Multi-Robot System for Artistic Pattern Formation
J. Alonso-Mora, A. Breitenmoser, M. Rufli, R. Siegwart, P. Beardsley. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2011.

This paper describes work on multi-robot pattern formation. Arbitrary target patterns are represented with an optimal robot deployment, using a method that is independent of the number of robots. Furthermore, the trajectories are visually appealing in the sense of being smooth, oscillation free, and showing fast convergence. A distributed controller guarantees collision free trajectories while taking into account the kinematics of differentially driven robots. Experimental results are provided for a representative set of patterns, for a swarm of up to ten physical robots, and for fifty virtual robots in simulation.
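
The differential-drive kinematics are commonly handled by controlling a point offset slightly ahead of the wheel axle, which makes the velocity map invertible. A standard sketch of that feedback linearization follows; the offset d is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def diff_drive_cmd(pose, v_des, d=0.05):
    """Convert a desired planar velocity of a point a distance d ahead of the
    axle into forward and angular speed (v, omega) for a differential drive.
    pose: (x, y, theta); v_des: desired 2-D velocity of the offset point."""
    x, y, theta = pose
    # Inverse of the offset point's Jacobian: [v, omega] = J^{-1} v_des
    J_inv = np.array([[np.cos(theta),      np.sin(theta)],
                      [-np.sin(theta) / d, np.cos(theta) / d]])
    v, omega = J_inv @ np.asarray(v_des)
    return v, omega
```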

DisplaySwarm: A robot swarm displaying images
J. Alonso-Mora, A. Breitenmoser, M. Rufli, S. Haag, G. Caprari, R. Siegwart, P. Beardsley. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Symposium: Robot Demonstrations, 2011.