Human-Robot Cooperative Dynamic Manipulation of Complex Pendulum-like Objects

2016.04.25

With robots entering everyday life, cooperative tasks involving physical human-robot interaction gain importance. Besides transferring objects along pure point-to-point trajectories, humans often apply repetitive swinging motions that allow energy to be injected incrementally when manipulating bulky and heavy objects. In our work, we distinguish two extremes of flexible object swinging: an oscillating entity formed by the partners' arms together with a rigid object, and a pendulum-like object that can oscillate on its own. This video shows our work on cooperative swinging of complex pendulum-like objects, which, besides the desired oscillation, also exhibit disturbance oscillations.

This video shows experiments with a lightweight robot arm and a trapezoidal pendulum. Active disturbance damping is compared to pure excitation of the desired oscillation while the other end of the pendulum is either fixed to the environment or controlled by a human interaction partner. In human-robot interaction, the human acts as the leader and the robot as the follower. By imitating the human's energy flow, the robot follower actively assists with the swing-up task.
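
The controller itself is not detailed in the video; as a rough illustration of the energy-based excitation idea, the sketch below drives the pivot (the robot's hand) of a simple point-mass pendulum. All symbols and values (m, l, the gain k, the desired energy E_des) are illustrative and not taken from the experiments.

```python
import numpy as np

# Illustrative parameters, not values from the experiments
m, l, g = 1.0, 0.5, 9.81   # pendulum mass [kg], length [m], gravity [m/s^2]
k = 2.0                    # excitation gain
E_des = 0.6 * m * g * l    # desired oscillation energy (example value)

def pendulum_energy(theta, theta_dot):
    """Swing energy of a point-mass pendulum, zero at the hanging rest pose."""
    return 0.5 * m * (l * theta_dot) ** 2 + m * g * l * (1.0 - np.cos(theta))

def hand_acceleration(theta, theta_dot):
    """Energy-shaping excitation of the pivot: while the swing energy is below
    the desired level, accelerate the hand against the bob's horizontal
    velocity to pump energy in; above it, the sign flips and energy is damped."""
    E = pendulum_energy(theta, theta_dot)
    return k * (E - E_des) * np.sign(theta_dot * np.cos(theta))
```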

Interactive Urban RObot Navigating around Marienplatz in Munich

2015.01.01

This video presents the results of the Interactive Urban RObot project. The developed robot IURO navigates autonomously around the city center of Munich and initiates interaction with pedestrians. IURO approaches a person and asks for help to find the way to Marienplatz, a square in the Munich city center. People grasp a microphone from IURO's compartment and start chatting with the robot. IURO opens the conversation with a question about the counterpart's mood. By emotionally adapting to the reported mood, the robot gains the person's full attention and successfully retrieves the information it is looking for. Besides verbal input, IURO also asks the interaction partner to point in the direction of Marienplatz. People visibly enjoy the experience of interacting with an autonomous mobile robot.

Resonance-driven dynamic manipulation: Ultra-fast ball dribbling

2015.01.01

The video presents resonance-driven dynamic manipulation: dribbling and juggling tasks with an elastic beam. Vibration modes of the beam are excited to control a periodic orbit of a ping-pong ball. Changes in the curvature of the beam influence the overall task dynamics and stability.
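
The controller behind the demonstration is not spelled out in the video; the sketch below only illustrates the resonance principle on a single damped vibration mode: a small periodic input at the mode's natural frequency builds up a comparatively large oscillation. The modal frequency, damping, and forcing are made-up example values, not identified beam parameters.

```python
import numpy as np

# Illustrative modal parameters, not identified values of the beam
omega_n = 2 * np.pi * 5.0   # natural frequency of the excited mode [rad/s]
zeta = 0.02                 # modal damping ratio
dt, T = 1e-3, 4.0           # integration step and duration [s]

q, q_dot = 0.0, 0.0         # modal coordinate and velocity
amplitude = 0.0
for i in range(int(T / dt)):
    t = i * dt
    u = np.sin(omega_n * t)  # drive the mode exactly at resonance
    q_ddot = u - 2 * zeta * omega_n * q_dot - omega_n ** 2 * q
    q_dot += q_ddot * dt     # semi-implicit Euler integration step
    q += q_dot * dt
    amplitude = max(amplitude, abs(q))

# Near resonance the amplitude approaches 1 / (2 * zeta * omega_n**2), i.e. a
# small periodic input builds up a large oscillation of the excited mode.
print(f"peak modal amplitude: {amplitude:.4f}")
```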

Robotic Nonprehensile Box Throwing

2015.01.01

The video presents a robust nonprehensile planar throw of an object to a goal destination. Three different trajectories are presented. The objects are released from a goal manifold, and the 6-dimensional goal state is depicted as a black square box. An optimal motion planner generates trajectories that are robust to unmodeled parameters.
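
The optimal planner itself is not described here; what the throw ultimately relies on is the ballistic relation between a release state and the landing point. Below is a minimal planar sketch assuming a drag-free point mass and an illustrative flight time, not the planner used in the video.

```python
import numpy as np

g = 9.81  # gravity [m/s^2]

def release_velocity(p_release, p_goal, t_flight):
    """Planar point-mass ballistics: velocity the object must have at release
    so that, after a free flight of t_flight seconds, it arrives at p_goal.
    Points are (x, z) with z pointing up; drag and object rotation are ignored."""
    p_release, p_goal = np.asarray(p_release, float), np.asarray(p_goal, float)
    vx = (p_goal[0] - p_release[0]) / t_flight
    vz = (p_goal[1] - p_release[1]) / t_flight + 0.5 * g * t_flight
    return np.array([vx, vz])

# Example: release 0.8 m above the goal, 1.2 m away, with a 0.5 s flight (made-up values)
print(release_velocity(p_release=(0.0, 1.0), p_goal=(1.2, 0.2), t_flight=0.5))
```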

Robotic Nonprehensile Box Catching

2015.01.01

The video presents the simulation and the experiment of direct catching of a rigid object. The multiple phases of catching are shown in sequence. A compound approach motion follows the free flight of the object. The optimization of contact forces is shown together with the resulting balancing trajectory.
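
As a rough illustration rather than the actual catching pipeline, the sketch below predicts the object's free flight as a drag-free point mass and picks a catch point on the predicted path; the initial conditions and the catch height are made-up example values.

```python
import numpy as np

g = np.array([0.0, 0.0, -9.81])  # gravity [m/s^2]

def predict_flight(p0, v0, dt=0.01, z_floor=0.0):
    """Drag-free ballistic prediction of the object's free flight: returns the
    sampled positions until the object would reach the floor."""
    p, v, traj = np.asarray(p0, float), np.asarray(v0, float), []
    while p[2] > z_floor:
        traj.append(p.copy())
        v = v + g * dt
        p = p + v * dt
    return np.array(traj)

def catch_point(traj, z_catch):
    """Pick the first predicted sample at or below the desired catch height,
    i.e. a place where the robot could intercept the object."""
    below = traj[traj[:, 2] <= z_catch]
    return below[0] if len(below) else traj[-1]

# Example: object tossed towards the robot from 1.5 m height (made-up values)
traj = predict_flight(p0=(0.0, 0.0, 1.5), v0=(1.0, 0.0, 2.0))
print(catch_point(traj, z_catch=1.0))
```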

Human-Robot Cooperative Swinging: Evaluation in Virtual Reality

2014.08.19

The video shows our first work on cooperative swinging of pendulum-like objects. An energy-based control concept enables a robot to cooperate with a human in a goal-directed swing-up task. We conducted virtual reality experiments to compare effort sharing and performance of mixed human-human and human-robot dyads. In particular, in this video a human leader interacts with a simulated robot follower. Whereas the leader knows the desired pendulum energy, i.e. sees a goal sphere, the robot follower does not know the desired pendulum energy and has to infer the human's intention from the interaction in order to contribute to the task.
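
The video does not detail how the follower infers the leader's intention; one plausible minimal cue, sketched below, is the mechanical power the human injects at the grasp point. The deadband is an illustrative value, not one from the study.

```python
import numpy as np

def interaction_power(f_hand, v_hand):
    """Mechanical power the leader injects at the grasp point: positive when
    energy flows into the pendulum, negative when the leader damps it."""
    return float(np.dot(f_hand, v_hand))

def follower_intent(f_hand, v_hand, deadband=0.05):
    """Crude intention inference: excite (+1), damp (-1) or hold (0) depending
    on the sign of the leader's energy flow (deadband is an example value)."""
    p = interaction_power(f_hand, v_hand)
    if p > deadband:
        return +1
    if p < -deadband:
        return -1
    return 0

# Example: the leader pushes along the current hand motion -> inject energy
print(follower_intent(f_hand=[2.0, 0.0, 0.0], v_hand=[0.3, 0.0, 0.0]))  # -> 1
```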

The Autonomous City Explorer

2008.10.02

In this video we present the Autonomous City Explorer (ACE) project. Its goal was to create a robot capable of navigating unknown urban environments without the use of GPS data or prior map knowledge. The robot had to find its way from the university campus to the town square of Munich solely by interacting with pedestrians and building a topological representation of its surroundings.

This video shows the necessary ingredients: successful low-level navigation on sidewalks, human detection and tracking, and information retrieval from pedestrians.

Increasing helpfulness towards a robot

2012.11.14

This video illustrates how emotional adaptation strategies affect human-robot interaction. Specifically, we focus here on proactively inducing emotional similarity in the human to increase her prosocial behavior towards the robot.

Human Approach in Dynamic Urban Environments

2012.04.02

Since interaction with humans is the main information source for the IURO robot, it needs to get close to a person before starting a conversation. To fulfil this task, a trajectory planner is needed that allows the robot to approach a human. In this setting, predictability and readability of the robot's actions must be taken care of so that the approach behavior appears natural and unobtrusive and raises social acceptance. In the IURO project, state-of-the-art optimization methods for trajectory generation are combined with results from user studies to develop a motion planning algorithm that enables the robot to approach walking and standing persons in a dynamic urban environment while incorporating social constraints.
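
The concrete planner is not described in the video; the sketch below merely illustrates how social constraints can enter an optimization-based planner as soft costs on a candidate path. The weights and the comfort radius are illustrative values, not those used in IURO.

```python
import numpy as np

def approach_cost(path, goal, person, w_goal=1.0, w_smooth=0.5, w_social=2.0,
                  comfort_radius=1.2):
    """Soft-cost evaluation of a candidate approach path (N x 2 waypoints):
    end near the interaction pose, keep the path smooth, and stay outside the
    person's comfort zone. Weights and radius are illustrative values only."""
    path, goal, person = (np.asarray(a, float) for a in (path, goal, person))
    goal_term = np.linalg.norm(path[-1] - goal) ** 2
    smooth_term = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1) ** 2)
    dist = np.linalg.norm(path - person, axis=1)
    social_term = np.sum(np.maximum(0.0, comfort_radius - dist) ** 2)
    return w_goal * goal_term + w_smooth * smooth_term + w_social * social_term

# Example: straight-line candidate path towards a point in front of the person
path = np.linspace([0.0, 0.0], [2.0, 1.0], num=20)
print(approach_cost(path, goal=[2.0, 1.0], person=[3.0, 1.0]))
```

A trajectory optimizer would then minimize such a cost over the waypoints, subject to the robot's kinematic limits and the predicted motion of the person.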

Miscommunication Handling in HRI

2012.02.24

As communication quality in public spaces is often impaired by a noisy environment, it is difficult for a robot to retrieve missing task information from humans. Therefore, different dialog strategies are modeled and evaluated with respect to user experience and error-handling capabilities in HRI in order to cope with erroneous speech recognition. Since correct recognition of spoken language is a bottleneck for real-world dialog systems, special emphasis is placed on adapting the dialog strategy to the conditions under which the dialog is held, in particular to variable speech recognition performance. Experimental evaluations are conducted in a fully automated indoor setting and in a Wizard-of-Oz outdoor setting. Results indicate that a critical point exists up to which the use of requests for handling miscommunication improves the user experience of a dialog strategy.
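
The evaluated strategies themselves are not spelled out here; the sketch below only illustrates the general idea of adapting the dialog action to the speech recognizer's confidence. The thresholds are illustrative values, not those from the study.

```python
def dialog_action(asr_confidence, accept=0.8, confirm=0.4):
    """Choose how to react to a recognized utterance depending on the ASR
    confidence: accept it, ask for explicit confirmation, or request a
    repetition. The thresholds are illustrative example values."""
    if asr_confidence >= accept:
        return "accept"
    if asr_confidence >= confirm:
        return "confirm"        # e.g. "Did you say 'turn left at the church'?"
    return "repeat_request"     # e.g. "Sorry, could you please repeat that?"

print(dialog_action(0.55))  # -> "confirm"
```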

How can a robot find its way in a city?

2012.02.29

For humans it is easy to walk down the street, wait for the green light at the crossing, enter the grocery store to buy some apples, and afterwards find the way back home. But it is often these things, which we as humans consider simple, that robots have big trouble dealing with. The reason is that we are so good at many tasks, such as recognizing where the sidewalk is, staying out of the way of other pedestrians, or finding the entrance to a shop, that we think they are easy. Robots, on the other hand, are not so good at understanding their environment. For instance, if we ask a robot simple questions like where we should cross the street, or what color the building in front is, it will have a hard time giving an answer.

For this reason, one of the research lines within the IURO project is to investigate how a robot can acquire this semantic level of understanding of the environment. We can do this by letting the robot learn about the different objects in the environment. For example, if we tell the robot that a specific region in an image is a tree, after multiple examples it will be able to associate specific texture and color attributes with the concept of a tree. The next time it sees similar color and texture patterns, it will be able to recognize them as a tree. In practice, recognizing objects such as cars and buildings, or surfaces such as the sidewalk and the sky, is a challenging task, because their appearance varies with viewpoint, lighting conditions, and many other factors, making it hard for robots to generalize.

Below we see a video of the robot building a semantic map of the street as it moves down the sidewalk, where different colors represent different categories. Of course, it doesn't always get it right, and sometimes the robot hallucinates and sees cars inside of buildings.
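
As a rough illustration of the learning idea described above, and not the pipeline actually used on IURO, the sketch below trains an off-the-shelf classifier on very simple per-region color statistics; the features, the classifier, and the synthetic training data are all stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def region_features(pixels):
    """Very simple appearance descriptor of an image region: mean and standard
    deviation of its RGB values (a stand-in for richer color/texture cues)."""
    pixels = np.asarray(pixels, float).reshape(-1, 3)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Stand-in training data: in the real system these would be hand-labeled
# regions of street images, e.g. "this region is a tree".
rng = np.random.default_rng(0)
tree_regions = [rng.normal([60, 120, 50], 20, (400, 3)) for _ in range(50)]
sky_regions = [rng.normal([180, 200, 240], 10, (400, 3)) for _ in range(50)]
X = np.stack([region_features(r) for r in tree_regions + sky_regions])
y = ["tree"] * 50 + ["sky"] * 50

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At run time every region of a new image gets a category label; coloring the
# regions by their label is what produces the semantic map shown in the video.
new_region = rng.normal([60, 120, 50], 20, (400, 3))   # looks tree-like
print(clf.predict([region_features(new_region)]))       # -> ['tree']
```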