Emerging Control Techniques

Dynamical Processes over Social Networks: Modeling, Analysis and Control

Social networks, constituted by social agents and their relations, are ubiquitous in our daily lives. Dynamic processes over such networks, which are closely related to our social activities and decision making, are prominent research topics in both theory and practice.

In this research, two typical dynamic processes over social networks, information epidemics and opinion dynamics, are investigated with the aim of bridging social network analysis and control theory. Analogous to epidemics spreading in a population, information epidemics describe information diffusion in social networks. The existence of the endemic and disease-free equilibria is studied thoroughly, as well as their stability conditions. Additionally, the desired diffusion performance is achieved via a novel optimal control framework, which is a promising step toward solving the open problem of optimal control for information epidemics. Apart from diffusion processes, opinion dynamics, which describe the evolution of individual opinions under social influence, are investigated from a control-theoretic point of view. We focus on opinion dynamics over social networks with cooperative-competitive interactions and address the existence question: under what conditions do distributed protocols exist such that the opinions polarize, reach consensus, or are neutralized? Particular emphasis is placed on the joint impact of the dynamical properties of (both homogeneous and heterogeneous) individual opinions and the interaction topology under static diffusive coupling protocols.
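
The two equilibrium types mentioned above can be illustrated on a toy scalar SIS-type model (a minimal sketch with made-up rates, not the actual network model studied in this project):

```python
# Scalar SIS-type information epidemic (illustrative rates):
#   x' = -delta*x + beta*(1 - x)*x,
# x = fraction of informed agents, beta = spreading rate, delta = forgetting rate.
# For beta > delta the disease-free equilibrium x = 0 loses stability and the
# endemic equilibrium x* = 1 - delta/beta attracts the dynamics.

def simulate_sis(beta, delta, x0=0.01, dt=0.01, steps=20_000):
    x = x0
    for _ in range(steps):  # forward-Euler integration
        x += dt * (-delta * x + beta * (1.0 - x) * x)
    return x

endemic = simulate_sis(beta=0.5, delta=0.2)       # beta > delta
disease_free = simulate_sis(beta=0.1, delta=0.2)  # beta < delta
print(endemic)       # close to the endemic equilibrium 1 - 0.2/0.5 = 0.6
print(disease_free)  # close to the disease-free equilibrium 0
```

In the full network setting, x becomes a vector of per-agent infection probabilities and the threshold involves the spectral radius of the contact graph, but the same two equilibria appear.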

Contact / Publication details: Fangzhou Liu

Adaptive Control

Robust Adaptive Control for Euler-Lagrange Systems

The dynamics of a robot are nonlinear and depend on its inertial parameters, friction, and geometry. Various controllers have been proposed for manipulator control, including robust control, adaptive control, model predictive control, and disturbance-observer-based control. The challenges in controlling a manipulator include unmodeled dynamics, torque saturation, parameter uncertainties, controller tuning, and under-actuation.

We designed a robust adaptive controller using an inverse-optimal control method with quantitative performance analysis. The controller acts as an adaptive computed-torque controller that uses feedback linearization but requires no prior knowledge of the system parameters or the regressor matrix. The adaptive controller is validated on a 3-degree-of-freedom and a 7-degree-of-freedom robot manipulator.
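
The idea behind an adaptive computed-torque law can be sketched on a single-joint toy plant; all gains and parameters below are illustrative assumptions, not those of the actual controller:

```python
import numpy as np

# Toy single-joint plant m*q'' = u with unknown inertia m (illustrative values).
dt, T = 1e-3, 20.0
m_true, m_hat = 2.0, 0.5          # true inertia vs. initial estimate
lam, K, gamma = 2.0, 5.0, 0.5     # surface slope, feedback gain, adaptation gain
q, qd = 1.0, 0.0                  # initial position and velocity

for k in range(int(T / dt)):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    e, ed = q - q_des, qd - qd_des
    s = ed + lam * e                  # composite tracking error
    a_r = qdd_des - lam * ed          # reference acceleration
    u = m_hat * a_r - K * s           # computed-torque law with estimated inertia
    m_hat += dt * (-gamma * a_r * s)  # gradient adaptation law
    q, qd = q + dt * qd, qd + dt * (u / m_true)

print(abs(q - np.sin(T)))  # tracking error decays despite the wrong initial inertia
```

A Lyapunov argument with V = (m/2)s^2 + (1/(2*gamma))*(m_hat - m)^2 gives dV/dt = -K*s^2, so the tracking error converges without ever identifying m exactly.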

Contact / Publication details: Rameez Hayat

Advanced Learning-Q Control

Feedback control exploiting machine learning and artificial intelligence concepts is a promising direction for data-driven adaptive control. An important aspect is the stability of the closed loop; it should be imposed on the learning system as a basic prerequisite. This is of particular importance for systems realized on hardware, for example robotic manipulator control systems. Our approach to this problem is based on the Q-parameterization of stabilizing controllers. By a suitable factorization of both plant and controller models, one can characterize a set of controllers that stabilize the feedback system merely by designing a stable parameter system. The analysis of the corresponding stability and robustness properties is thereby greatly simplified, albeit at the cost of requiring a model for control design. Given the general dynamic structure of robot manipulators, however, we think that such domain knowledge should not be ignored but rather turned to advantage in designing intelligent control strategies. Function approximators that learn the parameter Q in a robust performance framework were already employed in the 1990s by T. T. Tay, J. B. Moore, and co-workers. Our current work extends this so-called learning-Q framework to provide stability guarantees for advanced machine learning methods in robotics, e.g. reinforcement learning. We also work on generalizing the approach to certain classes of nonlinear systems.
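
For a stable plant, the Q-parameterization admits a particularly simple internal-model-control realization; the following sketch with an assumed first-order plant illustrates why stability of Q alone suffices:

```python
# Illustrative simplified case (stable plant): every stabilizing controller can
# be written via a stable parameter Q, implemented here in internal-model form
#   u = Q * (r - (y - y_model)),
# so the closed loop from r to y is simply P*Q, stable whenever Q is stable.

a, b = 0.8, 0.5            # stable first-order plant y+ = a*y + b*u
p_dc = b / (1.0 - a)       # plant DC gain = 2.5
q = 1.0 / p_dc             # static Q chosen for unit closed-loop DC gain

y = y_model = 0.0
r = 1.0
for _ in range(200):
    d_hat = y - y_model            # model-mismatch / disturbance estimate
    u = q * (r - d_hat)            # Q-parameterized (IMC) control law
    y = a * y + b * u              # true plant
    y_model = a * y_model + b * u  # internal model run in parallel

print(y)  # converges to r = 1.0
```

Learning then amounts to shaping Q (here a static gain, in general a dynamic system or function approximator) while closed-loop stability is guaranteed by construction.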

Contact / Publication details: Stefan Friedrich

Adaptive and Learning Control of Hybrid Systems

Many physical systems exhibit hybrid phenomena, namely continuous state evolution and discrete mode switching. The main challenges in designing controllers for hybrid systems are uncertainty and switching behavior. If the uncertainty is large or the system parameters vary, a single fixed controller may not be able to stabilize the whole system. In such cases, the uncertainty and the varying parameters should be captured by learning approaches, and the controller is required to be adaptive. Furthermore, the switching behavior makes stabilizing the overall system even harder.

The goal of this work is to develop intelligent control approaches for hybrid systems with uncertainties that are also compatible with learning methods. Applications include robot impacts (walking, jumping, pick-and-place), soft robotics, flight control, etc.


Contact / Publication details: Tong Liu

Optimal Control, Learning and Optimization

Optimal Motion Planning for HRI

While learning control policies for human movement allows us to predict and imitate human motions, these policies can be exploited fully only if they are incorporated into robot motion plans effectively. To achieve natural interaction, the human's perception of a robotic agent's movement has to be considered as well. In addition to being safe and efficient, robot movement should be predictable for the human partner. In essence, it is necessary to integrate not only physical but also social constraints into robot motion planning. Current motion planners largely ignore this crucial objective.

We introduce (i) a stochastic trajectory optimization framework for safe and effective dyadic interaction, and (ii) a policy improvement formulation that adapts the robot motion to the feedback of the human partner. These motion planners are made possible by combining model-based optimization with data-driven methods and by updating the policy online using perception data obtained during interaction. Overall, these two novel approaches demonstrate the significance of human-in-the-loop planning for safe and intuitive close-proximity human-robot interaction (HRI). In addition, they pave the way for a long-term learning framework for personalized assistance.
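
The flavor of such a stochastic trajectory optimizer can be sketched on a simplified 1-D toy problem (a single smooth perturbation basis and made-up costs, not the actual planner):

```python
import numpy as np

# Sampling-based trajectory refinement in the spirit of STOMP/PI2-style
# optimizers: sample perturbations of the current plan, score them, and update
# with exponentiated-cost weights.  All numbers here are illustrative.
rng = np.random.default_rng(0)
N, iters, K, sigma, lam = 21, 80, 40, 0.05, 0.1
t = np.linspace(0.0, 1.0, N)
basis = np.sin(np.pi * t)            # smooth perturbation shape, zero at both endpoints

def cost(traj):
    smooth = 50.0 * np.sum(np.diff(traj, 2) ** 2)                # effort / smoothness
    obstacle = 5.0 * np.exp(-(traj[N // 2] - 0.5) ** 2 / 0.005)  # obstacle at the midpoint
    return smooth + obstacle

traj = t.copy()                       # initial guess: straight line through the obstacle
for _ in range(iters):
    alphas = sigma * rng.standard_normal(K)        # sampled perturbation amplitudes
    costs = np.array([cost(traj + a * basis) for a in alphas])
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()                                   # exponentiated-cost weights
    traj = traj + (w @ alphas) * basis             # probability-weighted update

print(cost(traj), cost(t))  # refined plan is much cheaper than the straight line
```

In the interaction setting, the obstacle term is replaced by human-related safety and legibility costs, and the update runs online as new perception data arrive.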

Contact / Publication details: Ozgur S. Oguz

Classification and Learning of Human-Human Interaction Behavior

Even though modeling human motor control informs us about the characteristics of human movement, non-verbal close-proximity human-human interaction (HHI) behavior requires further investigation due to the dynamic nature of motion planning. In essence, a permanent action-perception loop runs for both partners during dyadic interaction. Humans rely not only on physical constraints but also on social signals during collaboration. A natural dyadic interaction cannot be achieved solely by generating collision-free trajectories. As the robotic agent is expected to collaborate with humans as skillfully as humans interact with each other, these latent features have to be incorporated into robot control and motion generation frameworks. Similar to finding control models for human movement in isolation, can such effective interaction behavior skills be learned for autonomous systems?

Our research contributes a novel categorization of interaction scenarios and provides two imitation learning formulations to model close-proximity interaction movements. These studies encourage future work on exploring non-physical joint movement and on building control policies for dyadic interaction systematically and comprehensively in a unified way.

Contact / Publication details: Ozgur S. Oguz

Safe Learning for Event Driven Hybrid Systems

While learning techniques have achieved impressive performance in various control tasks, the intermediate policies encountered during the learning process may be unsafe and hence lead the system to dangerous behavior. In the literature, there are mainly two ways to impose safety in learning algorithms: one is to modify the cost function, and the other is to guide the exploration process.

We aim to utilize insights from invariance control and supervisory control to impose safety guarantees on learning-based controllers in event-driven hybrid systems. A supervisor is constructed from the safe set obtained via reachability analysis or an invariance function; this supervisor guides the learning process so that the system remains inside the safe region while the learning-based controller searches for an optimal policy.
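
The supervisor concept can be sketched on a toy integrator; the safe set, dynamics, and bounds below are illustrative assumptions, not those of the actual framework:

```python
import numpy as np

# A supervisor that filters the actions of an exploring learner so the simple
# integrator x+ = x + u never leaves the safe set |x| <= 1 (illustrative toy).
rng = np.random.default_rng(0)
X_MAX, U_MAX = 1.0, 0.3

def supervise(x, u_learner):
    """Override the proposed action only when it would leave the safe set."""
    u = np.clip(u_learner, -U_MAX, U_MAX)
    nxt = x + u
    if abs(nxt) > X_MAX:                                      # predicted violation
        u = np.clip(np.sign(nxt) * X_MAX - x, -U_MAX, U_MAX)  # project onto the boundary
    return u

x, trace = 0.0, []
for _ in range(1000):
    u_learner = rng.uniform(-U_MAX, U_MAX)   # stand-in for an exploring learned policy
    x = x + supervise(x, u_learner)
    trace.append(x)

print(max(abs(v) for v in trace))  # the trajectory never leaves the safe set
```

The learner is free to explore inside the safe region; the supervisor intervenes only on the minimal set of actions that would cause a violation.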

Contact / Publication details: Zhehua Zhou

Identification and Fault Diagnosis

Switching Hyperplane Estimation for Hybrid Systems

A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior. Piecewise affine (PWA) systems are a special class of hybrid systems that can describe switching behavior. The switching signal of a PWA system is not exogenous but depends on a partitioning of the state-input space. Thus, estimating the switching hyperplane is a critical step in the identification of PWA systems.

We plan to design an algorithm that estimates the switching hyperplane using machine learning methods such as support vector machines and extreme learning machines. The algorithm should require little prior information about the system, e.g. neither the orders nor the number of the subsystems, and should work with unlabeled state-input vectors. Once hyperplane estimation for PWA systems is achieved, the algorithm will be extended to other classes of hybrid systems.
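
The core hyperplane-estimation step can be sketched with a plain perceptron as a stand-in for the SVM/ELM methods mentioned above; note that this toy version assumes labeled, well-separated samples, whereas the planned algorithm works without labels:

```python
import numpy as np

# Estimate a switching hyperplane w.x + b = 0 from labeled state-input samples.
rng = np.random.default_rng(1)
w_true, b_true = np.array([1.0, -1.0]), 0.2       # assumed true switching hyperplane
X = rng.uniform(-1, 1, size=(400, 2))             # state-input samples
X = X[np.abs(X @ w_true + b_true) > 0.1]          # keep a margin for clean separation
y = np.sign(X @ w_true + b_true)                  # mode labels induced by the hyperplane

w, b = np.zeros(2), 0.0
for _ in range(1000):                             # perceptron passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:                # misclassified sample -> update
            w += yi * xi
            b += yi

acc = np.mean(np.sign(X @ w + b) == y)
print(acc)  # separable data: the estimated hyperplane classifies all modes correctly
```

An SVM would additionally maximize the margin, which matters when the sampled state-input vectors crowd the true switching surface.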

Contact / Publication details: Yingwei Du

Sliding Mode Control and Human-Robot Collaboration

Sliding mode control is a popular variable structure control method for nonlinear systems. It has been widely applied to mechatronic systems due to its excellent robustness against external disturbances and its simple, decoupled control structure. Recently, sliding mode techniques have been developed to solve safety problems in human-robot collaboration, including collision detection and classification, post-collision control, and human motion prediction. Our work is dedicated to achieving a safety control framework for human-robot collaboration using sliding mode controllers and observers. External collisions are detected online without force sensors. Safe controllers are designed to guarantee the safety of humans and robots. Human motions are taken into consideration to enable a friendly human-robot collaboration.
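
A minimal first-order sliding mode controller on an illustrative toy system (not the collaboration framework itself) shows the characteristic robustness against bounded matched disturbances:

```python
import numpy as np

# Disturbed double integrator x'' = u + d(t) with |d| <= 0.5 (illustrative).
# Sliding surface s = x' + lam*x; switching law u = -lam*x' - k*sign(s), k > |d|,
# drives s to zero in finite time, after which x decays along the surface.
dt, T = 1e-3, 10.0
lam, k = 2.0, 2.0
x, xd = 1.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    d = 0.5 * np.sin(5.0 * t)          # bounded matched disturbance, unknown to u
    s = xd + lam * x                   # sliding variable
    u = -lam * xd - k * np.sign(s)     # sliding mode control law
    x, xd = x + dt * xd, xd + dt * (u + d)

print(abs(x))  # regulated to the origin despite the unknown disturbance
```

The discontinuous sign term causes the well-known chattering; practical designs soften it with boundary layers or higher-order sliding modes, which is also relevant when the observer side is used for sensorless collision detection.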

Contact / Publication details: Zengjie Zhang


Stochastic Model Predictive Control in Autonomous Driving

Autonomous vehicles face the challenge of providing efficient transportation while safely maneuvering in an uncertain environment. Uncertainties arise in various forms, among them sensor inaccuracy and the ego vehicle's inability to perfectly predict the future maneuvers and motion of surrounding vehicles, also known as target vehicles. For example, the ego vehicle must be prepared for a sudden lane change by a target vehicle.

We develop Model Predictive Control (MPC) methods to advance autonomous driving. In MPC, an optimization problem is solved at every sampling time step over a finite prediction horizon. The result is a sequence of optimal input values for the considered prediction horizon. Only the first input is applied to the vehicle before the next optimization step begins, resulting in a closed-loop controller. MPC effectively handles constraints that the vehicle must meet, e.g. lane boundaries, speed limits, and collision avoidance.

We tackle uncertainties in the environment by applying Stochastic Model Predictive Control (SMPC). Accounting for all uncertainties may result in overly conservative driving, greatly limiting performance, especially in urban driving situations; in some scenarios a collision is even inevitable. In SMPC, constraints are adapted into so-called chance constraints. These are not required to hold always; instead, a probability parameter specifies the required probability of constraint satisfaction given the system uncertainty. In other words, a lower probability parameter allows more frequent constraint violations but increases performance.
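
For a scalar constraint with additive Gaussian prediction error, the chance-constraint reformulation reduces to tightening the bound by a quantile-dependent margin; the numbers below are illustrative, not from our controller:

```python
import statistics
import random

# Hard constraint y <= y_max with prediction error w ~ N(0, sigma^2):
#   P(y_hat + w <= y_max) >= p   <=>   y_hat <= y_max - sigma * Phi^{-1}(p),
# i.e. a deterministic constraint tightened by the Gaussian quantile.
y_max, sigma = 2.0, 0.3
for p in (0.80, 0.95, 0.99):
    margin = sigma * statistics.NormalDist().inv_cdf(p)   # quantile-based tightening
    y_hat = y_max - margin                                # plan exactly on the tightened bound
    rng = random.Random(0)
    hits = sum(y_hat + rng.gauss(0.0, sigma) <= y_max for _ in range(100_000))
    print(p, round(margin, 3), hits / 100_000)  # empirical satisfaction rate tracks p
```

Raising p enlarges the safety margin and thus the conservatism of the resulting motion, which is exactly the trade-off described above.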

Contact / Publication details: Tim Brüdigam

Fast and Efficient Optimal Control/Model Predictive Control framework for Autonomous Driving

In the field of autonomous cars, it is crucial that the controlled car can react to a highly dynamic environment, e.g. other vehicles or pedestrians. Since other traffic participants cannot be predicted beforehand, the car has to adapt quickly to changing situations. Furthermore, besides safety, other aspects such as fuel consumption and passenger comfort should be considered. Optimization and optimal control methods are therefore widely used, since they allow these aspects to be considered by defining a suitable cost function to be minimized. However, the bottleneck is that optimization requires heavy computational effort, which can make it inapplicable in online applications.

In this work, we investigate different approaches to improve the efficiency of optimal control/MPC methods. Model reduction and simplification are also investigated to reduce the computation time. As applications, we are interested in trajectory tracking and reaching motions for car-like systems in dynamic environments, e.g. avoiding other cars on the fly or overtaking a slower car.

Contact: Khoi Hoang Dinh

Manipulability Analysis of Redundant Robots
Directional manipulability map of a redundant robot structure

Redundant robots offer many interesting opportunities in manipulation tasks. This redundancy comes with the mathematical problem of infinitely many joint configurations realizing a desired end-effector pose. The performance capabilities of the robot, however, rely heavily on the joint configuration, especially when joint limits are considered. In this context we are interested not only in finding the single best joint configuration for a task, but in evaluating all possible solutions at once. We therefore developed an analytical approach for finding a manipulability map for a specific 7-DoF serial robot structure. Knowledge of this manipulability map can answer many interesting research questions, e.g. the optimal mounting pose of the robot, the optimal end-effector configuration, the optimal kinematic structure, etc.
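
The underlying manipulability measure can be sketched on a planar two-link arm (an illustrative toy example, not the 7-DoF analysis):

```python
import numpy as np

# Yoshikawa manipulability w = sqrt(det(J J^T)) quantifies how freely the
# end-effector can move in a given joint configuration; it vanishes at
# kinematic singularities.  Unit link lengths are assumed for illustration.

def jacobian(q1, q2, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(q1, q2):
    J = jacobian(q1, q2)
    # abs() guards against tiny negative round-off near singular configurations
    return np.sqrt(abs(np.linalg.det(J @ J.T)))

print(manipulability(0.3, np.pi / 2))  # elbow at 90 deg: w = l1*l2 = 1 (best)
print(manipulability(0.3, 0.0))        # fully stretched arm: w = 0 (singular)
```

For a redundant arm, evaluating this measure over the self-motion manifold of each end-effector pose yields exactly the kind of manipulability map described above.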

Contact / Publication details: Gerold Huber

Dynamics of Highly Deformable Objects
Dynamic evolution of large planar beam deformations

There is a vast literature on the mathematical treatment of elastic bodies. While the classical works offer a very analytical treatment, many of the more recent ones make heavy use of modern computing power for numerical treatment. The motivation for this study comes from a manipulation point of view. While elasticity in dynamic applications is usually avoided, compensated, or suppressed, we want to explore the field of exploiting elastic dynamics. Known ideas exploit elasticity from a mechanical design viewpoint, for passive compliance or safety features. We are interested in how highly deformable elastic objects can be exploited in an active manner through sophisticated control inputs. As a first case study we consider a classical thin beam that is clamped on one side and free on the other. In contrast to the cases studied in the literature, we allow very large deformations.

Contact / Publication details: Gerold Huber

Adaptive Action Selection in Human-Robot Collaboration

This research focuses on industrial assembly processes with mixed human-robot teams, which are typically well understood and defined in advance. In order to seamlessly interact with humans, an autonomous agent is required to use the basic rules of the ongoing task to plan ahead over the possibilities granted to each agent and to adapt its actions on-the-fly. The main challenge when interacting with a human is that, unlike robots, humans do not always follow the same sequence of actions even when a detailed plan is provided. This must therefore be incorporated into the planning, allocation, and execution phases. It is thus of utmost importance to analyze the mutual interference of the agents' actions. The sequence of actions can then be adjusted with respect to the human coworkers, unlike classic robot planning in which a robot follows a predetermined sequence of actions.

We propose an assembly planning and execution framework that incorporates well-understood methods from interactive game theory, optimal planning, and multi-agent reinforcement learning to grant robots the ability not only to adapt to, but to actually cooperate with, human co-workers in joint assembly processes.

Contact / Publication details: Volker Gabler