Seminars, Minicourses & Lectures

The IRIM Seminar Series | April 14, 2021 | 12:15PM EDT
Conflict-Aware Risk-averse and Safe Reinforcement Learning: A Meta-Cognitive Learning Framework
Hamidreza Modares | Assistant Professor; Department of Mechanical Engineering, Michigan State University
While the success of reinforcement learning (RL) in computer games is an impressive engineering feat, safety-critical systems such as unmanned vehicles must operate in the real world, which makes the entire enterprise unpredictable. Standard RL practice generally implants pre-specified performance metrics or objectives into the RL agent to encode the designer's intentions and preferences across different, sometimes conflicting, goals (e.g., cost efficiency, safety, speed of response, and accuracy). Optimizing pre-specified performance metrics, however, cannot provide safety and performance guarantees across the vast variety of circumstances the system might encounter in non-stationary and hostile environments. In this talk, I will discuss novel metacognitive RL algorithms that learn not only a control policy that optimizes accumulated reward, but also which reward functions to optimize in the first place, so as to formally assure safety with good-enough performance. I will present safe RL algorithms that adapt the RL agent's focus of attention across its performance and safety objectives to resolve conflicts and thus assure the feasibility of the reward function in a new circumstance. Moreover, model-free RL algorithms will be presented that solve the risk-averse optimal control (RAOC) problem, optimizing the expected utility of outcomes while reducing the variance of cost under aleatory uncertainty (i.e., randomness). This is because performance-critical systems must not only optimize expected performance, but also reduce its variance to avoid performance fluctuations during RL's course of operation.
To solve the RAOC problem, I will present three variants of RL algorithms and analyze their advantages and suitability for different situations and systems: 1) a one-shot RL algorithm based on a static convex program, 2) an iterative value-iteration algorithm that solves a linear program at each iteration, and 3) an iterative policy-iteration algorithm that solves a convex optimization at each iteration and guarantees the stability of the consecutive control policies.
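As a rough illustration of the mean-variance idea behind risk-averse optimal control, the sketch below compares two hypothetical policies under a mean-plus-variance objective, E[J] + λ·Var[J]. The toy dynamics, policy names, and λ value are illustrative assumptions, not the algorithms presented in the talk:

```python
import random

def rollout_cost(policy_noise, horizon=20, seed=None):
    # One episode's accumulated cost under aleatory (random) disturbances.
    # Hypothetical stand-in dynamics: unit cost per step plus Gaussian noise
    # whose scale depends on the policy.
    rng = random.Random(seed)
    cost = 0.0
    for _ in range(horizon):
        cost += 1.0 + policy_noise * rng.gauss(0.0, 1.0)
    return cost

def risk_averse_objective(costs, lam):
    # Mean-variance objective E[J] + lam * Var[J]: minimizing it trades
    # expected cost against cost fluctuation across episodes.
    n = len(costs)
    mean = sum(costs) / n
    var = sum((c - mean) ** 2 for c in costs) / n
    return mean + lam * var

# Two hypothetical policies with the same expected cost but different variance.
low_var_costs = [rollout_cost(policy_noise=0.1, seed=s) for s in range(500)]
high_var_costs = [rollout_cost(policy_noise=1.0, seed=s) for s in range(500)]

# With lam > 0, the risk-averse objective prefers the low-variance policy.
print(risk_averse_objective(low_var_costs, lam=0.5)
      < risk_averse_objective(high_var_costs, lam=0.5))
```

The RAOC algorithms in the talk solve this kind of trade-off exactly via convex or linear programs rather than by Monte Carlo comparison; the sketch only shows why variance enters the objective at all.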
Hamidreza Modares is an Assistant Professor in the Department of Mechanical Engineering at Michigan State University. Prior to joining Michigan State University, he was an Assistant Professor in the Department of Electrical Engineering at the Missouri University of Science and Technology. His current research interests include control and security of cyber-physical systems, machine learning in control, distributed control of multi-agent systems, and robotics. He is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems.


Visiting Faculty Fellows Mini-Courses

IRIM’s Visiting Faculty Fellows program supports extended visits (one to six months) to the Georgia Tech Atlanta campus by faculty members from other institutions or industry/government laboratories who are engaged in research activities focusing on robotics. IRIM provides Visiting Fellows with partial salary support, along with support for travel and living expenses. Visiting Fellows interact with IRIM faculty and students and teach a minicourse on their current research during their stay at Georgia Tech.

IRIM Fellows Emeritus


Nonlinear Control for Robots
Mark W. Spong - Professor of Systems Engineering, Professor of Electrical and Computer Engineering, and Excellence in Education Chair in the Erik Jonsson School of Engineering and Computer Science
The University of Texas at Dallas

Mark W. Spong received the Doctor of Science degree in systems science and mathematics in 1981 from Washington University in St. Louis. He has held faculty positions at Lehigh University, Cornell University, and the University of Illinois at Urbana-Champaign. Currently, he is a Professor of Systems Engineering, Professor of Electrical and Computer Engineering, and holder of the Excellence in Education Chair in the Erik Jonsson School of Engineering and Computer Science at the University of Texas at Dallas. He was Dean of the Jonsson School at UT Dallas from 2008 to 2017. During his tenure as dean, he added four engineering departments and nine new degree programs, and more than doubled the number of students and faculty.

Review Dynamics of Robot, Feedback Linearization, I/O Linearization and Zero Dynamics

Control of Underactuated Robots I

Control of Underactuated Robots II

Control of Underactuated Robots III, Control of Nonholonomic Systems I

Control of Nonholonomic Systems II

Control of Nonholonomic Systems III


Stochastic Methods for Robotics
Gregory S. Chirikjian - Professor; Department of Mechanical Engineering, Johns Hopkins University

Chirikjian’s research interests lie in robotics, automation, and manufacturing; biomolecular mechanics, conformational analysis, and nanoscience; mathematical crystallography; medical image registration, fiducial design, and reconstruction; and mathematical modeling and computational mathematics. He has developed numerical and analytical techniques for the efficient computation of motion in binary robot-arm design. He holds four patents for his work.

Lecture 1: Stochastic Methods for Robotics

Lecture 2: Stochastic Methods for Robotics

Lecture 3: Stochastic Methods for Robotics

Lecture 4: Stochastic Methods for Robotics

Lecture 5: Stochastic Methods for Robotics

Lecture 6: Stochastic Methods for Robotics

Lecture 7: Stochastic Methods for Robotics

Lecture 8: Stochastic Methods for Robotics

“Life as a Professor” Video Series

Affiliated Center Seminars

Two of IRIM's affiliated centers also host weekly seminars. The Machine Learning Center holds seminars on Wednesdays at 12:15 p.m., alternating weekly with IRIM's schedule. The Decision and Control Laboratory (DCL) typically holds seminars on Fridays at 11 a.m.