Open Educational Resources


Open Educational Resources (OER) are freely accessible, openly licensed documents and media that are useful for teaching, learning, assessment, and research purposes.



Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively by routinely executing motions which involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines.

This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, reinforcement learning/approximate optimal control, and the influence of mechanical design on control. Discussions include examples from biology and applications to legged locomotion, compliant manipulation, underwater robots, and flying machines.
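One of the topics listed above, energy-shaping control, can be illustrated on the simple pendulum (the subject of Lecture 2). The sketch below is not taken from the course materials; the parameter values and the gain `k` are illustrative assumptions. The controller pumps or removes energy until the total energy matches that of the upright equilibrium:

```python
import math

m, l, g = 1.0, 1.0, 9.81   # mass, length, gravity (illustrative values)
k = 0.5                    # energy-shaping gain (assumed)
E_desired = m * g * l      # energy at the upright equilibrium (theta = pi)

def energy(theta, thetadot):
    # Total mechanical energy, with theta = 0 hanging straight down.
    return 0.5 * m * l**2 * thetadot**2 - m * g * l * math.cos(theta)

def swing_up(theta=0.1, thetadot=0.0, dt=1e-3, steps=20000):
    # Semi-implicit Euler integration of
    #   m*l^2*thetaddot = -m*g*l*sin(theta) + u,
    # with the energy-shaping torque u = -k * thetadot * (E - E_desired),
    # which makes dE/dt = -k * thetadot^2 * (E - E_desired) <= 0 toward E_desired.
    for _ in range(steps):
        u = -k * thetadot * (energy(theta, thetadot) - E_desired)
        thetaddot = (-m * g * l * math.sin(theta) + u) / (m * l**2)
        thetadot += thetaddot * dt
        theta += thetadot * dt
    return theta, thetadot

theta, thetadot = swing_up()
```

Note that regulating energy alone drives the pendulum onto the homoclinic orbit through the upright fixed point; stabilizing the upright posture itself requires an additional balancing controller (e.g., an LQR), another topic covered in the lectures.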

  • 23 lectures (over 28 hours)

  • Assignments and exams

Instructor


Russ Tedrake is an Associate Professor in the Department of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Lab. He received his B.S.E. in Computer Engineering from the University of Michigan, Ann Arbor, in 1999, and his Ph.D. in Electrical Engineering and Computer Science from MIT in 2004, working with Sebastian Seung. After graduation, he spent a year in the MIT Brain and Cognitive Sciences Department as a Postdoctoral Associate. During his education, he spent time at Microsoft, Microsoft Research, and the Santa Fe Institute.

Underactuated Robotics by Prof. Russell Tedrake is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Course content

  • Lecture 1: Introduction

  • Lecture 2: The Simple Pendulum

  • Lecture 3: Optimal Control of the Double Integrator

  • Lecture 4: Optimal Control of the Double Integrator (cont.)

  • Lecture 5: Numerical Optimal Control (Dynamic Programming)

  • Lecture 6: Acrobot and Cart-pole

  • Lecture 7: Swing-up Control of Acrobot and Cart-pole Systems

  • Lecture 8: Dynamic Programming (DP) and Policy Search

  • Lecture 9: Trajectory Optimization

  • Lecture 10: Trajectory Stabilization and Iterative Linear Quadratic Regulator

  • Lecture 11: Walking

  • Lecture 12: Walking (cont.)

  • Lecture 13: Running

  • Lecture 14: Feasible Motion Planning

  • Lecture 15: Global Policies from Local Policies

  • Lecture 16: Introducing Stochastic Optimal Control

  • Lecture 17: Stochastic Gradient Descent

  • Lecture 18: Stochastic Gradient Descent 2

  • Lecture 19: Temporal Difference Learning

  • Lecture 20: Temporal Difference Learning with Function Approximation

  • Lecture 21: Policy Improvement

  • Lecture 22: Actor-critic Methods

  • Lecture 23: Case Studies in Computational Underactuated Control

  • Assignments

  • Exams

Interested? Enroll in this course right now.
