Important Notice: This course is organized through online lectures and exercises using the Webex tool. Organizational details will be discussed in our
FIRST MEETING: 07.04.2020, 10:15-11:45, Webex, Slides
Please follow the instructions of the ITSC here to set up your computer. Click on the links to create a Google Calendar event, join the Webex meeting, or access the online slides.
Dates & Times of the Online Webex Meetings
- Lectures are organized on TUESDAYS, 10:15-11:45, Webex Link
- Exercises are organized on WEDNESDAYS, 10:15-11:45, Webex Link
Prof. Dr. Elmar Rueckert is teaching the course on Humanoid Robotics (RO5300) together with M.Sc. Nils Rottmann, who supervises the exercises. In this course, he discusses the key components of humanoid robots, which are among the most complex autonomous systems. These topics are
- Kinematics, Dynamics & Simulation
- Representations of Skills & Imitation Learning
- Feedback Control, Priorities & Torque Control
- Planning & Cognitive Reasoning
- Reinforcement Learning & Policy Search
Questions can be asked on our course-related Q&A page here.
This course provides a unique overview of central topics in robotics. A particular focus is put on the dependencies and interactions among the components in the control loop illustrated in the image above. These interactions are discussed in the context of state-of-the-art methods, including dynamical systems movement primitives, gradient-based policy search methods, and probabilistic inference for planning algorithms.
In sum, the lecture provides a structured and well-motivated overview of modern techniques and tools, enabling the students to define reward functions, implement robot controllers and interaction software, and apply and extend state-of-the-art reinforcement learning and planning approaches.
No special knowledge is required beforehand. All concepts and theories will be developed during the lectures or the tutorials.
The students will also experiment with state-of-the-art machine learning methods and robotic simulation tools in the accompanying exercises. Hands-on tutorials on programming with MATLAB and the simulation tool V-REP complement the course content.
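As a taste of the first exercise topics (inverse kinematics and gradient descent), the basic idea can be sketched in a few lines. Note that this sketch uses Python rather than the MATLAB used in the tutorials, and that the two-link planar arm, link lengths, step size, and target point are illustrative assumptions, not course material:

```python
import numpy as np

def forward_kinematics(q, l1=1.0, l2=1.0):
    """End-effector position of a planar two-link arm with joint angles q = [q1, q2]."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytical Jacobian d(x, y)/d(q1, q2) of the forward kinematics."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_gradient_descent(target, q0, alpha=0.1, iters=500):
    """Minimize 0.5 * ||f(q) - target||^2; the gradient is J(q)^T (f(q) - target)."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = forward_kinematics(q) - target
        q -= alpha * jacobian(q).T @ err
    return q

q_sol = ik_gradient_descent(target=np.array([1.2, 0.8]), q0=[0.3, 0.3])
print(forward_kinematics(q_sol))  # close to the target [1.2, 0.8]
```

This Jacobian-transpose update is the simplest gradient-based inverse kinematics scheme; a small step size trades convergence speed for stability near singular configurations.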
Course dates & materials (tentative slides, last update April 1st, 2020)
| Dates | Type | Chapter | Topics | Links to Slides, VodCasts, Code, etc. |
|---|---|---|---|---|
| 07.04 | VO | – | An Introduction to Humanoid Robotics | Slides, VodCast |
| 08.04 | UE | 0 Basics | Matrices, vectors, inverse kinematics, gradient descent | Exercise Sheet, Solution Sheet, Video |
| 14.04 | VO | I Kinematics, Dynamics & Simulation | Classical forward and inverse kinematics | Slides, VodCast (sorry, the microphone was only activated at 4:13) |
| 15.04 | UE | 0 Basics | Mechanical & dynamical systems (4 BP sheet) | Exercise Sheet, Video |
| 21.04 | VO | I Kinematics, Dynamics & Simulation | Forward & inverse kinematics for control | Slides, VodCast |
| 22.04 | UE | 0 Basics | Mechanical & dynamical systems | Solution Sheet, Video |
| 28.04 | VO | II Representations of Skills & Imitation Learning | Dynamical systems movement primitives | Slides, VodCast |
| 29.04 | UE | 0 Basics | Differential equations & numerical solutions | Exercise Sheet, Solution Sheet, Video |
| 05.05 | VO | II Representations of Skills & Imitation Learning | Muscle synergies and probabilistic movement primitives | Slides, VodCast |
| 06.05 | UE | 0 Basics | V-REP simulation env. (for the BPs, send screenshots combined in a PDF) | Exercise Sheet, Solution Sheet, Video, AddOn |
| 12.05 | VO | III Feedback Control, Priorities & Torque Control | Classical PID control & rigid body dynamics | Slides, VodCast |
| 13.05 | UE | Assignment I | Inverse kinematics | Exercise Sheet, V-REP, Solution Sheet, Video |
| 20.05 | UE | 0 Basics | Statistics, Bayes, Gaussian distributions | Exercise Sheet, Solution Sheet, Video |
| 26.05 | UE | Assignment II | Dynamical systems movement primitives | Exercise Sheet, Solution Sheet, Data, Video |
| 02.06 | VO | IV Reinforcement Learning | Markov decision processes, value iteration, Q-learning, deep Q-learning | Slides, VodCast |
| 03.06 | UE | I to II | Recap and Q&As | – |
| 09.06 | VO | V Planning & Cognitive Reasoning | Sampling-based planning, RRT | Slides |
| 10.06 | UE | Assignment III | PID and LQR control | Exercise Sheet, Solution Sheet, Video |
| 16.06 | VO | Summary & Outlook of Advanced Topics | Bonus point exam (max. 10 BP) | Slides, VodCast |
| 24.06 | UE | Assignment IV | Planning, RRT | Exercise Sheet, Solution Sheet, Video |
| 08.07 | UE | Assignment IV | Presentations | – |
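The feedback-control topics of lecture III and Assignment III can be previewed with a minimal sketch of a textbook PID controller. This sketch is in Python rather than the MATLAB used in the tutorials, and the plant (a unit point mass), the gains, and the time step are illustrative assumptions, not the assignment setup:

```python
class PID:
    """Textbook PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None  # no derivative term on the very first step

    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Regulate a unit point mass (x'' = u) to the setpoint x = 1 via Euler integration.
dt, x, v = 0.01, 0.0, 0.0
ctrl = PID(kp=20.0, ki=10.0, kd=8.0, dt=dt)
for _ in range(2000):  # simulate 20 seconds
    u = ctrl.step(1.0 - x)
    v += u * dt
    x += v * dt
print(round(x, 3))  # settles near the setpoint 1.0
```

With these gains the closed loop is stable (for the double integrator, a Routh check requires Kd*Kp > Ki) and the integral term removes the steady-state error.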
The course grades will be computed solely from the submitted student reports of the four assignments, each worth 25 points. The reports and the code have to be submitted (one report per team) to firstname.lastname@example.org no later than two weeks after the corresponding assignment presentation event. The list of dates and deadlines is given below. Please use LaTeX for the assignment reports. A LaTeX template can be found at https://drive.google.com/file/d/1sFo1qLH5Z9dX6Kdsg950UBCXCPiWOHng/view?usp=sharing
| Presentation Date | Topics | Points | Submission Deadline |
|---|---|---|---|
| 13.05.2020, 10:15-11:45 | Assignment I Presentation | 25 | 27.05.2020, 10:00 |
| 27.05.2020, 10:15-11:45 | Assignment II Presentation | 25 | 10.06.2020, 10:00 |
| 10.06.2020, 10:15-11:45 | Assignment III Presentation | 25 | 24.06.2020, 10:00 |
| 24.06.2020, 10:15-11:45 | Assignment IV Presentation | 25 | 08.07.2020, 10:00 |
You can receive up to 30 Bonus Points (BP) during the course: 10 BP during the lectures and 20 BP for submitting optional exercise solutions. To get BP during the lectures, you have to successfully participate in the quiz sessions at the beginning of each lecture. To get BP for the optional exercise solutions, write down your solution clearly and legibly, take a photo, and send it to email@example.com with the subject line Exercise_##_LastName, where ## is the number of the exercise. You have to send your solution before the start of the exercise session in which the exercise sheet is covered.
Points to Grades
| Points | Grade | Note |
|---|---|---|
| 95 | 1.0 | Best possible grade |
| 0 | 5.0 | Worst possible grade |
Materials for the exercise
For simulating robot manipulation tasks we will use the simulator V-REP. A free education version for research and teaching can be found here. To experiment with state-of-the-art robot control and learning methods, MathWorks' MATLAB will be used. If you have not installed it yet, please follow the instructions of our IT-Service Center.
Matlab Files shown during the Tutorial can be found here.