How to build a professional low-cost lightboard for teaching

Giving virtual lectures can be exciting. Inspired by numerous blog posts of colleagues all over the world (e.g., [1], [2], [3]), I decided to turn an ordinary glass desk into a lightboard. The total cost was less than 100 EUR.
Below you can see some snapshots of the individual steps.

Details on the lightboard construction

The lightboard construction is based on:

  • A glass pane, 8mm thick. Hint: do not use acrylic glass or glass panes thinner than 8mm. I got a used glass/metal desk for 20 EUR.
  • LED strips from YUNBO, 4mm in width, e.g., from [4] for 13 EUR. Hint: larger LED strips, which you can typically get at DIY markets, have a width of 10mm. These strips do not fit into the transparent U profile.
  • Glass clamps for 8mm glass, e.g., from onpira-sales [5] for 12 EUR.
  • Transparent U profiles from a DIY store, e.g., no. 4005011040225 from HORNBACH [6] for 14 EUR.
  • 4 castor wheels with brakes, e.g., from HORNBACH no. 4002350510587 for 21 EUR.

Details on the markers, the background and the lighting

Some remarks are given below on the markers, the background and the lighting.

  • I got well-suited fluorescent markers, e.g., from [6] for 12 EUR. Hint: compared to liquid chalk, these markers do not produce any noise during writing and are far more visible.
  • The background blind is of major importance. I used an old white roller blind from [7] and turned it into a black blind using 0.5l of black paint. Hint: in the future, I will use a larger blind with a width of 3m. A larger background blind is required to build larger lightboards (mine is 140x70cm). Additionally, the distance between the glass pane and the blind could be increased (in my current setting, the distance is 55cm).
  • Lighting is important to illuminate the presenter. I currently use two small LED spots. However, in the future I will use professional LED studio panels with blinds, e.g., [8]. Hint: the blinds are important to prevent illuminating the black background.
  • The LED strips run at 12V. However, my old glass pane had many scratches, which became fully visible at maximum power. To avoid these distracting effects, I found that 8V worked best for my old glass pane.

Details on the software and the microphone

At the University of Luebeck, we use Cisco's WEBEX tool for our virtual lectures. The tool is suboptimal for interactive lightboard lectures; however, with some additional tools, I converged on a working solution.

  • Camera streaming app, e.g., EPOCCAM for iPhones or IRIUN for Android phones. Hint: the smartphone is mounted on a tripod using a smartphone mount.
  • On the client side, driver software is required. Details can be found when running the smartphone app.
  • On my Mac, I am running the app Quick Camera to get a real-time view of the recording. The viewer is shown on a screen mounted to the ceiling. Hint: the screen has to be placed such that no reflections are shown in the recordings.
  • In the WEBEX application, I select the IRIUN (virtual) webcam as the source and share the screen with the Quick Camera viewer app.
  • To ensure an undamped audio signal, I am using a lavalier microphone like the one in [9].
  • For offline recordings, Apple's QuickTime does a decent job; the video and audio sources can be selected as required. Hint: I also tested VLC; however, its lag of 2-3 seconds was perceived as suboptimal by the students (a workaround with proper command-line arguments was not tested).

An example lecture

Safe Autonomous Driving with Probabilistic Neural Networks (Sicheres Autonomes Fahren mit Probabilistischen Neuronalen Netzen)

We humans are able to perceive complex processes under adverse conditions, e.g., with limited visibility or under disturbances, to predict them, and to make coherent decisions within a few milliseconds. With the increasing degree of automation, the demands on artificial systems rise as well. Ever larger and more complex amounts of data have to be processed to make autonomous decisions. With common AI approaches, due to converging miniaturization, we reach limits that, e.g., in the field of autonomous driving, are not sufficient to develop a safe autonomous system.

The goal of this research is to implement probabilistic prediction models in massively parallelizable neural networks and to use them to make complex decisions based on learned internal prediction models. The neural models process high-dimensional data from modern and innovative tactile and visual sensors. We test the neural prediction and decision models in humanoid robot applications in dynamic environments.

Our approach builds on the theory of probabilistic information processing in neural networks and thus differs fundamentally from common deep neural network methods. The underlying theory provides far-reaching model insights and, in addition to predictions of mean values, also allows predictions of uncertainties and correlations. These additional predictions are crucial for reliable, explainable and robust artificial systems and address one of the biggest open problems in artificial intelligence research.

This project was honored with the German AI Newcomer Award (Deutscher KI-Nachwuchspreis) of the Bilanz Deutschland Wirtschaftsmagazin GmbH and demonstrates the importance of fundamental research in artificial intelligence.

H2020 Goal-Robots 11/2016-10/2020

This project aims to develop a new paradigm to build open-ended learning robots called 'Goal-based Open-ended Autonomous Learning' (GOAL). GOAL rests upon two key insights. First, to exhibit an autonomous open-ended learning process, robots should be able to self-generate goals, and hence tasks to practice. Second, new learning algorithms can leverage self-generated goals to dramatically accelerate skill learning. The new paradigm will allow robots to acquire a large repertoire of flexible skills in conditions unforeseeable at design time with little human intervention, and then to exploit these skills to efficiently solve new user-defined tasks with no/little additional learning.

Link: http://www.goal-robots.eu

How to build and use a low-cost sensor glove

This post discusses how to develop a low-cost sensor glove with tactile feedback using flex sensors and small vibration motors. MATLAB and Java code is linked.

Documentation

  • Weber, Paul; Rueckert, Elmar; Calandra, Roberto; Peters, Jan; Beckerle, Philipp:
    A Low-cost Sensor Glove with Vibrotactile Feedback and Multiple Finger Joint and Hand Motion Sensing for Human-Robot Interaction. Inproceedings.
    Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016. https://ai-lab.science/wp/ROMANS2016Weber.pdf
  • Rueckert, Elmar; Lioutikov, Rudolf; Calandra, Roberto; Schmidt, Marius; Beckerle, Philipp; Peters, Jan:
    Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations. Inproceedings.
    ICRA 2015 Workshop on Tactile and Force Sensing for Autonomous Compliant Intelligent Robots, 2015. https://ai-lab.science/wp/ICRA2015Rueckertb.pdf

Hardware

  • Arduino Mega 2560 Board
  • Check which USB device is used (e.g., by running dmesg). On most of our machines it is /dev/ttyACM0
  • Enable read/write permissions if necessary, e.g., run sudo chmod o+rw /dev/ttyACM0
  • Serial-protocol-based communication: flex sensor readings are streamed, and vibration motor PWM values can be set between 0 and 255 (see the sketch after this list)
  • Firmware can be found here (follow the instructions in the README.txt to compile and upload the firmware)
  • Features frame rates of up to 350Hz
  • Five flex sensors provide continuous readings within the range [0, 1024]
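
For illustration, a minimal MATLAB sketch of this serial exchange is given below. The message format, baud rate and command bytes are assumptions for illustration only; the firmware README is authoritative.

    % Minimal sketch of the assumed protocol: the firmware streams one
    % ASCII line per frame (e.g., "512,480,790,123,1001") and accepts a
    % two-byte command [motor index, PWM value]; the baud rate is assumed.
    s = serial('/dev/ttyACM0', 'BaudRate', 115200);
    fopen(s);
    for i = 1:100
        frame = fgetl(s);                  % read one sensor frame
        flex = sscanf(frame, '%d,');       % five values in [0, 1024]
        fprintf('flex: %s\n', mat2str(flex'));
    end
    fwrite(s, uint8([1, 128]));            % e.g., motor 1 at half power
    fclose(s);
    delete(s);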

Simple Matlab Serial Interface – max 100Hz

  • Download the Matlab demo code from here
  • Tell Matlab which serial ports to use: copy the java.opts file to your Matlab bin folder, e.g., to /usr/local/MATLAB/R2012a/bin/glnxa64/ (see the example below)
  • Run FastComTest.m
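
To my understanding, the java.opts file registers the device with the RXTX serial layer that MATLAB's serial interface uses on Linux; its content is typically a single JVM option like the line below (an assumption; the file shipped with the demo code is authoritative). Adjust the device name if dmesg reported a different port.

    -Dgnu.io.rxtx.SerialPorts=/dev/ttyACM0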

Fast Mex-file based Matlab Interface – max 350Hz

  • Install libserial-dev
  • Download the code from here
  • Compile the mex function with: mex SensorGloveInterface.cpp -lserial
  • Run EventBasedSensorGloveDemo.m (a usage sketch is given below)
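
A hypothetical usage sketch of the compiled mex interface is shown below; the calling convention is invented for illustration, and EventBasedSensorGloveDemo.m documents the actual one.

    % Hypothetical calling convention (for illustration only; see
    % EventBasedSensorGloveDemo.m for the actual interface).
    SensorGloveInterface('open', '/dev/ttyACM0');    % connect to the glove
    for i = 1:1000                                   % poll at up to 350Hz
        flex = SensorGloveInterface('read');         % assumed 1x5 vector
    end
    SensorGloveInterface('setMotors', uint8([0 0 128 0 0]));  % PWM 0..255
    SensorGloveInterface('close');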

How to build a low-cost USB controlled treadmill

This post discusses how to develop a low-cost treadmill with a closed-loop feedback controller for reinforcement learning experiments. MATLAB and Java code is linked.

Hardware – Treadmill

  • Get a standard household treadmill.
  • Note: it should have a DC motor; otherwise, a different controller is needed!

Hardware – Controller

  • Pololu Jrk 21v3 USB Motor Controller with Feedback or stronger (max. 28V, 3A); a minimal command sketch is given after this list
  • Comes with a Windows GUI to specify the control gains
  • Sharp distance sensor GP2Y0A21, 10 cm – 80 cm or similar
  • USB cable
  • Cable for the distance sensor
  • Power cables for the treadmill
  • User Guide: https://www.pololu.com/docs/pdf/0J38/jrk_motor_controller.pdf
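
For illustration, the sketch below sends the Jrk's two-byte 'set target' command over its USB virtual serial port. The port name is an assumption, and the command encoding should be verified against the user guide linked above.

    % Send the compact-protocol "set target" command (target in [0, 4095]);
    % 0xC0 carries the low 5 bits, the second byte the high 7 bits.
    jrk = serial('/dev/ttyACM0', 'BaudRate', 9600);   % port name assumed
    fopen(jrk);
    target = 2048;                                    % mid-range target
    fwrite(jrk, uint8([192 + bitand(target, 31), bitshift(target, -5)]));
    fclose(jrk);
    delete(jrk);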

Matlab Interface (max. 50 Hz)

  • Get the Java library build or the developer version, both from Sept. 2015, created by E. Rueckert.
  • Run the install script installFTSensor.m (which adds the jar to your classpath.txt)
  • Check the testFTSensor.m script, which builds on the wrapper class MatlabFTCL5040Sensor (you need to add this file to your path); a usage sketch is given below
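
To illustrate the closed-loop idea, the conceptual MATLAB sketch below adjusts the belt speed so that the runner stays at a fixed distance from the Sharp sensor. All method names on the wrapper object are hypothetical placeholders; testFTSensor.m documents the actual API.

    % Conceptual 50 Hz control loop (all method names are hypothetical
    % placeholders; see testFTSensor.m for the actual wrapper API).
    treadmill = MatlabFTCL5040Sensor();   % wrapper around the Java library
    target = 0.45;                        % desired runner distance in m
    base = 1.0;                           % nominal belt speed in m/s
    Kp = 2.0;                             % proportional gain
    for t = 1:5000
        d = treadmill.getDistance();      % Sharp GP2Y0A21 reading in m
        u = base + Kp*(target - d);       % slow down if the runner drifts back
        treadmill.setSpeed(max(u, 0));    % command a non-negative belt speed
        pause(0.02);                      % roughly 50 Hz
    end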

AI and Learning in Robotics

Robotics AI requires autonomous learning capabilities

The challenges in understanding human motor control, in brain-machine interfaces and in anthropomorphic robotics are currently converging. Modern anthropomorphic robots with their compliant actuators and various types of sensors (e.g., depth and vision cameras, tactile fingertips, full-body skin, proprioception) have reached the perceptuomotor complexity faced in human motor control and learning. While outstanding robotic and prosthetic devices exist, current brain-machine interfaces (BMIs) and robot learning methods have not yet reached the autonomy and performance needed to enter daily life.

The group's vision is that four major challenges have to be addressed to develop truly autonomous learning systems. These are: (1) the decomposability of complex motor skills into basic primitives organized in complex architectures, (2) the ability to learn from partially observable, noisy observations of inhomogeneous high-dimensional sensor data, (3) the learning of abstract features, generalizable models and transferable policies from human demonstrations, sparse rewards and through active learning, and (4) accurate predictions of self-motions, object dynamics and human movements for assisting and cooperating autonomous systems.

Neural and Probabilistic Robotics

Neural models have incredible learning and modeling capabilities, as has been demonstrated in complex robot learning tasks (e.g., in Martin Riedmiller's or Sergey Levine's work). While these results are promising, we lack a theoretical understanding of the learning capabilities of such networks, and it is unclear how learned features and models can be reused or exploited in other tasks.

The ai-lab investigates deep neural network implementations that are theoretically grounded in the framework of probabilistic inference and develops deep transfer learning strategies for stochastic neural networks. We evaluate our models in challenging robotics applications where the networks have to scale to high-dimensional control signals and need to generate reactive feedback commands in real-time.

Our developments will enable complex online adaptation and skill learning behavior in autonomous systems and will help us gain a better understanding of the meaning and function of the learned features in large neural networks with millions of parameters.