How to build and use a low-cost sensor glove

This post discusses how to develop a low-cost sensor glove with tactile feedback using flex sensors and small vibration motors. Matlab and Java code is linked.

Documentation

  • Weber, Paul; Rueckert, Elmar; Calandra, Roberto; Peters, Jan; Beckerle, Philipp:
    A Low-cost Sensor Glove with Vibrotactile Feedback and Multiple Finger Joint and Hand Motion Sensing for Human-Robot Interaction.
    In: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016. https://ai-lab.science/wp/ROMANS2016Weber.pdf
  • Rueckert, Elmar; Lioutikov, Rudolf; Calandra, Roberto; Schmidt, Marius; Beckerle, Philipp; Peters, Jan:
    Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations.
    In: ICRA 2015 Workshop on Tactile and Force Sensing for Autonomous Compliant Intelligent Robots, 2015. https://ai-lab.science/wp/ICRA2015Rueckertb.pdf

Hardware

  • Arduino Mega 2560 Board
  • Check which USB device is used (e.g., by running dmesg). On most of our machines it is /dev/ttyACM0
  • Enable read/write permissions if necessary, e.g., run sudo chmod o+rw /dev/ttyACM0
  • Serial-protocol-based communication: flex sensor readings are streamed, and vibration motor PWM values can be set between 0 and 255 (see the sketch after this list)
  • Firmware can be found here (follow the instructions in the README.txt to compile and upload the firmware)
  • Features frame rates of up to 350 Hz
  • Five flex sensors provide continuous readings within the range [0, 1024]
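
As a rough illustration of this protocol, the following Matlab sketch reads the streamed flex sensor values and sets one vibration motor's PWM value. The baud rate, the line format (five comma-separated integers per line) and the PWM command string are assumptions made for illustration; the actual protocol is defined in the firmware linked above.

    % Minimal sketch, not the official demo code. Baud rate, line format and
    % PWM command syntax are assumptions; check the firmware README.txt.
    s = serial('/dev/ttyACM0');      % USB device found via dmesg
    set(s, 'BaudRate', 115200);      % assumed baud rate
    fopen(s);

    for i = 1:100
        msg  = fgetl(s);                      % assumed format, e.g. '512,430,601,377,845'
        flex = sscanf(msg, '%d,', [1, 5]);    % five flex readings in [0, 1024]
        disp(flex);
    end

    fprintf(s, 'M1 128');   % hypothetical command: set motor 1 to PWM value 128

    fclose(s);
    delete(s);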

Simple Matlab Serial Interface – max. 100 Hz

  • Download the Matlab demo code from here
  • Tell Matlab which serial ports to use: copy the java.opts file to your Matlab bin folder, e.g., to /usr/local/MATLAB/R2012a/bin/glnxa64/ (example content below)
  • Run FastComTest.m
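
On Linux, the java.opts file typically contains a single line that registers the glove's serial device with Matlab's Java-based serial layer (adjust the path if your glove does not show up as /dev/ttyACM0):

    -Dgnu.io.rxtx.SerialPorts=/dev/ttyACM0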

Fast Mex-file based Matlab Interface – max. 350 Hz

  • Install libserial-dev
  • Download the code from here
  • Compile the mex function with: mex SensorGloveInterface.cpp -lserial
  • Run EventBasedSensorGloveDemo.m (the build and run steps are combined in the snippet below)
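
The mex interface talks to the serial port directly through libserial instead of Matlab's Java-based serial layer, which is why it can keep up with the firmware's 350 Hz frame rate rather than the roughly 100 Hz of the simple interface above. A typical build-and-run session (package name assumed for Debian/Ubuntu) looks like this:

    % In Matlab, from the folder containing SensorGloveInterface.cpp
    % (requires libserial-dev, e.g. sudo apt-get install libserial-dev)
    mex SensorGloveInterface.cpp -lserial

    % Run the demo, which calls the compiled SensorGloveInterface mex function
    EventBasedSensorGloveDemo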

How to build a low-cost USB controlled treadmill

This post discusses how to develop a low-cost treadmill with a closed-loop feedback controller for reinforcement learning experiments. Matlab and Java code is linked.

Hardware – Treadmill

  • Get a standard household treadmill (samples)
  • Note: the treadmill should use a DC motor; otherwise a different controller is needed!

Hardware – Controller

  • Pololu Jrk 21v3 USB Motor Controller with Feedback or stronger (max. 28 V, 3 A)
  • Comes with a Windows GUI to specify the control gains; the controller can also be commanded over its USB serial port (see the sketch after this list)
  • Sharp distance sensor GP2Y0A21, 10 cm – 80 cm or similar
  • USB cable
  • Cable for the distance sensor
  • Power cables for the treadmill
  • User Guide: https://www.pololu.com/docs/pdf/0J38/jrk_motor_controller.pdf
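
Besides the Windows GUI used to set the feedback gains, the Jrk can be commanded at runtime over its USB virtual serial port, e.g., to change the belt speed target from Matlab. The sketch below sends a compact-protocol Set Target command; the device path is an assumption, and the exact protocol and the scaling of the target value are described in the user guide linked above.

    % Minimal sketch, assuming the Jrk's command port appears as /dev/ttyACM0
    jrk = serial('/dev/ttyACM0');
    fopen(jrk);

    target = 2648;   % target in [0, 4095]; its physical meaning depends on the Jrk configuration
    % Compact-protocol "Set Target": 0xC0 + lower 5 bits, then the upper 7 bits
    fwrite(jrk, [bitor(192, bitand(target, 31)), bitshift(target, -5)], 'uint8');

    fclose(jrk);
    delete(jrk);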

Matlab Interface (max. 50 Hz)

  • Get the Java library build or the developer version, both from September 2015, created by E. Rueckert
  • Run the install script installFTSensor.m, which adds the jar to your classpath.txt (a session-only alternative is sketched below)
  • Check the testFTSensor.m script, which builds on the wrapper class MatlabFTCL5040Sensor (you need to add this file to your path)
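
If you prefer not to edit classpath.txt permanently, adding the jar to the dynamic class path for the current session may also work; the jar file name below is a placeholder for the library downloaded above.

    % Session-only alternative to installFTSensor.m (jar name is a placeholder)
    javaaddpath('/path/to/treadmill-interface.jar');

    % Run the demo, which builds on the MatlabFTCL5040Sensor wrapper class
    testFTSensor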

AI and Learning in Robotics

Robotics AI requires autonomous learning capabilities

The challenges in understanding human motor control, in brain-machine interfaces and in anthropomorphic robotics are currently converging. Modern anthropomorphic robots with their compliant actuators and various types of sensors (e.g., depth and vision cameras, tactile fingertips, full-body skin, proprioception) have reached the perceptuomotor complexity faced in human motor control and learning. While outstanding robotic and prosthetic devices exist, current brain-machine interfaces (BMIs) and robot learning methods have not yet reached the autonomy and performance needed to enter daily life.

The group's vision is that four major challenges have to be addressed to develop truly autonomous learning systems: (1) the decomposability of complex motor skills into basic primitives organized in complex architectures, (2) the ability to learn from partially observable, noisy observations of inhomogeneous high-dimensional sensor data, (3) the learning of abstract features, generalizable models and transferable policies from human demonstrations, sparse rewards and through active learning, and (4) accurate predictions of self-motions, object dynamics and human movements for assisting and cooperating autonomous systems.

Neural and Probabilistic Robotics

Neural models have remarkable learning and modeling capabilities, as demonstrated in complex robot learning tasks (e.g., Martin Riedmiller’s or Sergey Levine’s work). While these results are promising, we lack a theoretical understanding of the learning capabilities of such networks, and it is unclear how learned features and models can be reused or exploited in other tasks.

The ai-lab investigates deep neural network implementations that are theoretically grounded in the framework of probabilistic inference and develops deep transfer learning strategies for stochastic neural networks. We evaluate our models in challenging robotics applications where the networks have to scale to high-dimensional control signals and need to generate reactive feedback commands in real time.

Our developments will enable complex online adaptation and skill learning behavior in autonomous systems and will help to gain a better understanding of the meaning and function of the learned features in large neural networks with millions of parameters.