From Nico@CU

This page collects the available information about the NICO robot, including various technical details.


The robot has 22 degrees of freedom, including two 6-DOF hands (the numbers on the left are the motor IDs in the low-level communication protocol):

1 - right shoulder forward-backward
2 - left shoulder forward-backward
3 - right shoulder lift
4 - left shoulder lift
5 - right elbow
6 - left elbow
19 - head rotate
20 - head lift
21 - right shoulder left-right
22 - left shoulder left-right

30 - controller in the right hand with palm sensor 

31 - right wrist rotate
33 - right wrist left-right

34 - right thumb lift
35 - right thumb close
36 - right index finger
37 - right other fingers

40 - controller in the left hand with palm sensor 

41 - left wrist rotate
43 - left wrist left-right

44 - left thumb lift
45 - left thumb close
46 - left index finger
47 - left other fingers
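The motor table above can be turned into a quick bus-health check. The sketch below pings each joint ID over the RS485 Dynamixel bus (protocol 1, /dev/ttyUSB0, as described in the low-level section further down); the 1 Mbps baud rate is an assumption and may need adjusting to the actual bus configuration.

```python
# Motor IDs and joint names as listed above (IDs 30/40 are the palm-sensor
# controllers, not joints, so they are left out).
NICO_JOINTS = {
    1: "right shoulder forward-backward", 2: "left shoulder forward-backward",
    3: "right shoulder lift", 4: "left shoulder lift",
    5: "right elbow", 6: "left elbow",
    19: "head rotate", 20: "head lift",
    21: "right shoulder left-right", 22: "left shoulder left-right",
    31: "right wrist rotate", 33: "right wrist left-right",
    34: "right thumb lift", 35: "right thumb close",
    36: "right index finger", 37: "right other fingers",
    41: "left wrist rotate", 43: "left wrist left-right",
    44: "left thumb lift", 45: "left thumb close",
    46: "left index finger", 47: "left other fingers",
}

def scan_bus(device="/dev/ttyUSB0", baud=1000000):
    """Ping every known joint ID on the bus; return the IDs that answered."""
    from dynamixel_sdk import PortHandler, PacketHandler  # pip install dynamixel-sdk
    port = PortHandler(device)
    packet = PacketHandler(1.0)  # all motors use protocol 1 (see below)
    if not (port.openPort() and port.setBaudRate(baud)):
        raise RuntimeError("cannot open " + device)
    alive = []
    for dxl_id in sorted(NICO_JOINTS):
        _, result, _ = packet.ping(port, dxl_id)
        if result == 0:  # COMM_SUCCESS
            alive.append(dxl_id)
    port.closePort()
    return alive

if __name__ == "__main__":
    for dxl_id in scan_bus():
        print(dxl_id, NICO_JOINTS[dxl_id])
```

Any listed ID that does not appear in the output points at a wiring or configuration problem for that motor.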

Software, links, documentation

ROS support and other related info

  • from Connor Gäde at the University of Hamburg:

The current version was tested with ROS melodic and noetic. We have never tested ROS 2; our setup checks for specific ROS versions during installation, so it won't build the ROS packages under ROS 2. ROS noetic natively uses python 3. ROS melodic officially only supports python 2.7, but our python 3 setup does make it use the python 3 executable of the virtual environment. This works for the examples provided, but might cause issues if you want to install additional ROS packages: our setup already has to compile cv_bridge for python 3, and other packages might have similar issues. The emotionrecognition module requires an old tensorflow version, so it will only run on python 3.7.x and earlier, since from python 3.8 onwards only tensorflow 2 is available. We are currently in the process of eliminating that dependency. The moveit integration hasn't been updated in a while and won't work with the noetic version, since moveit broke backwards compatibility; if I remember correctly, there were also issues with moveit under python 3 on melodic. Other than that, everything should be python 3 compatible.
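The python 3.7 constraint described above can be guarded explicitly at startup. A minimal sketch (the function name is ours, not part of the NICO software):

```python
import sys

def tf1_compatible(version_info=sys.version_info):
    """The emotionrecognition module needs an old TensorFlow 1.x release,
    and only TensorFlow 2 is available for Python 3.8+, so require <= 3.7."""
    return tuple(version_info[:2]) <= (3, 7)

if not tf1_compatible():
    print("emotionrecognition unavailable: needs python 3.7.x or earlier")
```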

Low-level control of motors and other hardware

  • connections from the PC to the robot
    • USB3 - left camera
    • USB3 - right camera
    • USB - dynamixel protocol RS485 bus control of all motors and palm sensors typically at /dev/ttyUSB0
    • USB - control of LED matrix displays in the head, typically at /dev/ttyACM0 - these respond to text commands at 115200 bps
    • USB - audio in (microphones)
    • USB - audio out (speaker)
    • optional: USB for programming controllers in the hands
  • hands_demo.c - simple demo made by modifying an example of DynamixelSDK
  • low-level motor configuration know-how from Pedro: configuring_motors.txt (all of them are switched to protocol 1 as of Oct 12th 2021)
  • More about motor limits and values
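Since the LED matrix displays in the head respond to plain text commands over the serial link listed above (/dev/ttyACM0, 115200 bps), a minimal pyserial sketch can exercise them. The command string itself is hypothetical, as this page does not document the display's command syntax; treat "demo" as a placeholder.

```python
def encode_command(text):
    """Terminate a text command with a newline and encode it for the port."""
    return (text.strip() + "\n").encode("ascii")

def send_command(text, device="/dev/ttyACM0"):
    """Send one text command to the LED matrix controller, return its reply."""
    import serial  # pip install pyserial
    with serial.Serial(device, 115200, timeout=1) as port:
        port.write(encode_command(text))
        return port.readline()  # whatever the display answers, if anything

if __name__ == "__main__":
    print(send_command("demo"))  # "demo" is a placeholder, not a real command
```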


NICO’s head features two parallel See3CAM_CU135 cameras with 4K resolution (4,096 × 2,160 pixels). The cameras have an opening angle of 202°. Via the API, the cameras can be configured to transmit only part of the image and thus constrain the field of view to a human-like opening angle of 70°.
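How many pixels the 70° crop of the 202° image corresponds to depends on the lens projection, which this page does not state. Assuming an equidistant fisheye model (image distance proportional to viewing angle), a back-of-the-envelope estimate is:

```python
def crop_pixels(full_pixels, full_angle_deg, crop_angle_deg):
    """Width of a centered crop, assuming an equidistant (r = f * theta)
    fisheye projection, i.e. image distance scales linearly with angle."""
    return round(full_pixels * crop_angle_deg / full_angle_deg)

# 4096-pixel-wide image covering 202 degrees, cropped to a 70-degree view:
print(crop_pixels(4096, 202, 70))  # -> 1419 under this assumption
```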

To quickly test the view of the cameras (they create the video devices /dev/video0 for the left camera and /dev/video2 for the right camera), you can use:

 fswebcam --device /dev/video0 imageL.jpg
 fswebcam --device /dev/video2 imageR.jpg

or, to take pictures and record videos in an interactive application:


News from April 2022

We have experimented with cameras without IR filter.

When used in daylight, the images are missing the green channel; it appears grey.

Update (May 2022): This has been resolved by using lenses with an IR filter. See the comparison without and with the IR filter: Without IR filter.jpg With IR filter.jpg With IR filter other.jpg

The See3CAM_CU135 has two output modes, MJPG and UYVY. MJPG seems to work only right after replugging the cameras into USB; the stream of JPEGs is probably interrupted mid-frame and cannot be resynchronized on the next run.
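When debugging raw UYVY frames grabbed via video4linux, it helps to remember the byte layout: each 4-byte group U, Y0, V, Y1 encodes two pixels, with luma for every pixel and chroma shared by each pair. A tiny helper (ours, for illustration) extracts the luma plane for a quick grayscale sanity check:

```python
def uyvy_luma(frame_bytes):
    """Return the Y (luma) bytes of a raw UYVY frame: every odd byte."""
    return bytes(frame_bytes[1::2])

# With OpenCV one would instead convert the whole frame at once, e.g.
# cv2.cvtColor(raw.reshape(height, width, 2), cv2.COLOR_YUV2BGR_UYVY).
```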

For low-level reading of images using video4linux, one can obtain a pair of images from the cameras and save them to PNG files. The utility is in our NICO GitHub:

We have made some shots: the calibration and candy datasets obtained with that program can be inspected/mirror-downloaded here: (warning: some of them were taken with the old lenses, which have since been replaced; use calib_new_lenses/ for the new lenses)

Regarding the "official" software for NICO with python examples: the programs there that work with cameras fail to run, because our cameras have different IDs than those the software accepts. To be able to run those examples, you have to change the file


and add our cameras names:

   NICO_EYES = {
       "left": (...),   # name/ID of our left camera
       "right": (...),  # name/ID of our right camera
   }

and also in the file


we had to add the cameras, for example by copying the parameters of the other cameras already listed there (more proper would be to calibrate our cameras and determine the parameters). But even with these changes, some examples still fail, as they seem to struggle with the UYVY format.


  • Nico's ears contain microphones captured with high-quality studio-recording equipment at 96 kHz / 24-bit sampling
  • Nico's speaker
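A minimal recording sketch for the ear microphones, using the sounddevice library (an assumption; the NICO software may use a different audio stack). sounddevice has no 24-bit dtype, so int32 is used here, and the default input device is assumed to be the microphones (check with sounddevice.query_devices()).

```python
def n_frames(seconds, samplerate):
    """Number of sample frames in a recording of the given duration."""
    return int(round(seconds * samplerate))

def record(seconds, samplerate=96000, channels=2):
    """Record from the default input device at the microphones' 96 kHz rate."""
    import sounddevice as sd  # pip install sounddevice
    data = sd.rec(n_frames(seconds, samplerate),
                  samplerate=samplerate, channels=channels, dtype="int32")
    sd.wait()  # block until the recording is finished
    return data
```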