ChibiFace: A Sensor-Rich Android Tablet-Based Interface for Industrial Robotics

Publications:
Nurimbetov B., Saudabayev A., Temiraliuly D., Sakryukin A., Serekov A., Varol H.A., "ChibiFace: A Sensor-Rich Android Tablet-Based Interface for Industrial Robotics," IEEE/SICE International Symposium on System Integration (SII), 2015.

ChibiFace is a robotic guide for safe and efficient human-robot interaction. It resides on a tablet but can control external peripherals through numerous communication interfaces. ChibiFace was designed with human-robot interaction in industrial settings in mind; however, it can also be used in social robotics. Our main goal was to show that the internal sensors of the tablet are sufficient to guarantee the safety of the human user and to control the robot.



Figure 1. ChibiFace.

The hardware of ChibiFace consists of an Asus Google Nexus 7 tablet (1.2 GHz quad-core ARM Cortex-A9 Nvidia Tegra 3 T30L CPU, 1 GB RAM, 16 GB storage, 7-inch form factor with 1280×800 pixel resolution) and a 3D-printed enclosure with two Dynamixel motors, a BT-210 Bluetooth module, an OpenCM9.04 controller, and numerous connection methods. The tablet serves as the head of the robot, while the enclosure serves as the neck.
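
The text does not specify the wire protocol between the tablet and the OpenCM9.04 neck controller. As a minimal sketch, assuming the controller firmware parses plain-text pan/tilt commands arriving over the Bluetooth serial link (the command format and port name below are hypothetical, not the authors' actual protocol):

```python
# Minimal sketch of driving the 2-DOF neck over a Bluetooth serial link.
# ASSUMPTIONS: the OpenCM9.04 firmware parses "PAN:<deg> TILT:<deg>\n"
# commands (hypothetical format) and the BT-210 enumerates as a serial
# port; both are illustrative, not the paper's actual protocol.
import serial

def set_neck_pose(port: serial.Serial, pan_deg: float, tilt_deg: float) -> None:
    """Send a pan/tilt target (degrees) to the neck controller."""
    command = f"PAN:{pan_deg:.1f} TILT:{tilt_deg:.1f}\n"
    port.write(command.encode("ascii"))

if __name__ == "__main__":
    # "/dev/rfcomm0" is a typical Linux Bluetooth serial device; adjust as needed.
    with serial.Serial("/dev/rfcomm0", baudrate=57600, timeout=1.0) as neck:
        set_neck_pose(neck, pan_deg=15.0, tilt_deg=-5.0)  # look slightly right and down
```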



Figure 2. ChibiFace Hardware Architecture.

The tablet runs Android Lollipop with PocketSphinx and OpenCV for speech and image processing. It connects to a laptop with Windows 7 that runs Python and VAL3 (Stäubli Robotics Suite) to control the Stäubli industrial robot.
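
The command channel between the tablet and the laptop is likewise not detailed in the text. A minimal sketch of the laptop side, assuming a newline-delimited text protocol over TCP and a hypothetical forward_to_val3() hook standing in for the actual bridge to the VAL3 application:

```python
# Laptop-side sketch: receive high-level commands from the tablet over TCP
# and hand them to the VAL3 application controlling the Stäubli arm.
# ASSUMPTIONS: the port, the newline-delimited text protocol, and the
# forward_to_val3() hook are all hypothetical; the text does not detail them.
import socket

def forward_to_val3(command: str) -> None:
    """Placeholder for the actual bridge to the VAL3 program; here we just log."""
    print(f"-> VAL3: {command}")

def serve(host: str = "0.0.0.0", port: int = 5000) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        print(f"Tablet connected from {addr}")
        with conn, conn.makefile("r") as stream:
            for line in stream:               # one newline-terminated command per line
                command = line.strip()
                if command:
                    forward_to_val3(command)  # e.g., "STOP", "RESUME"

if __name__ == "__main__":
    serve()
```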



Figure 3. ChibiFace Hardware-Software architecture.


Figure 4. ChibiFace Emotions.

ChibiFace is capable of face detection using Haar feature-based cascade classifiers (OpenCV), face recognition using the OpenCV LBPH method, speech recognition with a PocketSphinx-based recognition library, speech generation using Google's native speech generation engine, and emotion animation by showing a sequence of images on the tablet screen. Face detection is used for face tracking (via the neck module) and distance estimation. Face recognition authenticates the user and checks whether the user has permission to perform certain tasks. Speech recognition lets the user control the robot with voice commands. Speech generation gives the robot the ability to answer questions and warn users about the manipulator state. Emotion animation conveys an intuitive notion of safety and danger.
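
The combination of face detection and distance estimation underpins the safety behavior. Below is a minimal sketch of that perception step using OpenCV's Haar cascade and the pinhole camera model; the focal-length and face-width constants are illustrative assumptions, not the authors' calibration values:

```python
# Sketch of the safety loop's perception core: detect a face with an OpenCV
# Haar feature-based cascade classifier and estimate its distance from the
# camera with the pinhole model. FOCAL_PX and FACE_WIDTH_M are assumed
# calibration constants; the text does not give the actual values.
import cv2

FOCAL_PX = 600.0      # camera focal length in pixels (assumed calibration)
FACE_WIDTH_M = 0.16   # average human face width in metres (assumed)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def estimate_distance(frame) -> float | None:
    """Return the estimated distance (m) to the nearest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # The nearest face appears widest; pinhole model: distance = f * W / w_pixels.
    w = max(w for (_, _, w, _) in faces)
    return FOCAL_PX * FACE_WIDTH_M / w

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)   # a webcam stands in for the tablet's front camera
    ok, frame = cap.read()
    if ok:
        d = estimate_distance(frame)
        print(f"distance: {d:.2f} m" if d is not None else "no face detected")
    cap.release()
```

In the full system, the estimated distance would be compared against a safety threshold to slow or stop the manipulator and to switch the displayed emotion.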



Figure 5. ChibiFace Industrial Setup.