Ultra Light Affect Recognition


For the aHRI class, we developed a face and emotion recognition ROS package built on pretrained models:

  • Face detection: Ultra-light face detector
  • Face recognition: MobileFaceNet
  • Emotion recognition:
    • Arch: VGG network
    • Dataset: FER2013 facial expression dataset
    • Accuracy: 65.93%
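
Roughly, the three models chain into a single pipeline: detect faces, embed each face for recognition, and classify its expression. The sketch below illustrates that chain assuming ONNX exports of the models; the file names, input sizes, normalization constants, label order, and the `decode_and_nms` helper are assumptions for illustration, not the package's actual code.

```python
# Hypothetical sketch of the three-model inference chain (assumed ONNX exports).
import cv2
import numpy as np
import onnxruntime as ort

detector = ort.InferenceSession("ultra_light_320.onnx")  # face detection
embedder = ort.InferenceSession("mobilefacenet.onnx")    # face recognition
classifier = ort.InferenceSession("vgg_fer2013.onnx")    # emotion recognition

# Standard FER2013 label order (assumed here).
FER_LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def detect_faces(frame_bgr):
    """Ultra-light detector: 320x240 RGB input; decoding the raw score/box
    outputs (prior boxes + NMS) is left to a hypothetical helper."""
    rgb = cv2.cvtColor(cv2.resize(frame_bgr, (320, 240)), cv2.COLOR_BGR2RGB)
    blob = ((rgb.astype(np.float32) - 127.0) / 128.0).transpose(2, 0, 1)[None]
    scores, boxes = detector.run(None, {detector.get_inputs()[0].name: blob})
    return decode_and_nms(scores, boxes, frame_bgr.shape)  # hypothetical helper

def embed_face(face_bgr):
    """MobileFaceNet: 112x112 input, returns a unit-norm embedding."""
    blob = cv2.resize(face_bgr, (112, 112)).astype(np.float32)
    blob = ((blob - 127.5) / 128.0).transpose(2, 0, 1)[None]
    emb = embedder.run(None, {embedder.get_inputs()[0].name: blob})[0][0]
    return emb / np.linalg.norm(emb)

def classify_emotion(face_bgr):
    """VGG emotion classifier: 48x48 grayscale input assumed, as in FER2013."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    blob = cv2.resize(gray, (48, 48)).astype(np.float32)[None, None] / 255.0
    scores = classifier.run(None, {classifier.get_inputs()[0].name: blob})[0]
    return FER_LABELS[int(np.argmax(scores))]
```

Face recognition then amounts to comparing the `embed_face` output against stored embeddings of known faces, e.g. by cosine similarity.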

We implemented two nodes, one for face recognition and one for emotion recognition. The face recognition node first detects human faces in images streamed from an Astra Stereo S U3 camera, then recognizes each detected face against our trained face data. It publishes the cropped face image, together with the recognized name and pixel location, to the emotion recognition node. The emotion recognition node classifies the expression in each face image and sends the result over MQTT to Austin, our Robotic Bat. The package also includes an optional face distance measurement node, which computes the distance between the camera and a detected face from the depth value at the face's pixel location.
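
As a rough illustration of this wiring (not the package's actual code), the emotion recognition node's end of the pipeline might look like the rospy sketch below. The topic name, MQTT broker address, and MQTT topic are assumptions; a real implementation would use a custom message carrying the name and pixel location alongside the image, and `classify_emotion` stands in for the VGG classifier from the sketch above.

```python
#!/usr/bin/env python
# Hypothetical sketch of the emotion recognition node (ROS 1 / rospy).
# Topic names, the broker address, and the MQTT topic are assumptions.
import json

import rospy
import paho.mqtt.client as mqtt
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class EmotionNode:
    def __init__(self):
        self.bridge = CvBridge()
        # MQTT client that forwards results to Austin (broker address assumed).
        self.client = mqtt.Client()
        self.client.connect("localhost", 1883)
        self.client.loop_start()
        # The face recognition node is assumed to publish cropped faces here.
        rospy.Subscriber("/face_recognition/face_image", Image, self.on_face)

    def on_face(self, msg):
        face = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        emotion = classify_emotion(face)  # classifier from the earlier sketch
        self.client.publish("austin/emotion", json.dumps({"emotion": emotion}))


if __name__ == "__main__":
    rospy.init_node("emotion_recognition")
    EmotionNode()
    rospy.spin()
```

The optional distance node can follow the same pattern: subscribe to the camera's depth image, read the depth value at the face's pixel location, and publish the resulting distance.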

Diagram

RQT Graph

Result
