As a student at Middle Tennessee State University, I was a Graduate Student Researcher in the Real-time and Embedded Control, Computing, and Communication (REC3) Lab, supervised by Dr. Lei Miao.
My research involved interfacing computer vision techniques with an existing Raspberry Pi-based robot prototype, the FAll DEtection Robot (FADER), to carry out detection, distance estimation, one-dimensional and two-dimensional navigation, and fall detection, in that order.
The aim is to place such mobile robots in the homes of elderly people who live alone and have them detect falls so that help can arrive quickly.
The advantages of our approach are that it is:
- Non-invasive (with respect to the body)/non-participatory, i.e., the elderly person does not need to remember to wear or carry anything
- Mobile and portable
- Easy to assemble
- Designed as an isolated system and so is non-invasive with respect to privacy.
As of December 2018 (Fall 2018), I had achieved detection, distance estimation, and navigation in 1-D space (see images and video below).
Detection: I integrated a Pi Camera with FADER. To do this, I tested the two versions of the Pi Camera, the Standard and the Pi NoIR, and chose the Pi NoIR. Next, we considered three detection approaches: face detection, face recognition, and object detection. We chose deep-learning-based object detection, specifically the Single Shot Detector (SSD) as the framework with MobileNet as the backbone architecture.
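The detection step can be sketched as follows. On the robot, frames would typically be run through OpenCV's DNN module (e.g., `cv2.dnn.readNetFromCaffe` plus `net.forward()`); the helper below only parses the standard SSD output tensor to find the best person detection, and the class ID and tensor layout assumed here are those of the common MobileNet-SSD model trained on PASCAL VOC, which may differ from the exact model FADER used.

```python
import numpy as np

PERSON_CLASS_ID = 15  # "person" in the common MobileNet-SSD / VOC label map

def pick_best_person(detections, frame_w, frame_h, conf_threshold=0.5):
    """From an SSD output tensor of shape (1, 1, N, 7), return
    (confidence, (x1, y1, x2, y2)) for the highest-confidence person,
    or None. Each row is [_, class_id, confidence, x1, y1, x2, y2],
    with box coordinates normalized to [0, 1]."""
    best = None
    for row in detections[0, 0]:
        cls, conf = int(row[1]), float(row[2])
        if cls == PERSON_CLASS_ID and conf >= conf_threshold:
            # Scale normalized coordinates up to pixel coordinates.
            box = (row[3:7] * [frame_w, frame_h, frame_w, frame_h]).astype(int)
            if best is None or conf > best[0]:
                best = (conf, tuple(box))
    return best
```

Keeping the parsing separate from the network call makes it easy to test off-robot with synthetic detection tensors.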
Distance Estimation: To estimate how far the robot is from the detected person, we used linear regression to obtain a mathematical relationship between that distance and the dimensions of the detection bounding box.
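A minimal sketch of such a regression, using `numpy.polyfit`; the calibration pairs below (bounding-box height in pixels vs. measured distance in cm) are invented for illustration and are not FADER's actual calibration data.

```python
import numpy as np

# Hypothetical calibration data: bounding-box height (px) observed at
# each measured distance (cm). Real values would come from calibration runs.
box_heights = np.array([400.0, 300.0, 220.0, 170.0, 140.0])
distances = np.array([100.0, 150.0, 200.0, 250.0, 300.0])

# Least-squares fit of distance = a * height + b (1st-degree polynomial).
a, b = np.polyfit(box_heights, distances, 1)

def estimate_distance(box_height):
    """Estimate distance (cm) from a detection bounding-box height (px)."""
    return a * box_height + b
```

In practice the box height shrinks roughly inversely with distance, so regressing on 1/height (still a linear fit in the regressed variable) may track calibration data better over a wide range.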
1-D Navigation: Using a threshold-based algorithm (TBA) and the estimated distances, we realized three states for 1-D navigation: approaching, waiting, and retreating.
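The three-state logic can be sketched as a simple threshold check; the two distance thresholds below are placeholders, not FADER's tuned values.

```python
# Hypothetical distance thresholds (cm) for the three 1-D navigation states.
TOO_FAR = 200    # farther than this: drive toward the person
TOO_CLOSE = 100  # closer than this: back away

def nav_state_1d(distance_cm):
    """Map an estimated distance to one of the three 1-D navigation states."""
    if distance_cm > TOO_FAR:
        return "approaching"
    if distance_cm < TOO_CLOSE:
        return "retreating"
    return "waiting"
```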
FADER without the Pi Camera
SSD Object Detection Running on the Pi Camera
As of November 2019, we had achieved navigation in 2-D space and fall detection, and had tested the robot in both mildly occluded and more heavily occluded lab spaces.
2-D Navigation: 2-D navigation was broken down into three parts: tracking, turning/pivoting, and 1-D navigation.
2-D Navigation: Tracking + Turning/Pivoting + 1-D Navigation
We achieved both tracking and pivoting using threshold-based algorithms applied to the coordinates of the center of the detected person and the center of the camera frame. As part of 2-D navigation, I made the following modifications to the initial FADER prototype:
- Adding an Arduino Uno to work in a master-slave configuration with the Raspberry Pi
- Restricting Motor Control from four independent wheels to two independent sides
- Adding encoders to measure the distance covered by the robot
- Separate Power Supply for the Raspberry Pi to minimize noise
- Implementing a Seeking Function in the software
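The threshold-based pivot decision described above can be sketched as below; the frame width and pixel tolerance are illustrative values, not FADER's tuned parameters.

```python
# Hypothetical values: horizontal center of a 300-px-wide frame, and how
# far the person's center may drift from it before the robot pivots.
FRAME_CENTER_X = 150
CENTER_TOLERANCE_PX = 40

def pivot_command(person_center_x):
    """Threshold-based pivot decision from the horizontal offset between
    the detected person's center and the camera frame's center."""
    offset = person_center_x - FRAME_CENTER_X
    if offset > CENTER_TOLERANCE_PX:
        return "pivot_right"
    if offset < -CENTER_TOLERANCE_PX:
        return "pivot_left"
    return "centered"  # person centered: hand over to 1-D navigation
```

Once the person is centered, the 1-D approaching/waiting/retreating logic takes over, which is how pivoting and 1-D navigation compose into 2-D navigation.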
Fall Detection: Using our detection software, I tested videos from the internet showing falls, as well as videos recorded in our lab spaces simulating falls in five positions: lying on the back, lying on the stomach, lying on the left side, lying on the right side, and slumping to a sitting position with the back against the wall. These tests provided us with two metrics for fall detection, and we implemented a two-stage threshold-based algorithm to detect when a fall has occurred.
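The two metrics are not named above, so the sketch below assumes, purely for illustration, that stage one checks an instantaneous bounding-box aspect ratio and stage two requires that fall-like posture to persist across frames before raising an alert; the actual FADER metrics and thresholds may differ.

```python
# Assumed metrics (not confirmed by the write-up): bounding-box aspect
# ratio (width/height) for stage 1, and persistence in frames for stage 2.
ASPECT_RATIO_THRESHOLD = 1.2  # wider than tall suggests a lying posture
CONFIRM_FRAMES = 30           # stage 2: persistence required before alerting

def make_fall_detector():
    """Two-stage threshold-based fall detector over a stream of boxes."""
    state = {"count": 0}

    def update(box_w, box_h):
        # Stage 1: instantaneous posture check on the bounding box.
        if box_h > 0 and box_w / box_h > ASPECT_RATIO_THRESHOLD:
            state["count"] += 1
        else:
            state["count"] = 0  # upright posture resets the confirmation
        # Stage 2: only declare a fall once the posture has persisted.
        return state["count"] >= CONFIRM_FRAMES

    return update
```

Running the check per frame and gating the alert on persistence keeps brief detection glitches from triggering false alarms.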
Testing: We carried out a preliminary test in our lab space with one person, then extended tests with six people in two lab spaces (one mildly occluded and one more heavily occluded). Results showed a precision of 100% and a sensitivity of 42%.
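For reference, the two reported metrics are computed as follows; the counts in the example are illustrative only (the actual test counts are not given above), chosen so they reproduce 100% precision and 42% sensitivity.

```python
def precision_and_sensitivity(tp, fp, fn):
    """Precision = TP / (TP + FP); sensitivity (recall) = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts only: 21 correctly detected falls, 0 false alarms,
# 29 missed falls would yield the reported 100% precision, 42% sensitivity.
p, s = precision_and_sensitivity(tp=21, fp=0, fn=29)
```

A precision of 100% means every alert FADER raised was a real fall; the 42% sensitivity means many falls went undetected, which points to the detection thresholds as the place to tune next.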
Simulated Fall Positions
Preliminary Test: FADER Successfully Navigating and Detecting Falls
Fall Detection Notification from FADER’s Onboard Computer