**Site under construction**
Lingdong Huang, Kearnie Lin, Vicki Long, Tatyana Mustakos
Our group decided to experiment with robot feeding: controlling Baxter to recognize facial features and subsequently move food to a subject's mouth. The project combines robot-arm control, computer vision/facial recognition, and robot learning.
We chose this theme as a balance between fun and utility: while the project could playfully feed a lazy person who refuses to move, it could also serve as a legitimate assistive device for disabled or similarly impaired patients who would normally have trouble eating.
Since the project's explicit goal is to move the robot arm to a target (the location of the food, the subject's mouth, etc.), we plan to break the work into small, simple tasks and gradually increase the difficulty and variety (e.g., food and utensil variability) with each success.
Use an @andrew.edu email to be able to view the linked materials.
Timeline: Our premise is to start with basic tasks and raise our goals over time as we achieve each smaller benchmark.
Our thought process has been as follows:
Scooping/Picking up food particles from container
Movement/Transfer food from container to mouth
Week 1: Introduction and learning Baxter program, test trial of Baxter simulator, basic template of project website
Week 2: Completed first test script for successfully moving arm to cup at source location, grabbing and picking up the cup with grippers, and moving to target location
Week 3: Expanded the cup script to return the arm to its original location and to pick up a designated cup from any location. Changed the wrist angle so the arm grabs and holds the cup from the side, leaving the straw accessible to the subject for practical feeding; this involved three movements (moving in front of the cup, moving forward to encapsulate it, and gripping it). Also began testing utensil holding with the gripper, simulated with a wooden stick positioned horizontally atop the cup with the "handle" hanging off the rim: the current script moves the arm to the handle, grips the stick, and carries it to the target destination.
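The three-movement side grasp above can be sketched as a small geometry helper. This is an illustrative sketch, not our actual Baxter script: positions are (x, y, z) in meters in the robot's base frame, and the approach axis and standoff distance are assumed values.

```python
# Sketch of the Week 3 side-grasp sequence: move in front of the cup,
# move forward to encapsulate it, then close the gripper.
STANDOFF = 0.10  # assumed pre-grasp distance in front of the cup (m)

def side_grasp_waypoints(cup_xyz, approach_axis=(1.0, 0.0, 0.0)):
    """Return the pre-grasp and grasp positions for a side grip on the cup."""
    x, y, z = cup_xyz
    ax, ay, az = approach_axis
    # 1) pre-grasp: back off from the cup along the approach axis
    pre_grasp = (x - STANDOFF * ax, y - STANDOFF * ay, z - STANDOFF * az)
    # 2) grasp: move forward so the gripper fingers straddle the cup
    grasp = (x, y, z)
    # 3) after reaching `grasp`, the gripper is commanded to close
    return [pre_grasp, grasp]

waypoints = side_grasp_waypoints((0.75, -0.20, 0.10))
```

On the real robot each waypoint would be handed to Baxter's inverse-kinematics service to get joint angles; here we only show the Cartesian bookkeeping.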
Week 4: Introduction to OpenCV; initial facial-recognition testing succeeded at recognizing facial landmarks from a local laptop camera, and we recognized a test bowl with applied patterns to strengthen object recognition
Week 5: Continued working with OpenCV to optimize cup recognition in arbitrary camera scenes via explicit RGB thresholds; explored other facial-recognition options and identified dlib and other object-recognition libraries as possible toolkits
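The explicit-RGB thresholding idea looks roughly like the following sketch. The threshold values and the toy image are illustrative stand-ins, not our calibrated numbers:

```python
import numpy as np

# Sketch of explicit-RGB thresholding for orange-cup detection.
def find_orange_pixels(img):
    """img: HxWx3 uint8 RGB array. Returns the (x, y) centroid of 'orange'
    pixels, or None if no pixel passes the threshold."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    # orange: strong red, moderate green, little blue (assumed ranges)
    mask = (r > 180) & (g > 60) & (g < 160) & (b < 90)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.mean()), int(ys.mean()))

# toy image: gray background with an orange square at columns 30-39, rows 10-19
img = np.full((60, 80, 3), 128, dtype=np.uint8)
img[10:20, 30:40] = (230, 120, 40)
centroid = find_orange_pixels(img)
```

The centroid is what later feeds the arm-movement logic: the arm steers until the cup's centroid sits at the image center.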
Week 6: Integrated OpenCV cup recognition into Baxter's wrist camera, with visual output on the lab computer explicitly showing the entire cup being detected; considered switching from RGB to HSV, plus other optimizations; started integrating Baxter's arm-movement response to the cup detection
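The motivation for the RGB-to-HSV switch can be shown with the standard-library `colorsys` module: when the lighting dims, every RGB channel of the cup changes, but its hue barely moves, so a hue threshold is more robust. The pixel values below are made-up illustrations:

```python
import colorsys

# Same orange surface under two lighting conditions (illustrative values).
bright_orange = (230, 120, 40)   # cup pixel in good light
dim_orange    = (150, 78, 26)    # same surface, dimmer light

def hue_deg(rgb):
    """Convert an 8-bit RGB triple to a hue angle in degrees."""
    r, g, b = (c / 255.0 for c in rgb)
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

# Every RGB channel changed substantially, yet the two hues land within
# a fraction of a degree of each other.
h1, h2 = hue_deg(bright_orange), hue_deg(dim_orange)
```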
Week 7: Continued combining cup recognition with robot-arm movement: created a script that asks the user to confirm that the movement direction is correct when the wrist camera detects the cup, then resumed testing Baxter's moves (from its own perspective) right, left, forward, and back as necessary
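The direction choice in that script reduces to comparing the cup's pixel centroid against the image center. A minimal sketch, assuming a 640x400 wrist-camera frame and an illustrative dead-zone tolerance (neither is our tuned value):

```python
# Pick arm moves (from Baxter's perspective) that re-center the detected cup.
IMG_W, IMG_H = 640, 400   # assumed wrist-camera resolution
TOL = 20                  # pixels of slack before we bother moving

def choose_move(cx, cy):
    """cx, cy: pixel centroid of the detected cup. Returns a list of moves."""
    moves = []
    if cx < IMG_W // 2 - TOL:
        moves.append("left")
    elif cx > IMG_W // 2 + TOL:
        moves.append("right")
    if cy < IMG_H // 2 - TOL:
        moves.append("forward")   # cup high in frame: reach further out
    elif cy > IMG_H // 2 + TOL:
        moves.append("back")
    return moves or ["centered"]
```

In the actual workflow the user confirms each suggested move before the arm executes it.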
Week 8: Debugged and refined Baxter's movement relative to the cup, correcting the necessary angles and the trigonometry relating the arm angles to the robot's main body; fully integrated HSV recognition with other edits to improve overall cup detection; explicitly altered arm movement to steer around the cup rather than knock it over; and integrated gripper control to grab the cup once the arm has moved within range
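The trigonometry in question is essentially planar aiming: given the cup's horizontal offset from the arm's mount point, compute how far the arm must rotate and reach. This is a simplified illustration, not the full joint-angle math:

```python
import math

# Given the cup's offset from the shoulder in the horizontal plane,
# compute the yaw to face it and the planar distance to cover.
def aim_at(dx, dy):
    """dx, dy: cup offset in meters (forward, leftward).
    Returns (yaw_degrees, planar_reach_m)."""
    yaw = math.degrees(math.atan2(dy, dx))   # rotation toward the cup
    reach = math.hypot(dx, dy)               # straight-line distance to it
    return yaw, reach

yaw, reach = aim_at(0.6, 0.6)
```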
Week 9: Further testing of cup recognition with real-time output from Baxter's wrist-camera POV during object detection. This benchmark required thorough testing to ensure that, in all future feeding runs, Baxter could confidently retrieve the food item at hand; in this case we deliberately chose an orange cup for testing.
Week 10: Carnival Break!
Week 11: Fine-tuned object recognition for distance measurement: we created a Python dictionary mapping the real-world physical distance between Baxter's grippers and the cup to the corresponding arm/wrist angles in degrees, extrapolating a curve that makes arm movement toward the target item accurate once the object is centered in Baxter's wrist camera. Planned future work on facial-landmark recognition, and relocated our setup (accounting for the distance differences) for the next benchmark, since the area behind the table where we had tested object recognition held too many items and cords.
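The distance-to-angle dictionary plus interpolation can be sketched as follows; the calibration numbers are made-up placeholders, not our actual measurements:

```python
# Sketch of the Week 11 calibration: measured gripper-to-cup distance (m)
# mapped to the arm/wrist angle (deg) that reached it, with linear
# interpolation between the recorded points.
calib = {0.10: 12.0, 0.20: 25.0, 0.30: 41.0, 0.40: 60.0}

def angle_for_distance(d):
    """Interpolate the arm angle for a gripper-to-cup distance d (meters)."""
    pts = sorted(calib.items())
    if d <= pts[0][0]:          # clamp below the calibrated range
        return pts[0][1]
    if d >= pts[-1][0]:         # clamp above the calibrated range
        return pts[-1][1]
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if d0 <= d <= d1:
            t = (d - d0) / (d1 - d0)
            return a0 + t * (a1 - a0)

angle = angle_for_distance(0.25)
```

With more calibration points the piecewise-linear curve approximates the true distance-angle relationship closely enough for reliable reaches once the cup is centered in the camera.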
Week 12: Ran final tests of object recognition, tinkering accordingly with the cup at the new test location for facial-landmark recognition, and ensured that the cup script still works at this temporary face-testing location. Integrated and tested facial-landmark recognition, and documented the cup/drink feeding task with a target subject. Then tested a large spoon as the utensil and cereal as the first test food, successfully calibrating a practical spoonful and bringing it to the subject's mouth. Edited all scripts to prompt when Baxter recognizes that the subject's mouth is open; this was done through a script integrated with the head camera that measures the openness of the mouth by the number of relatively darker pixels on the face. Lastly, implemented and test-ran our mouth-wiping script, which uses similar measurements and arm-movement calibration (like feeding), with some dabbing and left-right movement at the subject's mouth to simulate a wiping action; it is scripted to prompt after a feeding run, once the head camera senses that the subject's mouth has closed.
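The dark-pixel mouth-openness check can be sketched like this; the region coordinates and thresholds are illustrative stand-ins for our calibrated values:

```python
import numpy as np

# Sketch of the mouth-openness check: within a face crop, count relatively
# dark pixels (an open mouth shows a dark cavity) against a threshold.
DARK = 60             # grayscale intensity below which a pixel is "dark"
OPEN_FRACTION = 0.15  # fraction of dark pixels in the region => mouth open

def mouth_is_open(gray_face, mouth_box):
    """gray_face: HxW uint8 grayscale face crop; mouth_box: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = mouth_box
    region = gray_face[y0:y1, x0:x1]
    dark_fraction = np.mean(region < DARK)
    return bool(dark_fraction > OPEN_FRACTION)

# toy face: uniformly light, with a dark open-mouth patch in one case
face = np.full((100, 100), 180, dtype=np.uint8)
face[70:85, 35:65] = 20                      # dark mouth cavity
opened = mouth_is_open(face, (65, 90, 30, 70))
closed = mouth_is_open(np.full((100, 100), 180, dtype=np.uint8), (65, 90, 30, 70))
```

The feeding and wiping scripts gate on this boolean: feeding prompts once the mouth reads as open, and wiping prompts after a feeding run once it reads as closed again.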
More recent and current documentation can be found in the linked formal presentations above!