The robot head developed at the University of California, San Diego uses self-learning algorithms to make its facial expressions more lifelike. The head contains 30 servo motors, each standing in for a facial muscle, which move the face by pulling on tiny strings. Without machine learning, these motors had to be calibrated by hand so that the right combinations moved together; the learning approach automates that calibration and produces more realistic expressions, because they develop naturally. In outline, the robot makes a facial expression, observes the result in a mirror, and correlates the observed expression with the motor movements that produced it. Once it has learned the relationship between facial expressions and motor commands, the robot can compose new expressions on its own. An interesting result occurred during the research when one of the servos stopped working: the model kept learning, compensating for the failed servo with nearby muscles to produce the necessary movement.
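The loop described above can be sketched in a few lines: babble random motor commands, observe the resulting "expression" (here a simulated mirror view), fit a model relating the two, then invert that model to hit a target expression. All names, sizes, and the linear face model below are illustrative assumptions, not the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVOS = 30     # one servo per simulated facial "muscle"
N_FEATURES = 10   # expression features extracted from the mirror image

# Hidden ground-truth mapping from servo positions to expression features,
# standing in for the robot's real face and camera (hypothetical, linear).
TRUE_MAP = rng.normal(size=(N_FEATURES, N_SERVOS))

def observe_expression(servo_positions):
    """Simulate 'looking in the mirror': servo positions -> expression features."""
    return TRUE_MAP @ servo_positions

# 1. Motor babbling: try random servo settings and record what the face does.
trials = rng.uniform(-1.0, 1.0, size=(200, N_SERVOS))
observations = np.array([observe_expression(t) for t in trials])

# 2. Learn the servo -> expression mapping by least squares.
learned_map, *_ = np.linalg.lstsq(trials, observations, rcond=None)
learned_map = learned_map.T  # shape (N_FEATURES, N_SERVOS)

# 3. Invert the learned model: find servo positions for a target expression
#    the robot has never tried (e.g. features of a "smile").
target = rng.normal(size=N_FEATURES)
servos, *_ = np.linalg.lstsq(learned_map, target, rcond=None)
error = np.linalg.norm(observe_expression(servos) - target)

# 4. Fault tolerance: if one servo fails (stuck at zero), re-solve the same
#    target using only the remaining servos, as in the broken-servo anecdote.
working = np.delete(np.arange(N_SERVOS), 7)  # pretend servo 7 has failed
fallback, *_ = np.linalg.lstsq(learned_map[:, working], target, rcond=None)
full = np.zeros(N_SERVOS)
full[working] = fallback
error_fallback = np.linalg.norm(observe_expression(full) - target)

print(f"error: {error:.2e}, error with failed servo: {error_fallback:.2e}")
```

Because there are more servos than expression features, the system is redundant: many servo combinations produce the same expression, which is exactly why the loss of one servo can be absorbed by its neighbours.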