Robots are coming. Whether as humanoids, home assistants or self-driving cars, they are coming. They are moving out of orderly, synthetic and isolated manufacturing plants into the chaotic, ever-changing and interactive real world. And it's this interaction that will separate machines into lifeless appliances and sociable robots. Understanding both verbal and non-verbal communication is the last layer of robotic development, though it may be one of the most challenging.
Human Pose Estimation is the problem of localizing human body parts, such as hands, eyes or feet, from a single RGB camera, like the one in your smartphone. Solving this problem would mean a robot could understand your movements, predict your next action and, eventually, beat you in a Kung-Fu battle. Deep learning is our best ally here. Its ability to process images, grasp their content and improve with experience has done much to make Human Pose Estimation a reality.
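To make the idea concrete, here is a minimal sketch of how many deep pose estimators decode their output: the network predicts one heatmap per body joint, and the joint's pixel location is read off as the heatmap's argmax. The joint names, heatmap size and decoding function below are illustrative assumptions, not tied to any specific model.

```python
import numpy as np

# Illustrative joint set; real models often predict 17+ COCO keypoints.
JOINTS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

def decode_heatmaps(heatmaps):
    """Return an (x, y) pixel coordinate per joint.

    `heatmaps` is a (num_joints, H, W) array, one confidence map per
    joint, as produced by a typical heatmap-based pose network.
    """
    keypoints = {}
    for name, hm in zip(JOINTS, heatmaps):
        # The most confident pixel is taken as the joint location.
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        keypoints[name] = (int(x), int(y))
    return keypoints

# Fake network output: one hot pixel per joint on a 64x64 map.
hms = np.zeros((len(JOINTS), 64, 64))
targets = [(10, 20), (30, 5), (30, 60), (55, 15), (55, 50)]  # (y, x)
for i, (y, x) in enumerate(targets):
    hms[i, y, x] = 1.0

print(decode_heatmaps(hms))
```

In a real system the heatmaps come from a convolutional network rather than being hand-built, but the decoding step is essentially this simple.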
Throughout this talk we'll explore the problem of Human Pose Estimation, how deep learning has revolutionized the field and how it will impact our everyday lives. Live demos guaranteed!