One of the first things a child learns about driving is that green means go and red means stop. Both rules center on the fundamental fixture of traffic control: the stoplight. Stoplights are considered one of the most important tools for keeping traffic in order, and yet in many parts of the world there are no traffic lights at all. Even in places where there are, occasional power outages knock them out of service. In those cases, a police officer steps in to direct traffic, with their hands.
In another situation, a man might simply be standing at the side of the road. A smart car would slow down in case he started walking. Here too, the man must gesture to the car, either to signal that he intends to cross or to wave the car onward.
Any human driver must understand hand gestures; they are absolutely crucial in everyday driving. Self-driving cars are no exception: they too must be able to identify, interpret, and react to human hand gestures. When entering an intersection controlled by a police officer, or approaching a crosswalk with pedestrians waiting at the corner, vehicle-to-human communication will be key.
In this upcoming series of posts, we investigate how a basic knowledge of Artificial Intelligence, Deep Learning, and Neural Networks can go a long way toward tackling one of the most under-researched challenges in self-driving.