GLADAS

Bit by bit, we’re building the car of the future. Speed control? Check. Trajectory planning? Check. Road sign reading? Pretty much check. But that’s only a portion of what must be done. As explained in our previous post on the Self-Driving Car series, we still have a ways to go — especially in interactions with humans.
In the past few months, a critical effort has been undertaken in one aspect of car-human interaction: gesture learning (GL).
The paper, “GLADAS: Gesture Learning for Advanced Driver Assistance Systems,” details an extensive methodology, called GLADAS, for the testing, training, and validation of self-driving car hand gesture recognition algorithms. GLADAS allows for plug-and-play benchmark evaluations of GL algorithm performance and reliability with several feedback metrics. The paper currently sets the standard at an accuracy of 94.56% and an F1 score of 85.91%.
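To make the “plug-and-play” idea concrete, here is a minimal sketch of what such a benchmark evaluation could look like. The `GestureClassifier` interface and `evaluate` helper below are illustrative assumptions, not the actual GLADAS API; the sketch only shows how any gesture recognizer could be dropped in and scored with the accuracy and F1 metrics the paper reports.

```python
# Hypothetical sketch of a plug-and-play gesture-recognition benchmark.
# GestureClassifier and evaluate() are assumptions for illustration,
# not part of the published GLADAS framework.
from typing import Protocol, Sequence

from sklearn.metrics import accuracy_score, f1_score


class GestureClassifier(Protocol):
    """Any gesture-recognition model can be plugged in via this interface."""

    def predict(self, frames: Sequence) -> Sequence[int]:
        """Map a sequence of input frames to predicted gesture labels."""
        ...


def evaluate(model: GestureClassifier,
             frames: Sequence,
             labels: Sequence[int]) -> dict:
    """Run the model on benchmark data and report standard metrics."""
    predictions = model.predict(frames)
    return {
        "accuracy": accuracy_score(labels, predictions),
        # Macro-averaged F1 treats every gesture class equally.
        "f1": f1_score(labels, predictions, average="macro"),
    }
```

In this sketch, swapping in a different recognizer only requires implementing `predict`, which is the spirit of a benchmark that reports comparable accuracy and F1 numbers across algorithms.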
Engineers, researchers, and hobbyists are all invited to use GLADAS. We hope that the framework enables a host of subsequent advancements in the new field of gesture learning.