The Final 1%
Autonomous vehicle (AV) technology is steadily gaining both popularity and traction in the artificial intelligence community. Such technology promises a future where the elderly no longer have to touch a steering wheel, where teenagers no longer have to take the infamous driving test, and where the average worker can enjoy a traffic-free commute. With roughly 5 million vehicle crashes annually in the US, 93% of which result from human error, AV technology has great potential to improve human safety and health.
However, all of this potential relies on our ability to actually build AVs. Seven years ago, Google co-founder Sergey Brin claimed that Google would release an autonomous car by 2018. And yet, a year past that deadline, AVs are still not ready. On average, human drivers travel roughly 10^8 miles per fatality, while the best self-driving cars travel roughly 10^4 miles per disengagement (an event where a human must intervene because the computer fails). In other words, one could argue that AVs are currently only 0.01% as good as human-driven cars.
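The 0.01% figure follows directly from the two order-of-magnitude estimates above; a minimal sketch of the arithmetic (the variable names are illustrative, not from the paper):

```python
# Order-of-magnitude safety comparison from the figures in the text.
miles_per_fatality_human = 1e8       # human drivers: ~10^8 miles per fatality
miles_per_disengagement_av = 1e4     # best AVs: ~10^4 miles per disengagement

# Ratio of AV performance to human performance, treating a
# disengagement as the AV analogue of a human failure event.
ratio = miles_per_disengagement_av / miles_per_fatality_human
print(f"AV-to-human ratio: {ratio:.4%}")  # -> AV-to-human ratio: 0.0100%
```

Note that this comparison is deliberately rough: a disengagement is far less severe than a fatality, so the ratio is a pessimistic bound rather than a precise measure.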
Given these low success rates, estimates have been revised. Elon Musk's Tesla now claims it will have a self-driving taxi fleet service next year, Ford insists it will achieve Level 4 ("High Automation") by 2021, and the Institute of Electrical and Electronics Engineers predicts that 75% of cars will be driverless by 2040. Why are self-driving cars still not ready? For AV technology to be adopted, it must be as safe as possible.
Unfortunately, making anything 100% safe is impossible, but engineers should in principle be able to train cars to handle 99.99% of driving situations. Current state-of-the-art technology has not yet reached that threshold. All driving situations can be divided into two categories: normal and edge case.
99% of automobile driving is completely normal and can already be handled by self-driving cars, e.g. going straight down a highway. Whenever companies such as Waymo or ArgoAI drive around to collect data, most of that training data falls into this 99% normal category. However, the remaining 1% of driving consists of edge cases: scenarios too rare and dangerous to replicate. For example, a police officer may use hand gestures to direct traffic instead of the traffic light, sunlight or heavy rain may blind the camera to a small child running across the road, or an abnormal highway traffic accident may occur. Because of their rarity, these situations are scarce in the training data, leaving the cars poorly prepared to respond.
Covering this final 1% of edge cases is crucial to saving lives (of both passengers and bystanders). Failing to do so would mean releasing a car unprepared for dangerous-but-possible events. Pan et al. '17 introduced one of the first attempts at simulating and training self-driving cars in a virtual world. Expanding on their efforts, we simulate these dangerous scenarios in a virtual world, allowing our AV model to train on and better understand these edge case scenarios.
D. J. Fagnant and K. Kockelman. Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations, May 2015.
A. Dosovitskiy, G. Ros, F. Codevilla, A. López, and V. Koltun. CARLA: an open urban driving simulator. CoRR, abs/1711.03938, 2017.
S. Gupta, M. Vasardani, and S. Winter. Conventionalized gestures for the interaction of people in traffic with autonomous vehicles. pages 55–60, Oct 2016.
D. Dey and J. Terken. Pedestrian interaction with vehicles: roles of explicit and implicit communication. pages 109–113, Sep 2017.
E. Olson. The Moore's law for self-driving vehicles – May Mobility, Feb 2019.
L. Kolodny. Elon Musk claims Tesla will have 1 million robotaxis on roads next year, but warns he's missed the mark before, Apr 2019.
J. Walker. The self-driving car timeline – predictions from the top 11 global automakers, May 2019.
M. Dempsey. (H)edge cases in autonomous vehicles – frontier technology, Mar 2016.
X. Pan, Y. You, Z. Wang, and C. Lu. Virtual to real reinforcement learning for autonomous driving, Sep 2017.