Self-driving cars are getting closer and closer to becoming an everyday reality. Although at first it may seem that research on autonomous cars is reserved for a very narrow group of researchers, we would like to show that this is not necessarily true. Actually, the only things you need to start playing with driverless cars are some hacking skills, a little bit of programming and a basic understanding of machine learning concepts – mainly deep and reinforcement learning.
Autonomous driving
Driverless cars have been a dream of engineers since the automotive industry was born, and the first approaches were made when the Ford Model T was still ruling the roads. Although the radio-controlled car presented by Houdina Radio Control in 1925 is far away from what we understand as an autonomous car in the 21st century, it might be considered the first attempt to construct an automobile that does not require a human behind the wheel. The following decades brought new ideas, like burying a series of detector circuits in the pavement (RCA Labs, 1958) to guide the car and determine its position and velocity.
The first “modern” approaches appeared in the 1980s. The ALV (Autonomous Land Vehicle) project used lidar sensors, computer vision and robotic control to drive a car at low speed. The project was an inspiration for ALVINN (Autonomous Land Vehicle In a Neural Network), which was taught to drive by processing video images from the onboard camera as a person drove. Another interesting solution was developed by the Eureka Prometheus project in 1995: it was able to drive for a mean distance of 9 km without human intervention, reaching speeds of up to 175 km/h on a German highway.
The field of autonomous driving has grown rapidly since then, but the real boom has come in recent years, as computational power became sufficient and top companies made large investments.
Google’s (Waymo) and Tesla’s self-driving cars are the best-known examples, but Nvidia and Uber also run successful research programmes in this field. Obviously, automotive companies are trying not to fall behind. For example, BMW has announced that it will release an autonomous car by 2021, while Volvo already offers a semi-automated mode and is working hard on fully automated systems.
Most self-driving cars carry a number of different sensors onboard. Although projects like Waymo or Tesla have managed to make their cars look indistinguishable from regular ones, they still analyse the surroundings with the help of sophisticated electronic sensors.
A good example of such a setup is the one presented by KITTI, which involves GPS, laser scanners, stereo cameras and varifocal lenses.
On the other hand, there are approaches similar to the aforementioned ALVINN and Nvidia concepts, which map the road image directly to steering commands. They don’t require any specialized sensors besides a camera, which makes them feasible in home settings.
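To make the idea more concrete, below is a minimal sketch of such an end-to-end network in Keras: a single camera frame goes in, a steering angle comes out. The layer sizes loosely follow Nvidia’s published architecture, but the exact values are illustrative rather than a faithful reproduction.

```python
# Minimal end-to-end sketch: a CNN that maps a camera frame to a steering angle.
# Layer sizes loosely follow Nvidia's paper; exact values here are illustrative.
from keras.models import Sequential
from keras.layers import Lambda, Conv2D, Flatten, Dense

def build_steering_model(input_shape=(66, 200, 3)):
    model = Sequential([
        # Normalise pixel values from [0, 255] to [-1, 1]
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=input_shape),
        Conv2D(24, 5, strides=2, activation='relu'),
        Conv2D(36, 5, strides=2, activation='relu'),
        Conv2D(48, 5, strides=2, activation='relu'),
        Conv2D(64, 3, activation='relu'),
        Conv2D(64, 3, activation='relu'),
        Flatten(),
        Dense(100, activation='relu'),
        Dense(50, activation='relu'),
        Dense(10, activation='relu'),
        Dense(1),                         # predicted steering angle
    ])
    model.compile(optimizer='adam', loss='mse')  # plain regression on the angle
    return model
```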
Radio-Controlled Cars
A full-size vehicle might be replaced by an RC car in order to start experimenting with self-driving concepts. Although scaling real-world traffic down to the size of a toy might be difficult, such a setup is good enough for closed tracks such as racing circuits.
A great example of such a solution, with a brief explanation of how to assemble everything, was published on this blog. The idea presented by Nvidia was literally scaled down to the size of an RC car, then trained and tested on a playground road in the backyard. No special sensors were used, just a popular webcam (Logitech C920) mounted on the toy vehicle. The images collected from the camera, together with the steering input read from the remote controller, were used to build a training set for a Convolutional Neural Network. Once trained, the network produced steering commands based on the “seen” road image. Even though deep learning was involved, an ordinary gaming desktop PC was enough for fitting the network.
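For illustration, fitting such a network on logged data could look roughly like the sketch below, reusing the `build_steering_model()` helper from the earlier snippet. The CSV layout and file names are assumptions made up for this example, not the exact format used in the referenced project.

```python
# Hypothetical training sketch: each CSV row points to a saved camera frame
# and the steering value recorded from the remote controller at that moment.
import csv
import numpy as np
from PIL import Image

def load_dataset(log_path='driving_log.csv', size=(200, 66)):
    images, angles = [], []
    with open(log_path) as f:
        for row in csv.DictReader(f):
            frame = Image.open(row['frame']).resize(size)   # PIL expects (width, height)
            images.append(np.asarray(frame, dtype=np.float32))
            angles.append(float(row['steering']))
    return np.array(images), np.array(angles)

X, y = load_dataset()
model = build_steering_model()                  # the CNN sketched earlier
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=64)
model.save('steering_model.h5')
```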
A very similar concept was used to build an RC car that is able to keep its position within road boundaries while being driven on… a carpet. A CNN was used in this case as well, but what is especially worth mentioning is the Donkey repository, which is intended to make assembling such a car easier.
Finally, I have found some crazy but very interesting approaches. A self-drifting RC car is one of them. It actually has a really good motivation, as it puts the car into extreme situations and applies Reinforcement Learning to handle the vehicle safely when it starts sliding. There is also a “full-size” Stanford project related to this miniature example.
Computer Games
Modern computer games offer very complex virtual worlds that can be extremely useful for practising autonomous driving without even leaving your desk. Grand Theft Auto is a pretty obvious example that can be used to mimic even very complicated traffic situations. However, it is not the only game that can be involved in self-driving experiments.
OpenAI Universe makes experiments with computer games particularly easy, as it provides a complete environment for testing AI agents. As you can read in the launch blog post, you can turn any program into a Gym environment, where your Reinforcement Learning agent can be trained to control a game just like a human does – *“by looking at screen pixels and operating a virtual keyboard and mouse”*. The initial release mainly supported Flash, web-browser and Atari games, so rather simple virtual environments, but more are expected soon.
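To give a feeling for the API, here is roughly the “hello world” loop from the Universe launch materials, lightly adapted. The environment id and the hard-coded “accelerate” action are only placeholders for a real agent.

```python
# The agent receives raw screen pixels and answers with virtual keyboard events.
import gym
import universe  # importing this registers the Universe environments with Gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)        # starts a local Docker container running the game
observation_n = env.reset()

while True:
    # Trivial placeholder policy: keep the up arrow pressed. A real RL agent
    # would choose actions based on the observed pixels instead.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```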
When the idea for this post was born (around the 13th of January 2017), there was a lot of excitement around an OpenAI announcement – GTA V had been given an official Universe environment that worked with both deep and reinforcement learning methods. Unfortunately, both the blog post and the resources silently disappeared from the Internet and are no longer available. As of the beginning of February, no official statement had been made about the situation. However, I would recommend keeping an eye on it, as it seemed to be a perfect solution.
Princeton researchers have developed a unique approach to self-driving that uses deep learning and Torcs (The Open Racing Car Simulator). Torcs is a highly portable racing game that has been widely used as a research platform. Because the code is publicly available, it is very easy to modify the virtual environment and create any kind of conditions. Moreover, Torcs offers access to a number of meaningful indicators, which can imitate the sensor readings of a real-world autonomous car. Writing a few lines of code gives instant access to the car’s position, wheel angle, distance to preceding cars, throttle, etc.
Deep learning is not the only approach applicable to Torcs. It is quite easy to find online resources where Reinforcement Learning was used. Basically, the open-source nature of the game makes it available for any kind of experiment you can imagine, as the short sketch below illustrates.
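As a taste of how those readings can be consumed from code, here is a rough sketch based on the community gym_torcs wrapper, which exposes Torcs through the Gym interface. The argument and field names follow that wrapper and may differ between versions, so treat them as assumptions rather than a definitive API.

```python
# Rough sketch: drive Torcs with a hand-written baseline policy while reading
# the simulated sensors (heading angle, lateral track position, speed, ...).
import numpy as np
from gym_torcs import TorcsEnv   # community wrapper, not part of Torcs itself

env = TorcsEnv(vision=False, throttle=False)   # steering-only control for simplicity
ob = env.reset(relaunch=True)

for step in range(1000):
    # Steer towards the track axis using the car's angle and lateral offset;
    # a learned (deep or RL) policy would replace this single line.
    steer = ob.angle - 0.5 * ob.trackPos
    ob, reward, done, _ = env.step(np.array([steer]))
    print(step, 'speedX:', ob.speedX, 'trackPos:', ob.trackPos, 'reward:', reward)
    if done:
        break

env.end()
```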
Computer games are becoming complex enough to emulate the real world, so there is active research on collecting data in virtual environments and evaluating models trained on this data in real traffic. The Princeton approach mentioned in one of the previous paragraphs is one example. If this idea seems interesting to you, I encourage you to take a look at this article, which is fully devoted to moving from the virtual to the real world.
Online Courses
I can’t imagine writing this article without mentioning the outstanding online course provided by Udacity. The Self-Driving Car Engineer Nanodegree will guide you through computer vision, deep learning and vehicle kinematics. The only disadvantage is the price, as it costs around $800 for each of the three terms.
There is a free alternative prepared by MIT – Introduction to Deep Learning and Self Driving Cars. It is not as comprehensive as the course from the previous paragraph (it focuses only on deep learning approaches), but it can still be considered an interesting introduction to the field.
Summary
We have just scratched the surface and mentioned a few of the most interesting projects that we are aware of. Learning autonomous driving concepts is more a matter of will than of resources, as there are plenty of them at your fingertips. We hope that this article has shown that this is actually true!
Graphics by Anna Langiewicz