Saturday, August 4, 2018

Elon Musk Says Tesla's Onboard Computer Is Blazingly Fast

begin quote from: Elon Musk Says Tesla's Onboard Computer Is Blazingly Fast

Elon Musk Says Tesla Has A Blazingly Fast Onboard Computer To Aid Autopilot

Elon Musk, CEO of Tesla. (Photo by Joshua Lott/Getty Images)
Tesla's second-quarter earnings call included a discussion of the computational hardware system that Tesla uses for its Autopilot driver assistance functionality. Tesla CEO Elon Musk revealed that Tesla is building its own computational hardware, as opposed to the traditional automotive approach of sourcing computational units from semiconductor suppliers. Furthermore, Musk revealed that Tesla's computer is, "an order of magnitude improvement in the frames per second."
Musk included the top three leaders of Autopilot on the call: Stuart Bowers (VP of Engineering), Peter Bannon (Director of Silicon Engineering), and Andrej Karpathy (Director of AI). It was Musk himself, though, who made the big reveal:
But it's an incredible job by Pete and his team to create this, the world's most advanced computer designed specifically for autonomous operation. And as a rough sort of whereas the current NVIDIA's hardware can do 200 frames a second, this is able to do over 2,000 frames a second and with full redundancy and fail-over.

Bannon provided a bit more detail that sheds some light on what Tesla might have done:
So, like two years ago when I joined Tesla, we did a survey of all of the solutions that were out there for running neural networks, including GPUs.... We had the benefit of having the insight into seeing what Tesla's neural networks looked like back then and having projections of what they would look like into the future, and we were able to leverage all of that knowledge and our willingness to totally commit to that style of computing to produce a design that's dramatically more efficient and has dramatically more performance than what you can buy today.
One way to squeeze more performance out of a chip is to design it for a specific set of computational needs, as opposed to the broad range of tasks that a CPU or GPU must handle. Taken to the extreme, it's possible to design a chip specifically for a particular neural network architecture, which constrains the chip's size, connections, heat, and data movement. All of that can make the chip much faster at running that particular architecture. The tradeoff, of course, is severely constrained flexibility.
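That tradeoff can be sketched in a few lines of Python. The toy example below is purely illustrative and assumes nothing about Tesla's actual design; the class names, layer shapes, and operations are invented for the illustration. A "fixed-function" design bakes the network's layer shapes in at design time, so every stage is a plain matrix multiply with no runtime configuration, but it refuses to run any other network. A general-purpose executor accepts whatever layers it is handed, at the cost of per-layer dispatch and no baked-in assumptions.

import numpy as np

# Hypothetical fixed-function design: the layer shapes are frozen at
# "design time", so buffers and datapaths could be sized exactly for them.
FIXED_LAYER_SHAPES = [(1024, 512), (512, 128), (128, 10)]

class FixedFunctionAccelerator:
    """Runs exactly one network layout and nothing else."""
    def __init__(self, weights):
        if [w.shape for w in weights] != FIXED_LAYER_SHAPES:
            raise ValueError("this design only runs the network it was built for")
        self.weights = weights

    def run(self, x):
        for w in self.weights:
            x = np.maximum(x @ w, 0.0)  # matrix multiply + ReLU, shapes known in advance
        return x

class GeneralPurposeExecutor:
    """A GPU-like executor: accepts any list of layers, at the cost of
    per-layer dispatch and no baked-in assumptions about shapes."""
    def run(self, x, layers):
        for op, w in layers:
            if op == "dense":
                x = np.maximum(x @ w, 0.0)
            elif op == "scale":
                x = x * w
            else:
                raise NotImplementedError(op)
        return x

weights = [np.random.randn(*s).astype(np.float32) for s in FIXED_LAYER_SHAPES]
x = np.random.randn(1, 1024).astype(np.float32)

print(FixedFunctionAccelerator(weights).run(x).shape)   # (1, 10)
print(GeneralPurposeExecutor().run(x, [("dense", w) for w in weights]).shape)
# Adding a fourth layer would still run on GeneralPurposeExecutor, but
# FixedFunctionAccelerator would reject it at construction time.

In real silicon, that specialization buys speed and power efficiency rather than just stricter input checking, but the flexibility cost is the same in kind: a sufficiently different network shape means new hardware, or hardware that sits partly unused.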
Several of Bannon's statements hint that Tesla may have decided to go in that direction. By highlighting the "projections of what [Tesla's neural networks] would look like into the future", and "our willingness to totally commit to that style of computing", Bannon might be indicating that they have designed their chip for a very specific type of machine learning architecture.
This would give Tesla a huge advantage in the present, at the risk of being unable to leverage any new deep learning breakthroughs discovered in the coming years. Given how heavily Tesla relies on updating its software, Bannon and Karpathy presumably have a lot of confidence in what their machine learning models will look like in the future.
Although he said very little on the call, Tesla's Andrej Karpathy is one of the world's foremost experts in deep learning. If Tesla were going to bet the future of their machine learning architecture on anybody's projections, he would be the person to ask. And the decisions he's made this year may become very important for Tesla down the road.
What do you think? Tweet me @dsilver829!
I lead the Self-Driving Car Engineer Nanodegree Program at Udacity, which has trained thousands of engineers to work on autonomous vehicles. Udacity students have joined self-driving car teams at companies like Lyft, Mercedes-Benz, and NVIDIA. Prior to Udacity, I was a resea...
