
Don't Count on Tesla's Dojo Supercomputer to Jump-Start an AI Revolution

You’d have to be pretty brave to bet that applying more computing power and data to machine learning—a recipe that birthed ChatGPT—won’t lead to further advances of some kind in artificial intelligence. Even so, you’d be braver still to bet that the combo will produce specific advances or breakthroughs on a specific timeline, no matter how desirable.

A report issued last weekend by the investment bank Morgan Stanley predicts that a supercomputer called Dojo, which Tesla is building to boost its work on autonomous driving, could add $500 billion to the company’s value by providing a huge advantage in carmaking, robotaxis, and selling software to other businesses.

The report juiced Tesla’s stock price, adding more than 6 percent, or $70 billion—roughly the value of BMW, and considerably more than Elon Musk paid for Twitter—to the EV-maker’s market cap as of September 13.

The 66-page Morgan Stanley report is an interesting read. It makes an impassioned case for why Dojo, the custom processors that Tesla has developed to run machine learning algorithms, and the huge amount of driving data the company is collecting from Tesla vehicles on the road could pay huge dividends in the future. Morgan Stanley’s analysts say that Dojo will provide breakthroughs that give Tesla an “asymmetric” advantage over other carmakers in autonomous driving and product development. The report even claims the supercomputer will help Tesla branch into other industries where computer vision is critical, including health care, security, and aviation.

There are good reasons to be cautious about those grandiose claims. But you can see why, at this particular moment of AI mania, Tesla’s strategy might seem so enthralling. Thanks to a remarkable leap in the capabilities of the underlying algorithms, the mind-bending abilities of ChatGPT can be traced back to a simple equation: more compute × more data = more clever.

The wizards at OpenAI were early adherents to this mantra of moar, betting their reputations and their investors’ millions on the idea that supersizing the engineering infrastructure for artificial neural networks would lead to big breakthroughs, including in language models like those that power ChatGPT. In the years before OpenAI was founded, the same pattern had been seen in image recognition, with larger datasets and more powerful computers leading to a remarkable leap in the ability of computers to recognize—albeit at a superficial level—what an image shows.

Walter Isaacson’s new biography of Musk, which has been excerpted liberally over the past week, describes how the latest version of Tesla’s optimistically branded Full Self Driving (FSD) software, which guides its vehicles along busy streets, relies less on hard-coded rules and more on a neural network trained to imitate good human driving. This sounds similar to how ChatGPT learns to write by ingesting endless examples of text written by humans. Musk has said in interviews that he expects Tesla to have a “ChatGPT moment” with FSD in the next year or so.

Musk has made big promises about breakthroughs in autonomous driving many times before, including a prediction that there would be a million Tesla robotaxis by the end of 2020. So let’s consider this one carefully.

By developing its own machine learning chips and building Dojo, Tesla could certainly save money on training the AI systems behind FSD. This may well help it do more to improve its driving algorithms using the real-world driving data it collects from its cars, which competitors lack. But whether those improvements will cross an inflection point in autonomous driving or computer vision more generally seems virtually impossible to predict.


For one thing, FSD isn’t that much like ChatGPT. As the company’s engineers explained during its AI Day event last year, the feature is powered by multiple programs and machine learning systems designed to handle a raft of different road tasks, such as steering or decoding road markings. More data and more computation may produce significant advances in some of these, but a big leap in autonomous driving requires significant leaps in many or all of those sub-systems. ChatGPT’s remarkably general capabilities, by contrast, were enabled by improving a single underlying system—a monolithic algorithm that unspools text.

Another problem: Video and other sensor data are fundamentally different from text. Last week, I met with roboticists who explained that a central question for their field is whether the kind of scaling up that unlocked new capabilities in ChatGPT could transfer to robotic sensing, navigation, and reasoning. You can build a supercomputer to work on those problems. But learning from video data requires a lot more computing power than processing text, and making fundamental advances might require exponentially more. No one—not Tesla, not Morgan Stanley—knows for sure just how much data or how big a supercomputer one needs to make fundamental breakthroughs in robotics.

A third kink in Morgan Stanley’s Dojo dominance thesis is the idea that advances in autonomous driving will transfer to other problems. Learning to drive requires extensive understanding of the physical world, but it doesn’t teach a machine much about operating beyond the relatively controlled environment of the highway, with its rules and signage.

I asked Christian Gerdes, codirector of the Center for Automotive Research at Stanford (CARS), what he thinks of Tesla’s approach. He emailed back from a racetrack in Portugal, where he’s testing a self-driving system developed at his lab. Gerdes says there is a growing belief in his field that self-driving capabilities will scale with data and computing power, but that it’s still unclear how far this can go. “We have relatively simple neural networks learning the physics of racing,” Gerdes says of his own experiments. “The results are quite good but, interestingly, don’t always improve with more data.”

Maybe all you need is even moar data and silicon. By the Morgan Stanley report’s estimation, we’ll soon have an idea whether this is the case. It predicts that the next version of FSD will be unveiled at a Tesla AI Day in early 2024, and will demonstrate that Tesla has made fundamental breakthroughs in autonomous driving thanks to Dojo.

Perhaps. But given Tesla’s track record of promising an imminent self-driving utopia, I wouldn’t bet, or invest, on it.
