Taking a quantum leap: AI hardware for the automotive industry
More and more vehicle functions are based on artificial intelligence. However, conventional processors and even graphics chips are increasingly reaching their limits when it comes to the computations required for neural networks. Porsche Engineering reports on new technologies that will speed up AI calculations in the future.
Artificial intelligence (AI) is a key technology for the automotive industry, and fast hardware is equally important for the complex back-end calculations involved. After all, in the future it will only be possible to bring new features into series production with high-performance computers. “Autonomous driving is one of the most demanding AI applications of all,” explains Dr. Joachim Schaper, Senior Manager for AI and Big Data at Porsche Engineering. “The algorithms learn from a multitude of examples collected by test vehicles using cameras, radar or other sensors in real traffic.”
Dr. Joachim Schaper, Senior Manager for AI and Big Data at Porsche Engineering
Conventional data centers are increasingly unable to cope with the growing demands. “It now takes days to train a single variant of a neural network,” explains Schaper. In his opinion, one thing is therefore clear: automakers need new technologies for AI calculations that help the algorithms learn much faster. To achieve this, as many vector-matrix multiplications as possible must be executed in parallel in the complex deep neural networks (DNNs), a task in which graphics processing units (GPUs) specialize. Without them, the remarkable advances in AI in recent years would not have been possible.
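The operation at the heart of this workload is easy to state. Below is a minimal NumPy sketch of a single DNN layer; the shapes and values are purely illustrative, not taken from any production network:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((512, 1024))  # weight matrix of one layer
x = rng.standard_normal(1024)         # input activations (a vector)

y = W @ x                             # the vector-matrix multiplication
h = np.maximum(y, 0.0)                # nonlinearity (ReLU) on the result

print(h.shape)                        # (512,) activations for the next layer
```

Training a network means repeating this operation, and its gradient counterpart, billions of times, which is why hardware that parallelizes vector-matrix multiplication dominates AI computing.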
50 times the size of a GPU
However, graphics cards were not originally designed for AI use, but to process image data as efficiently as possible. When it comes to training algorithms for autonomous driving, they are increasingly being pushed to their limits. Specialized AI hardware is therefore required for even faster calculations. The Californian company Cerebras has presented a possible solution: its Wafer Scale Engine (WSE) is tailored to the requirements of neural networks by packing as much computing power as possible onto one giant chip. It is more than 50 times the size of a typical graphics processor and offers room for 850,000 compute cores, over 100 times as many as today’s top GPUs.
In addition, the Cerebras engineers have networked the compute cores with high-bandwidth data lines: according to the manufacturer, the Wafer Scale Engine’s on-chip network carries 220 petabits per second. Cerebras has also widened the classic bottleneck between memory and compute: at 20 petabytes per second, data travels almost 10,000 times faster than on high-performance GPUs.
Giant Chip: Cerebras’ Wafer Scale Engine packs massive computing power into a single integrated circuit with a side length of over 20 centimeters.
To save even more time, Cerebras borrows a trick from the brain, where neurons only work when they receive signals from other neurons; the many connections that are currently idle consume no resources. In DNNs, by contrast, the vector-matrix multiplications often involve multiplying by zero, which wastes time unnecessarily. The Wafer Scale Engine therefore skips these operations. “All zeros are filtered out,” Cerebras writes in its white paper on the WSE: the chip only performs operations that produce a non-zero result.
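The idea behind this zero-skipping can be illustrated in a few lines. The following sketch is purely conceptual; it shows only why zero operands contribute no useful work, not how the WSE filters them in hardware:

```python
import numpy as np

def sparse_dot(weights, activations):
    # Skip every multiplication whose activation is zero: those terms
    # cannot change the result, so performing them wastes time.
    total = 0.0
    for w, a in zip(weights, activations):
        if a != 0.0:          # "all zeros are filtered out"
            total += w * a    # only non-zero work is performed
    return total

w = np.array([0.5, -1.2, 3.0, 0.7])
a = np.array([0.0,  2.0, 0.0, 1.0])  # sparse activations (e.g., after ReLU)
print(sparse_dot(w, a))              # -1.7, computed with just 2 multiplies
```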
One drawback of the chip is its high electrical power requirement of 23 kW, which makes water cooling necessary. Cerebras has therefore developed its own server enclosure for use in data centers. The Wafer Scale Engine is already being tested in the data centers of several research institutes. AI expert Joachim Schaper believes the giant chip from California could also accelerate automotive development. “Using this chip, a week of training could theoretically be reduced to a few hours,” he estimates. “However, the technology has yet to prove this in practical tests.”
Light instead of electrons
As unusual as the new chip is, it still runs on conventional transistors, just like its predecessors. Companies such as Lightelligence and Boston-based Lightmatter instead want to use the much faster medium of light for AI calculations, and are building optical chips to do so. Optical DNNs could work “at least several hundred times faster than electronic ones,” the Lightelligence developers write.
“With the Wafer Scale Engine, a week of training could theoretically be reduced to just a few hours.” Dr. Joachim Schaper, Senior Manager for AI and Big Data at Porsche Engineering
To do this, Lightelligence and Lightmatter exploit the phenomenon of interference. When light waves amplify or cancel each other out, they form a light-dark pattern. If the interference is directed in a specific way, the new pattern corresponds to the vector-matrix multiplication of the old pattern; light waves can thus “do math.” To put this into practice, the Boston developers etched tiny waveguides onto a silicon chip. Like threads in a textile fabric, they cross one another several times, and the interference takes place at these crossing points. In between, tiny heating elements regulate the refractive index of the waveguides, shifting the phases of the light waves relative to one another. This makes it possible to control their interference and thereby perform vector-matrix multiplications.
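How interference can stand in for arithmetic is easiest to see with the smallest building block of such chips: a Mach-Zehnder interferometer, which applies a 2×2 unitary matrix to two incoming light amplitudes. The simulation below is a rough numerical sketch; the beam-splitter convention and the phase values are illustrative assumptions, not the actual design of Lightmatter or Lightelligence:

```python
import numpy as np

def mach_zehnder(theta, phi):
    # One Mach-Zehnder interferometer: two 50:50 beam splitters with
    # phase shifters (theta, phi), e.g. tiny heating elements. The
    # result is a 2x2 unitary matrix acting on two light amplitudes.
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    inner = np.diag([np.exp(1j * theta), 1.0])       # phase shift between splitters
    outer = np.diag([np.exp(1j * phi), 1.0])         # phase shift at the output
    return outer @ bs @ inner @ bs

# Light amplitudes entering the two waveguides ("the old pattern")
x = np.array([0.6, 0.8])

# The interference pattern at the outputs corresponds to U @ x
y = mach_zehnder(theta=0.7, phi=1.2) @ x
print(np.abs(y) ** 2)  # detected intensities; they sum to 1 since U is unitary
```

Meshes of many such interferometers can realize larger matrices, which is how a grid of crossing waveguides performs a full vector-matrix multiplication.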
However, the Boston companies are not doing without electronics entirely. They combine their light-based computers with conventional electronics that store the data and perform all calculations except the vector-matrix multiplications. These include, for example, the nonlinear activation functions that modify the output values of each neuron before they are passed on to the next layer.
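A hypothetical hybrid pipeline might look like the following sketch, in which the matrix multiplications stand in for the optical part while the nonlinearity is computed electronically; all names and shapes here are invented for illustration:

```python
import numpy as np

def relu(v):
    # Nonlinear activation: one of the steps that stays electronic
    return np.maximum(v, 0.0)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 8))   # weights of "optical" layer 1
W2 = rng.standard_normal((3, 4))   # weights of "optical" layer 2
x = rng.standard_normal(8)

h = relu(W1 @ x)   # optical matmul -> electronic activation in between
y = W2 @ h         # result handed back to the optics for the next layer
print(y)
```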

Computing with light: Lightmatter’s Envise chip uses photons instead of electrons to compute neural networks. Input and output data are supplied and received by conventional electronics.
With this combination of optical and digital computing, DNNs can be calculated extremely quickly. “Its main advantage is low latency,” explains Lindsey Hunt, a spokesperson for Lightelligence. This allows a DNN to detect objects in images faster, such as pedestrians and e-scooter riders; in autonomous driving, it could lead to quicker reactions in critical situations. “The optical system also makes more decisions per watt of electrical power,” says Hunt. That is especially important because growing computing power in vehicles comes increasingly at the expense of fuel economy and range.
Lightmatter and Lightelligence solutions can be inserted into mainstream computers as plug-in modules to speed up AI calculations, just like graphics cards. In principle, they could also be integrated into vehicles, for example to implement autonomous driving functions. “Our technology is well suited to serve as an inference engine for a self-driving car,” explains Lindsey Hunt. AI expert Schaper takes a similar view: “If Lightelligence succeeds in building components suitable for automobiles, this could greatly accelerate the introduction of complex AI functions in vehicles.” The technology is nearly ready for the market: the company is planning its first pilot tests with customers in 2022.
The quantum computer as an AI turbo
Quantum computers are somewhat further removed from practical application. They, too, could speed up AI calculations because they process large amounts of data in parallel. To do this, they work with so-called qubits. Unlike the classical unit of information, the bit, a qubit can represent the two binary values 0 and 1 simultaneously: the two values coexist in a state of superposition that is only possible in quantum mechanics.
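In the standard textbook notation (not taken from the article), this superposition is written as:

```latex
% A qubit's state is a superposition of the basis states |0> and |1>.
% The complex amplitudes alpha and beta give the measurement
% probabilities |alpha|^2 and |beta|^2, which must sum to one.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```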
“The more complicated the patterns, the more difficult it is for conventional computers to distinguish classes.” Heike Riel, director of IBM Research Quantum Europe/Africa
Quantum computers could boost artificial intelligence when it comes to classifying things, for example in traffic. There are many different categories of objects on the road, including bicycles, cars, pedestrians, signs, and dry or wet road surfaces. They differ in many properties, which is why experts speak of “pattern recognition in high-dimensional spaces.”
“The more complicated the patterns, the more difficult it is for conventional computers to distinguish the classes,” explains Heike Riel, who leads IBM’s quantum research in Europe and Africa. That is because with each dimension, computing the similarity of two objects becomes more expensive: how similar are an e-scooter rider and a pedestrian with a walker who is trying to cross the street? Unlike conventional computers, quantum computers can work efficiently in high-dimensional spaces. For certain problems, this property could prove useful, allowing them to be solved faster with quantum computers than with conventional high-performance computers.
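Riel’s cost argument can be made concrete with a classical similarity measure: every additional feature dimension adds work. The sketch below uses an RBF kernel and made-up six-dimensional feature vectors; both the kernel choice and the features are illustrative assumptions, not IBM’s method:

```python
import numpy as np

def rbf_similarity(a, b, gamma=0.5):
    # Classical similarity of two feature vectors. The computation
    # touches every feature, so the work grows with the dimension d.
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

# Hypothetical 6-dimensional descriptions (speed, height, width, ...)
scooter_rider = np.array([12.0, 1.8, 0.6, 3.1, 0.0, 1.0])
walker_user   = np.array([ 1.2, 1.7, 0.7, 2.9, 1.0, 0.0])
print(rbf_similarity(scooter_rider, walker_user))
```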

Heike Riel, director of IBM Research Quantum Europe/Africa
IBM researchers have analyzed statistical models that can be trained for data classification. Initial results suggest that cleverly chosen quantum models perform better than conventional methods for certain data sets. Quantum models are easier to train and appear to have higher capacity, allowing them to learn more complicated relationships.
Riel admits that while current quantum computers can be used to test these algorithms, they do not yet have an advantage over conventional computers. However, the development of quantum computers is advancing rapidly: both the number of qubits and their quality are constantly increasing. Another important factor is speed, measured in Circuit Layer Operations Per Second (CLOPS), which indicates how many layers of a quantum circuit a quantum computer can execute per second. It is one of the three key performance criteria of a quantum computer: scalability, quality, and speed.
In the foreseeable future, it should be possible to demonstrate the superiority of quantum computers for certain applications, that is, to show that they solve problems faster, more efficiently, and more accurately than conventional computers. But building a powerful, error-corrected, general-purpose quantum computer will still take time; experts estimate at least another ten years. The wait could be worth it: like optical chips or new architectures for electronic computers, quantum computers could hold the key to future mobility.
In brief
When it comes to AI calculations, not only conventional microprocessors but also graphics chips are now reaching their limits. Companies and researchers around the world are therefore working on new solutions. Wafer-scale chips and light-based computers are close to practical use; in a few years, they could be supplemented by quantum computers for particularly demanding calculations.