April 08, 2026

AI in autonomous vehicles: Powering the next era of mobility

By Uwe Brandenberg, Global Lead - Automotive Advisory, DXC Technology




The term “autonomous vehicles” evokes thoughts of futuristic science fiction films, right? Think again. The future is here: autonomous vehicles are roaming our streets. Waymo clocks up 200,000 rides a week across the United States. Apollo Go has over 400 robotaxis in Wuhan, China, alone.

The publicity hasn’t been all positive for autonomous vehicles, however. Waymo recently made the news when its cars blocked the roads during the San Francisco blackout. The cars needed longer than usual to assess the state of intersections, according to Waymo’s spokesperson. In 2024, Waymo’s cars were also caught on camera circling parking lots and honking at each other at night.

While these incidents are not safety-critical, they illustrate the operational maturity challenges still facing AI in autonomous vehicles. That said, AI and autonomous vehicles do come with an appealing promise: making mobility more accessible to those who can’t drive and reducing injury-causing crashes.

What’s the role of AI in autonomous vehicles?

For a long time, automation in vehicles relied on rule-based systems. Those systems have certain advantages: they’re extremely predictable, and it’s easy to understand how particular decisions are made. But there’s a tradeoff. They can’t adapt to scenarios beyond the pre-defined rules, which makes them unsuitable for dynamic environments.

Cue artificial intelligence (AI). Machine learning algorithms, computer vision, and predictive analytics enable the underlying models to learn from data and adapt to various scenarios.

The result? Artificial intelligence in autonomous vehicles enables those models to handle a multitude of real-world scenarios.

That said, since edge cases are extremely rare, the technology requires vast amounts of data to learn to identify and adapt to them. We’re talking about billions of kilometers of driving, far more than any fleet could realistically cover on real roads alone, which is one reason synthetic data and simulation are essential (see the graphic “Billions of kilometres are needed to identify edge cases” in How GenAI is helping drive vehicle autonomy, World Economic Forum, 4/03/2025).

How do self-driving cars use AI? Core applications

The extent to which AI is used in an autonomous vehicle depends on that vehicle’s level of autonomy. Autonomous vehicles are classified on SAE’s six-level scale (L0–L5), where L1 describes simple driver assistance capabilities (e.g., cruise control); the driver controls an L1 vehicle at all times. L5 vehicles, at the other end of the scale, are fully automated and operate independently under any conditions and in any environment.

The highest level of autonomy currently on the market is L3, with Mercedes-Benz being the first to reach this milestone. (Mercedes-Benz is also testing L4 vehicles in Beijing.) L3 cars can drive themselves independently, but only under certain conditions, and the driver may have to take control of the vehicle.

In contemporary autonomous vehicles, AI performs three functions: it enables the cars to “see” their surroundings, make decisions with a degree of foresight, and control the car's components.

Perception systems

Autonomous vehicles can use multiple technologies to “see” the world around them:

  • LiDAR sensors emit laser pulses and measure how long the light takes to bounce back
  • Computer vision analyzes real-time camera footage from multiple angles
  • Radar uses radio waves to measure the distance to objects
  • Ultrasonic sensors work similarly to radar, using ultrasound instead of radio waves
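The ranging sensors in this list all share the same time-of-flight principle, which a few lines of Python can illustrate. This is a simplified sketch: real sensors add beam steering, noise filtering, and calibration.

```python
# Time-of-flight ranging: the shared principle behind LiDAR, radar, and
# ultrasonic sensors. distance = (wave speed x round-trip time) / 2

SPEED_OF_LIGHT = 299_792_458.0   # m/s (LiDAR, radar)
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 °C (ultrasonics)

def tof_distance(round_trip_seconds: float, wave_speed: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return wave_speed * round_trip_seconds / 2.0

# A LiDAR return arriving 100 ns after emission: an object ~15 m away
print(round(tof_distance(100e-9, SPEED_OF_LIGHT), 2))   # -> 14.99
# An ultrasonic echo after 10 ms: an obstacle ~1.7 m away (parking range)
print(round(tof_distance(0.010, SPEED_OF_SOUND), 3))    # -> 1.715
```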

Artificial intelligence helps detect objects in 3D representations or via camera footage and assign semantic labels to elements (e.g., pedestrian, road, traffic light). It also enables cars to track the detected entities over time, predict their trajectories, and identify and monitor the vehicle’s position and orientation.
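Tracking detected entities and predicting their trajectories can be shown with a deliberately simplified constant-velocity tracker. Production systems use Kalman filters or learned motion models; the names and numbers below are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Track:
    label: str            # semantic label, e.g. "pedestrian"
    x: float              # last observed position, meters
    y: float
    vx: float = 0.0       # estimated velocity, m/s
    vy: float = 0.0

def update(track: Track, x: float, y: float, dt: float) -> None:
    """Re-estimate velocity from two consecutive detections."""
    track.vx = (x - track.x) / dt
    track.vy = (y - track.y) / dt
    track.x, track.y = x, y

def predict(track: Track, horizon: float) -> tuple[float, float]:
    """Constant-velocity trajectory prediction `horizon` seconds ahead."""
    return track.x + track.vx * horizon, track.y + track.vy * horizon

ped = Track("pedestrian", x=10.0, y=2.0)
update(ped, 9.0, 2.5, dt=0.5)      # pedestrian drifting toward the lane
print(predict(ped, horizon=1.0))   # -> (7.0, 3.5)
```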

Decision-making and planning

AI for autonomous vehicles doesn’t just analyze the environment using the perception system; it also identifies the most suitable course of action. What’s more, AI helps calculate the best route to the destination, adjusting it, if necessary, according to real-time road conditions.

To make split-second decisions, the AI system is trained to select the safest and most efficient option based on the environment, road conditions, and so on.
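A toy version of such cost-based action selection might look like this. The candidate maneuvers, weights, and cost values are invented for illustration; real planners score thousands of candidate trajectories against far richer cost terms.

```python
def choose_action(candidates):
    """Pick the maneuver with the lowest combined cost.
    Each candidate: (name, collision_risk, comfort_cost, progress_cost)."""
    # Safety dominates: weight collision risk far above the efficiency terms.
    W_SAFETY, W_COMFORT, W_PROGRESS = 1000.0, 1.0, 1.0

    def cost(c):
        name, risk, comfort, progress = c
        return W_SAFETY * risk + W_COMFORT * comfort + W_PROGRESS * progress

    return min(candidates, key=cost)[0]

options = [
    ("keep_lane",   0.005, 0.0, 2.0),   # low risk, steady progress
    ("change_lane", 0.05,  1.0, 0.5),   # faster, but riskier
    ("hard_brake",  0.0,   5.0, 6.0),   # safest, worst comfort/progress
]
print(choose_action(options))   # -> keep_lane
```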

Control and actuation

Once the model makes a decision based on sensory input and predictions, AI in self-driving cars initiates the necessary action. To that end, the system transmits commands to the vehicle’s actuators, which in turn control steering, braking, and acceleration.
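The actuation loop itself often comes down to classical feedback control. A minimal PID-style speed controller paired with a toy vehicle model sketches the idea; the gains and actuator limits here are illustrative, not production values.

```python
class PID:
    """Minimal PID controller, e.g. for tracking a target speed."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a 25 m/s (90 km/h) target with a toy integrator vehicle model.
# ki is 0 here to sidestep integral windup while the command saturates.
pid = PID(kp=0.5, ki=0.0, kd=0.05)
speed = 0.0
for _ in range(200):                          # 20 s at a 10 Hz control rate
    accel_cmd = pid.step(25.0, speed, dt=0.1)
    accel = max(min(accel_cmd, 3.0), -5.0)    # clamp to actuator limits
    speed += accel * 0.1
print(round(speed, 1))   # -> 25.0
```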





The many machine learning models powering autonomous driving

When it comes to AI and autonomous vehicles, machine learning models are the foundational technology powering perception, localization, and more:

| Component | Role | Machine learning models | Used for |
| --- | --- | --- | --- |
| Perception layer | Object detection, semantic segmentation, depth estimation | Convolutional neural networks (CNNs) | Detecting and localizing dynamic agents and static infrastructure in the environment |
| | | Semantic segmentation models | Parsing road geometries, lane markings, vegetation, etc. |
| Localization and mapping | Spatial anchoring, map maintenance | Classical: LiDAR-based SLAM, Extended Kalman Filters | Detecting the vehicle’s position in real time, calculating speed and distances |
| | | Visual odometry models: DeepVO, PoseNet | Estimating 6-DoF poses from image sequences in visually complex and GPS-denied environments |
| | | Semantic SLAM, enhanced by CNNs or semantic segmentation outputs | Environmental representations for long-term map maintenance |
| | | Reinforcement learning agents | Detecting map drift and optimizing map fusion heuristics (in research and advanced systems) |
| Planning layer | Decision-making, trajectory optimization, translating perceptual understanding into safe and efficient trajectories | Supervised learning techniques (behavioral cloning) | Mimicking expert demonstrations |
| | | Reinforcement learning models | Learning reward-optimized policies in high-dimensional state-action spaces through extensive simulation |
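Behavioral cloning, listed above under the planning layer, can be shown in miniature: fit a policy to expert demonstrations with supervised learning. This one-dimensional least-squares sketch uses invented demonstration data; real systems learn deep networks over full sensor states.

```python
# Behavioral cloning in miniature: fit a linear policy, steering = w * curvature,
# to "expert" (road curvature, steering angle) demonstrations via least squares.
# The data points below are invented for illustration.

expert_data = [
    (0.00, 0.00), (0.05, 0.11), (0.10, 0.19), (0.20, 0.41), (-0.10, -0.20),
]

num = sum(x * y for x, y in expert_data)
den = sum(x * x for x, y in expert_data)
w = num / den   # closed-form least-squares fit (no intercept term)

def policy(curvature: float) -> float:
    """Learned policy: imitates the expert's steering response."""
    return w * curvature

print(round(policy(0.15), 3))   # steering for an unseen curvature -> 0.304
```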

 



Generative AI and simulation in autonomous vehicles

Training AI for autonomous vehicles to handle edge cases is a matter of safety, but real-world data for these is scarce. Synthetic data created by generative AI models is filling the gap. Waymo and Waabi are already using it to train their models.

Generative Adversarial Networks (GANs) can create realistic, detail-rich images of urban environments — with everything from pedestrians and realistic lighting conditions to moving vehicles and real-world traffic patterns.

The benefits of using GANs are obvious. Training and testing AI in self-driving cars using synthetic data reduces training costs and saves time. Plus, it mitigates the safety risks that come with training in real-world conditions.
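A GAN is too heavy to sketch here, but the downstream idea of systematically generating rare scenarios that a fleet would seldom encounter can be illustrated with simple parameter randomization. Every scenario field and value range below is invented for illustration.

```python
# Hypothetical sketch: parameter randomization as a lightweight stand-in for
# GAN-based scene generation when building edge-case training sets.
import random

random.seed(42)   # reproducible sampling for this demo

WEATHER = ["clear", "rain", "fog", "snow"]
ACTORS = ["pedestrian", "cyclist", "stray animal", "stopped truck"]

def sample_scenario() -> dict:
    """Draw one randomized driving scenario description."""
    return {
        "weather": random.choice(WEATHER),
        "time_of_day": random.choice(["day", "dusk", "night"]),
        "actor": random.choice(ACTORS),
        "actor_distance_m": round(random.uniform(5, 80), 1),
        "ego_speed_kmh": random.randrange(20, 120, 10),
    }

# Generate a batch, then count the rare fog-at-night cases a real-world
# fleet might take millions of kilometers to encounter.
batch = [sample_scenario() for _ in range(1000)]
rare = [s for s in batch if s["weather"] == "fog" and s["time_of_day"] == "night"]
print(len(batch), len(rare))
```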

Hardware foundations of AI in autonomous vehicles

AI-powered automotive software solutions can’t be deployed on just any vehicle. They require specialized hardware that supports real-time data collection, processing, analytics, and decision-making.

Data collection, for example, involves gathering real-time measurements from LiDAR, radar, camera, and ultrasonic sensors. Analyzing the data, in turn, can be done using edge computing or centralized processing. Edge computing is prevalent due to latency, reliability and safety constraints, despite higher local compute demands.

AI and self-driving cars may process terabytes or petabytes of data every day, and traditional CPUs can’t keep up with those volumes. That’s why, in autonomous vehicles, CPUs are supplemented with GPUs optimized for AI operations. Systems-on-chip (SoCs), such as NVIDIA’s Orin and Thor, have also emerged as a way to improve computational efficiency while reducing power consumption.

Key constraints of AI in self-driving cars

AI can’t be applied to autonomous vehicles blindly: as cars start roaming roads outside their training zones, a single overlooked constraint may cause a serious accident. At least 25 fatalities in the U.S. have been associated with incidents involving autonomous or partially autonomous vehicle systems.

Going forward, the following three constraints may shape the use of AI in autonomous vehicles:

  • Balancing computational power with energy efficiency. In-vehicle power is limited, making energy efficiency paramount. At the same time, CPUs and GPUs have to process vast amounts of data, and thermal management must work within tight space constraints.
  • Solving reliance on connectivity. Urban environments offer stable, high-speed connectivity; rural areas often do not, and they are more susceptible to unstable or broken connections caused by network disruptions or extreme weather. This can bring vehicles to a standstill, as happened with Cruise cars that lost connection with the remote operations center.
  • Maintaining the currency of AI models. Artificial intelligence in self-driving cars can’t be trained once and left to work forever. Model drift may cause AI systems to return faulty output if relationships between input and output change with time.
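The model-drift concern in the last bullet is typically addressed with continuous monitoring. Here is a minimal, hypothetical sketch (the thresholds and window sizes are illustrative): flag the model for retraining when its rolling error climbs well above the validation baseline.

```python
from collections import deque

class DriftMonitor:
    """Flag suspected model drift from a rolling window of prediction errors."""
    def __init__(self, baseline_error: float, window: int = 100, factor: float = 1.5):
        self.baseline = baseline_error   # error measured at validation time
        self.factor = factor             # how much degradation to tolerate
        self.errors = deque(maxlen=window)

    def observe(self, error: float) -> bool:
        """Record one prediction error; return True if drift is suspected."""
        self.errors.append(error)
        if len(self.errors) < self.errors.maxlen:
            return False                 # not enough evidence yet
        rolling = sum(self.errors) / len(self.errors)
        return rolling > self.factor * self.baseline

monitor = DriftMonitor(baseline_error=0.10)
healthy = [monitor.observe(0.1) for _ in range(100)]   # errors match baseline
drifted = [monitor.observe(0.3) for _ in range(100)]   # input distribution shifts
print(any(healthy), drifted[-1])   # -> False True
```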



AI in autonomous vehicles: What happens next?

We have yet to see L4 and L5 autonomous capabilities at scale, and expectations are somewhat lukewarm as to how soon startups and incumbents will deploy them (see “Timelines for Level 4 and Level 5 autonomous-vehicle use cases have extended by two to three years on average,” McKinsey & Company).

For the moment, AI and self-driving cars have to deal with more pressing issues like rising computational requirements, lack of training data for edge cases, and connectivity. These three trends are poised to shape the evolution of the technology:

  • Hardware accelerators. Today, most vehicles use a combination of CPUs and GPUs for AI processing. That said, the industry is shifting towards using GPUs as primary accelerators.
  • Vehicle-to-everything (V2X) communication. V2X may help reduce uncertainty in controlled or well-instrumented environments.
  • Simulations powered by synthetic data. Platforms like SaferDrive AI and NVIDIA’s Omniverse already offer off-the-shelf solutions for such simulations.




Frequently asked questions

What does AI actually do in an autonomous vehicle?

At a basic level, AI is there to handle what a human driver would normally do. It takes in a lot of signals from the car, figures out what they mean, and decides how the vehicle should behave. Sometimes that’s about reacting to something nearby. Other times it’s about keeping the car moving the right way, at the right speed.

What data are self-driving cars trained on?

The systems don’t learn from just one source. Some of the data comes from real driving, collected over time. Some of it is created artificially, using simulations. Cars rely on sensors like cameras or LiDAR to gather this information, which is then used to train and improve the models.

When will autonomous vehicles become mainstream?

The World Economic Forum expects L2 and L2+ autonomous cars to become widely adopted by 2030. McKinsey predicts that vehicles with more advanced levels of automation (L3 and L4) will make up 12% of new sales in 2030 and 37% in 2035.

Do self-driving cars need AI?

Yes. The use of AI in autonomous vehicles is what makes them work. AI helps the car notice what’s around it, understand traffic, choose a route, and react while driving.