April 08, 2026
By Uwe Brandenberg, Global Lead - Automotive Advisory, DXC Technology
The term “autonomous vehicles” evokes thoughts of futuristic science fiction films, right? Think again. The future is here: autonomous vehicles are roaming our streets. Waymo clocks up 200,000 rides a week across the United States. Apollo Go has over 400 robotaxis in Wuhan, China, alone.
The publicity hasn’t been all positive for autonomous vehicles, however. Waymo recently made the news when its cars blocked roads during the San Francisco blackout; according to a Waymo spokesperson, the cars took longer than usual to assess the state of intersections. In 2024, Waymo’s cars were also caught on camera circling parking lots and honking at each other at night.
While these incidents are not safety-critical, they illustrate the operational maturity challenges still facing AI in autonomous vehicles. That said, AI and autonomous vehicles do come with an appealing promise: making mobility more accessible to those who can’t drive and reducing injury-causing crashes.
For a long time, automation in vehicles relied on rule-based systems. Those systems have certain advantages: they’re extremely predictable, and it’s easy to understand how particular decisions are made. But there’s a tradeoff. They can’t adapt to scenarios beyond the pre-defined rules, which makes them unsuitable for dynamic environments.
Cue artificial intelligence (AI). Machine learning algorithms, computer vision, and predictive analytics enable the underlying models to learn from data and adapt to various scenarios.
The result? Artificial intelligence in autonomous vehicles enables those models to handle a multitude of real-world scenarios.
That said, since edge cases are extremely rare, the technology requires vast amounts of data to learn to identify and adapt to them. Relying on real-world driving alone would mean collecting data across billions of kilometers, which is one reason synthetic data and simulation are essential (see the graphic "Billions of kilometres are needed to identify edge cases" in How GenAI is helping drive vehicle autonomy, World Economic Forum, 4/03/2025).
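To make the scale concrete, here is a back-of-envelope calculation of why real-world driving alone doesn't suffice. The event rate and example count below are purely illustrative assumptions, not figures from Waymo or the World Economic Forum.

```python
# Illustrative estimate: if an edge case occurs on average once every
# `km_per_event` kilometres, how far must a fleet drive to collect
# `target_examples` of it? (Hypothetical numbers throughout.)

def km_needed(target_examples: int, km_per_event: float) -> float:
    """Expected kilometres of driving to observe `target_examples` events
    that each occur, on average, once per `km_per_event` kilometres."""
    return target_examples * km_per_event

# Assume a rare scenario appears once every 1 million km and a model needs
# roughly 10,000 labelled examples of it.
total_km = km_needed(10_000, 1_000_000)
print(f"{total_km:,.0f} km")  # 10,000,000,000 km
```

Ten billion kilometers of real driving for a single rare scenario is why simulated and synthetic data carry so much of the training load.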
The extent to which AI is used in an autonomous vehicle depends on that vehicle’s level of autonomy. Autonomous vehicles are commonly described on a five-level scale: L1 covers simple driver-assistance capabilities (e.g., cruise control), with the driver in control of the vehicle at all times, while L5 vehicles are fully automated and operate independently under any conditions and in any environment.
The highest level of autonomy currently on the market is L3, with Mercedes-Benz the first to reach that milestone. (Mercedes-Benz is also testing L4 vehicles in Beijing.) L3 cars can drive themselves independently, but only under certain conditions, and the driver may still have to take control of the vehicle.
In contemporary autonomous vehicles, AI performs three functions: it enables the cars to “see” their surroundings, make decisions with a degree of foresight, and control the car's components.
Autonomous vehicles can use multiple technologies to “see” the world around them, including cameras, LiDAR, radar, and ultrasonic sensors.
Artificial intelligence helps detect objects in 3D representations or via camera footage and assign semantic labels to elements (e.g., pedestrian, road, traffic light). It also enables cars to track the detected entities over time, predict their trajectories, and identify and monitor the vehicle’s position and orientation.
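The detection-and-tracking idea described above can be sketched in miniature. This is a hedged illustration, not a production perception stack: real systems use trained neural detectors and probabilistic trackers (e.g., Kalman filters), whereas the class names, positions, and matching threshold here are made-up toy values.

```python
# Toy sketch of perception: detections carry semantic labels (pedestrian,
# vehicle, ...) and are tracked across frames by matching each detection to
# the nearest previously seen object of the same class.
from math import dist

def track(frames, max_jump=2.0):
    """frames: list of per-frame detections [(label, x, y), ...].
    Returns per-frame lists of (track_id, label, x, y)."""
    next_id, active, out = 0, {}, []   # active: track_id -> (label, x, y)
    for dets in frames:
        assigned, used = [], set()
        for label, x, y in dets:
            # Match to the closest active track with the same semantic label.
            best = min(
                (tid for tid, (l, px, py) in active.items()
                 if l == label and tid not in used
                 and dist((x, y), (px, py)) <= max_jump),
                key=lambda tid: dist((x, y), active[tid][1:]),
                default=None,
            )
            if best is None:               # unmatched: start a new track
                best, next_id = next_id, next_id + 1
            used.add(best)
            active[best] = (label, x, y)   # update the track's position
            assigned.append((best, label, x, y))
        out.append(assigned)
    return out

frames = [
    [("pedestrian", 0.0, 0.0), ("vehicle", 5.0, 0.0)],
    [("pedestrian", 0.5, 0.1), ("vehicle", 5.8, 0.0)],  # both moved slightly
]
for frame in track(frames):
    print(frame)
```

Because the pedestrian and vehicle keep their track IDs across frames, downstream planning can predict each object's trajectory rather than treating every frame as a fresh scene.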
AI for autonomous vehicles doesn’t just analyze the environment using the perception system; it also identifies the most suitable course of action. What’s more, AI helps calculate the best route to the destination, adjusting it, if necessary, according to real-time road conditions.
To make split-second decisions, the AI system is trained to select the safest and most efficient option based on the environment, road conditions, and so on.
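The route-planning behavior described above can be illustrated with a classic shortest-path search over a road graph whose edge costs change with real-time conditions. The graph, travel times, and congestion values are illustrative assumptions; production planners are far more sophisticated.

```python
# Sketch of routing with re-planning: Dijkstra's shortest path over a road
# graph, with edge costs re-weighted when real-time congestion is detected.
import heapq

def best_route(graph, start, goal):
    """graph: {node: [(neighbour, cost_minutes), ...]}. Returns (cost, path)."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
print(best_route(roads, "A", "D"))   # (8.0, ['A', 'C', 'B', 'D'])

# Congestion on C->B detected in real time: re-weight the edge and re-plan.
roads["C"] = [("B", 9), ("D", 8)]
print(best_route(roads, "A", "D"))   # (9.0, ['A', 'B', 'D'])
```

The second call shows the "adjusting, if necessary" part: the same search over updated costs yields a different route.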
Once the model makes a decision based on sensory input and predictions, AI in self-driving cars initiates the necessary action. To that end, the system transmits commands to the vehicle’s actuators, which in turn control steering, braking, and acceleration.
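The control step above, turning a planned target into actuator commands, can be sketched with a simple feedback loop. The proportional-integral controller below is a common control technique, but the gains, limits, and toy vehicle response are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of longitudinal control: convert the gap between target and
# current speed into a clamped throttle command via a PI (proportional-
# integral) loop. Gains and the vehicle model are made-up toy values.

class SpeedController:
    def __init__(self, kp=0.5, ki=0.1, dt=0.1):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def throttle(self, target_speed, current_speed):
        """Return a throttle command in [0, 1] from the speed error (m/s)."""
        error = target_speed - current_speed
        self.integral += error * self.dt          # accumulate steady-state error
        cmd = self.kp * error + self.ki * self.integral
        return max(0.0, min(1.0, cmd))            # clamp to the actuator's range

ctrl = SpeedController()
speed = 0.0
for _ in range(3):
    cmd = ctrl.throttle(target_speed=10.0, current_speed=speed)
    speed += cmd * 2.0   # toy response: 2 m/s gained per unit throttle per step
print(round(speed, 2))   # 6.0
```

The clamp on the command mirrors a real constraint: whatever the model decides, the actuators can only accept inputs within their physical range.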
Levels of Self-driving (brief definition)
When it comes to AI and autonomous vehicles, machine learning models are the foundational technology powering perception, localization, and more.
Training AI for autonomous vehicles to handle edge cases is a matter of safety, but real-world data for these is scarce. Synthetic data created by generative AI models is filling the gap. Waymo and Waabi are already using it to train their models.
Generative Adversarial Networks (GANs) can create realistic, detail-rich images of urban environments — with everything from pedestrians and realistic lighting conditions to moving vehicles and real-world traffic patterns.
The benefits of using GANs are obvious. Training and testing AI in self-driving cars using synthetic data reduces training costs and saves time. Plus, it mitigates the safety risks that come with training in real-world conditions.
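A full GAN requires a deep-learning framework, so as a lightweight stand-in, the sketch below shows a related synthetic-data idea, domain randomization: sampling scene parameters so that rare combinations appear far more often than they would on real roads. All parameter names, ranges, and probabilities are illustrative assumptions.

```python
# Domain-randomization sketch (not a GAN): generate synthetic scene
# descriptions, deliberately oversampling a rare edge case relative to its
# real-world frequency. All values are illustrative.
import random

def synth_scene(rng):
    return {
        "time_of_day": rng.choice(["dawn", "noon", "dusk", "night"]),
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "pedestrians": rng.randint(0, 12),
        "oncoming_vehicles": rng.randint(0, 8),
        "lane_closure": rng.random() < 0.1,   # rare event, oversampled here
    }

rng = random.Random(42)                        # seeded for reproducibility
scenes = [synth_scene(rng) for _ in range(1000)]
edge_cases = [s for s in scenes
              if s["lane_closure"] and s["weather"] == "fog"]
print(len(edge_cases), "fog + lane-closure scenes out of 1000")
```

A renderer or simulator would then turn each parameter set into imagery, giving the perception model many examples of combinations it might never encounter during real test drives.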
AI-powered automotive software solutions can’t be deployed on just any vehicle. They require specialized hardware that supports real-time data collection, processing, analytics, and decision-making.
Data collection, for example, involves gathering real-time measurements from LiDAR, radar, camera, and ultrasonic sensors. Analyzing the data, in turn, can be done using edge computing or centralized processing. Edge computing is prevalent due to latency, reliability and safety constraints, despite higher local compute demands.
The AI systems in self-driving cars may process terabytes or even petabytes of data every day, and traditional CPUs can’t keep up with those volumes. That’s why, in autonomous vehicles, CPUs are supplemented with GPUs optimized for AI operations. Systems-on-chip (SoCs), such as NVIDIA’s Orin and Thor, have also emerged as a way to improve computational efficiency while reducing power consumption.
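The rough arithmetic behind the terabytes-per-day claim is easy to reproduce. The per-sensor data rates below are illustrative assumptions for a hypothetical sensor suite, not vendor specifications.

```python
# Back-of-envelope daily data volume for a hypothetical autonomous-vehicle
# sensor suite. Rates are illustrative assumptions (megabytes per second).
SENSOR_MBPS = {
    "camera x8": 8 * 40,        # eight cameras at ~40 MB/s each
    "lidar": 70,
    "radar x5": 5 * 1,
    "ultrasonic x12": 12 * 0.01,
}
hours_driven = 8
total_mb = sum(SENSOR_MBPS.values()) * 3600 * hours_driven
print(f"~{total_mb / 1e6:.1f} TB per {hours_driven}-hour shift")
```

Even with these conservative assumptions, a single vehicle generates on the order of ten terabytes per working day, which is why on-board (edge) processing and filtering matter so much.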
The application of AI in autonomous vehicles can’t be done blindly: as cars start roaming roads outside of training zones, a single overlooked constraint may cause a serious accident. At least 25 fatalities in the U.S. have been associated with incidents involving autonomous or partially autonomous vehicle systems.
Going forward, the following three constraints may shape the use of AI in autonomous vehicles:
Lead the software-defined future of automotive with AMBER, our modular software platform for carmakers.
DXC Automotive: Software solutions defining the future of automotive.
We have yet to see L4 and L5 autonomous capabilities deployed at scale, and expectations are somewhat lukewarm as to how soon startups and incumbents will get there (see "Timelines for Level 4 and Level 5 autonomous-vehicle use cases have extended by two to three years on average," McKinsey & Company).
For the moment, AI and self-driving cars have to deal with more pressing issues like rising computational requirements, lack of training data for edge cases, and connectivity. These three trends are poised to shape the evolution of the technology.
At a basic level, AI is there to handle what a human driver would normally do. It takes in a lot of signals from the car, figures out what they mean, and decides how the vehicle should behave. Sometimes that’s about reacting to something nearby. Other times it’s about keeping the car moving the right way, at the right speed.
The systems don’t learn from just one source. Some of the data comes from real driving, collected over time. Some of it is created artificially, using simulations. Cars rely on sensors like cameras or LiDAR to gather this information, which is then used to train and improve the models.
The World Economic Forum expects L2 and L2+ autonomous cars to become widely adopted by 2030. McKinsey predicts that vehicles with more advanced levels of automation (L3 and L4) will make up 12% of new sales in 2030 and 37% in 2035.
Yes. The use of AI in autonomous vehicles is essentially what makes them work. AI helps the car notice what’s around it, understand traffic, choose a route, and react while driving.