Closing the AI execution gap
March 31, 2026
By Angela Q. Daniels, CTO (Americas), Consulting and Engineering Services, DXC Technology
It happens more than you’d think: The big AI initiative that the board approved, staffed, and funded has stalled out. And it’s not because of the tools, the technology, or the talent involved, but rather the data – specifically, that it simply can’t be trusted.
Over the last two decades, enterprises across virtually every industry have accumulated mountains of data. In theory, that data should be the perfect foundation for an enterprise’s entrance into the era of AI. In practice, years of sprawl have fragmented the data layer.
Every definition of “customer” or “revenue” quietly diverged across business units. Those units used different applications, each with its own logic, and every source of truth drifted further from the others. This doesn’t reflect a failure, but rather trade-offs made at the time of implementation.
Those trade-offs compound, however, and the result is a data layer that’s often unsuitable for any attempt to use agentic AI to make real-time decisions. And while your CFO may have been willing to fund a big AI push, your CIO may find there is little patience for the time and effort a full revamp of the data infrastructure would take.
The good news is that by working closely with our customers on their own AI projects, we’ve found an approach that can successfully get things back on track. Better yet, this approach can make an AI implementation self-funding, alleviating pressure from the CFO’s office to show ROI.
As the capabilities of AI agents grow, the possibilities can be intimidating. Rather than getting overwhelmed by that potential, start with a concrete business goal tied to a metric you can track, then identify the places where AI can move that metric.
Using that goal as a north star, even a project that starts with modest ambitions can deliver savings in hours or money that get reinvested in the next phase of your AI rollout. It becomes a flywheel, where each new milestone opens the door to bigger and more ambitious AI-based projects.
On a technical level, the key to this approach is an enterprise knowledge graph deployed on top of that fractured data landscape. Rather than requiring clean, unified data as a prerequisite, a knowledge graph maps relationships between data entities across fragmented systems. In other words, it maps the terrain as it is, without trying to force it into any particular shape.
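To make that concrete, here’s a minimal sketch of what such a mapping layer could look like, built with the networkx library. The node names, systems, and relation labels are illustrative assumptions, not any particular DXC implementation; the point is that the graph describes where each definition lives and how the copies relate, without merging the underlying systems.

```python
# Minimal, illustrative knowledge-graph sketch (node names and systems are
# hypothetical). Nodes represent entity definitions where they live today;
# edges record how those definitions relate.
import networkx as nx

graph = nx.MultiDiGraph()

# Each business unit's "customer" stays in its own system; we only describe it.
graph.add_node("crm.customer", system="CRM", owner="Sales Ops")
graph.add_node("billing.account", system="ERP", owner="Finance")
graph.add_node("support.contact", system="Service Desk", owner="Support")

# Edges capture that these are views of the same real-world entity,
# without forcing the source systems to agree.
graph.add_edge("crm.customer", "billing.account",
               relation="same_entity_as", match_key="tax_id")
graph.add_edge("crm.customer", "support.contact",
               relation="same_entity_as", match_key="email")

# Anyone -- human or agent -- can now ask: where does "customer" actually live?
for node, attrs in graph.nodes(data=True):
    print(node, "->", attrs["system"])
```

Notice that nothing in the source systems changes; the graph is purely a descriptive layer, which is why it can be built in weeks rather than years.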
For the purposes of an agentic AI implementation, the knowledge graph gives each agent a map of where the relevant data resides. Think about onboarding a new employee. You tell them things like, “For our revenue numbers, use this dashboard. For our customer records, pull from these systems. If those two disagree, here’s which one to trust.” A knowledge graph does the same thing for AI agents. It gives them the right context to operate in an imperfect data environment without making mistakes that erode trust.
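Here’s a small sketch of how that “which one to trust” rule might be encoded. The source names and precedence order are hypothetical, and in a real deployment this metadata would live on the knowledge graph itself rather than in a standalone table.

```python
# Illustrative trust rules an agent consults before answering.
# Source names and precedence are hypothetical examples.
TRUST_ORDER = {
    # For each entity type, sources listed from most to least authoritative.
    "revenue": ["finance_dashboard", "crm_pipeline"],
    "customer": ["crm.customer", "support.contact"],
}

def resolve(entity: str, readings: dict[str, object]) -> object:
    """Return the value from the most authoritative source that has one."""
    for source in TRUST_ORDER.get(entity, []):
        if source in readings:
            return readings[source]
    raise LookupError(f"no trusted source available for {entity!r}")

# Two systems disagree; the rule says the finance dashboard wins for revenue.
print(resolve("revenue", {"crm_pipeline": 4.9e6, "finance_dashboard": 5.2e6}))
```

The value of making precedence explicit is that an agent’s answer becomes auditable: when two systems disagree, you can point to the rule that decided the tie.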
This approach also feeds the transformation flywheel, especially when an initiative needs to get unstuck. Constructing and implementing a knowledge graph requires an audit of the enterprise’s data landscape, but that process takes weeks, not the months or even years that full data modernization can demand. The compressed timeline is what makes the self-funding model work: you can show ROI fast enough to sustain executive support while building toward larger ambitions.
Finally, I’d like to address one other sticking point I hear from the customers and CIOs we work with. There’s a persistent idea that every company needs to go through the trouble of training its own models. Doing so is expensive and time-consuming, and the result is rarely worth the effort. Commercial models coming out of the frontier AI labs are already available, and they benefit from levels of investment, data, and continuous refinement that very few organizations could match on their own.
If your AI initiative has stalled on model-building, consider putting that project on the shelf. The higher-leverage move is developing a strategy to responsibly connect your enterprise data, via the knowledge graph layer, to a commercial model, balancing business needs with security and governance requirements.
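As a sketch of what that connection could look like, here’s a minimal pipeline that routes graph-resolved facts through a governance step before they reach the model. The redaction rules are illustrative, not a complete governance program, and call_commercial_model is a placeholder for whichever vendor SDK your organization actually uses.

```python
# Illustrative pipeline: graph-resolved context -> governance filter -> model.
import re

def redact(text: str) -> str:
    """Example governance step: strip obvious PII before it leaves the boundary."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[ID]", text)       # ID-number pattern
    return text

def call_commercial_model(prompt: str) -> str:
    # Placeholder: substitute your provider's client call here.
    return f"(model response to {len(prompt)} chars of vetted context)"

def answer(question: str, graph_context: list[str]) -> str:
    """Assemble graph-resolved facts, apply governance, then call the model."""
    safe_context = "\n".join(redact(fact) for fact in graph_context)
    prompt = f"Context:\n{safe_context}\n\nQuestion: {question}"
    return call_commercial_model(prompt)

print(answer("Which Q3 revenue figure do we report?",
             ["finance_dashboard: Q3 revenue = $5.2M (authoritative)",
              "crm_pipeline: Q3 revenue = $4.9M (owner: jane@example.com)"]))
```

The design choice worth noting is that governance sits between the graph and the model, so the same policy applies no matter which commercial model you ultimately choose.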
This is where the three ideas in this piece converge. The knowledge graph is what makes commercial model adoption viable at enterprise scale, because it provides the routing, context, and trust boundaries that let a frontier model work with your data safely and effectively. Start with the business problem. Build the knowledge graph to navigate your data. Connect it to the best available model. And let each win fund the next one.
The most common mistake I see in stalled AI programs isn’t a technology problem. It’s a sequencing problem. Organizations try to solve everything at once: fix the data, build the model, deploy the agents. That’s a recipe for an 18-month planning phase that never ships.
The approach that works is the one that starts delivering value in weeks, not years. Identify the business metric. Map the data with a knowledge graph. Connect to a frontier model. Ship something. Learn from it. Reinvest. The enterprises that are pulling ahead right now aren’t the ones with the cleanest data. They’re the ones that figured out how to move forward with the data they have.
Angela Q. Daniels is the Chief Technology Officer (Americas) for Consulting and Engineering Services (CES) at DXC Technology. She is shaping the future of enterprise engineering by embedding AI into every layer of DXC’s technology ecosystem. Her vision is to create a fully connected, AI-driven delivery model that transforms how enterprises build, operate, and evolve technology.