Welcome to the Executive Data Series.

DXC created this new program to provide advanced insight into the data domain. In a series of conversations, DXC experts will explore data-driven decision making, and offer their perspective about what it takes to be successful in data, information and knowledge activities.

Mohammed 'Khal' Khalid, global advisory director at DXC Technology, will moderate our series. Discussions will draw on research conducted by DXC and upon our executives' experiences working with customers.

In this fourth conversation, Khal welcomes Senior Partner, Analytics & Engineering, Consulting (APJ-MEA) Gianluca Bernacchia to discuss the fascinating topic of Generative AI. We invite you to listen to their full conversation or, if you don’t have time now, to a short extract about the best use cases for Generative AI. You can also find a full transcript of the discussion below.

The conversation

Q.  It's my absolute pleasure today to introduce Gianluca Bernacchia to discuss this very interesting subject. Gianluca, would you like to say a little bit about yourself? 

A. I’ve been working in the data analytics space for over 23 years, the last 17 in this company, across HP, HPE and DXC. I started my career in Europe, spent the last eight years in Asia and am now handling the Middle East, as well. I have wide experience across multiple industries, driving mostly transformational programs around data and analytics. I'm very passionate about technology, especially data architectures, AI and machine learning, but also computer graphics and simulation. And I love to teach, especially to my sons.

Q.  We’re hearing a lot about Generative AI, and you've got some real experience doing real-world stuff with it. So, what challenges are businesses facing, given the current market environment?

A.   There is a lot of hype around Generative AI, especially in the last year, and many companies and stakeholders are exploring how they can jump in and explore the value that these AI projects can bring to their organizations. The reality is that a lot of these projects are simply failing because of a lack of experience, both on the customer side and on the side of the companies developing the solutions. What people want is some help to reduce the risk and the effort to succeed, because they see Generative AI as an opportunity to disrupt others instead of being disrupted.

Q.  What are the specific challenges that people are having based on specific job responsibilities and objectives? In other words, what does this really boil down to in terms of the challenges you see on the ground for people and their jobs?

A. They are expecting Generative AI to automate some tasks or to solve problems that they are tackling today. The issue they are facing is that sometimes the reality falls short of the expectations, simply because the initial expectations are not realistic or because the technology is not applied at the right time.

Q.  And what happens if this challenge isn’t overcome?

A. The risk is to be disrupted instead of disrupting others. So if the solution doesn't resolve the problem, then others will simply do better than you, and you will not be able to create the kind of competitive advantage you're expecting from this kind of solution.

Q.  What do you see as the 2-3 year vision for AI? 

A. In AI, data – not rules – is the key to get right, and the ability to select, prepare and protect the data, and to keep it updated, makes all the difference in the quality of the result. In this context, I consider deciding where to apply AI a matter of strategy. Having a clear vision enables you to maximize the outcome and avoid wasting time and resources chasing endeavors that are too big, too complex or low return. There's no standard playbook; Generative AI priorities need to be connected to strategy. Is it more important for Generative AI to reduce costs, say in a call center, or to focus on content personalization to generate additional revenue, just to give you an example? Strategy can help answer that. The idea is to apply AI in the right areas, and experience can help ensure that your objective can be reached.

Q.  So what I'm hearing is that the ability to select, prepare and protect the patterns that you see in the data is really important. What’s the importance of verification around all of that?  

A.  It’s extremely important, because getting the data right is only the starting point. Verifying that the outcome of the model is what was expected is actually extremely difficult, especially when you are dealing with Generative AI – and machine learning in general – because it doesn't get trained through rules. By nature it is a black box that is trained on data, and whenever there is a new piece of data which was not considered during the training phase, the results are not easy to predict. So having a policy and an approach to verify what comes out of a model, and to ensure that it is aligned with the target and the expectation, is extremely important.
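To make that concrete, here is a minimal illustrative sketch in Python, not part of the conversation and not a DXC framework: a simple post-generation verification gate that model output must pass before it reaches a user. The policy terms, length limit and function name are assumptions chosen purely for illustration.

```python
# Illustrative only: the banned terms, length limit and routing decision are
# hypothetical policy choices, not a real DXC or customer policy.
BANNED_TERMS = {"guaranteed returns", "internal use only"}  # example policy terms
MAX_LENGTH = 500                                            # example length policy

def verify_output(text: str) -> bool:
    """Return True only if the generated text complies with the example policy."""
    if len(text) > MAX_LENGTH:
        return False
    lowered = text.lower()
    return not any(term in lowered for term in BANNED_TERMS)

draft = "Our new savings plan offers flexible options for every customer."
if verify_output(draft):
    print("Output approved for release.")
else:
    print("Output rejected: route to human review.")
```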

Q.  So as we look to taking the theory and the hype into reality, to what's happening today, I know that from a DXC perspective, we're seeing a lot of areas where Generative AI is being used to help support different types of tasks. Based on your experience, can you give a concrete example of where you see Generative AI being successfully applied?

A.  Marketing is a great example. The automatic creation of highly personalized messages, including text, images, even videos, that target a single customer – the so-called “segment of one” – was simply not possible before, because it required too much effort and was extremely difficult to manage. But it is achievable today because the creation of the content can be fully automatic. But this needs to be achieved with a high level of control and safeguards, as I mentioned earlier, as mistakes can easily damage a company's reputation.

Q. Let’s shift gears and talk about the limitations of AI. What are the areas that you see as major risk factors today?

A. The AI domain includes multiple techniques and algorithms which are meant to deal with real-world problems. The key risks are related to getting unexpected behaviors or wrong responses from the models, which are more likely when complexity increases – especially with machine learning and its focus on the models, where the responses are defined by examples, not rules. If the examples are wrong, biased or even missing, the result is not going to be good. And it is not easy for humans to figure out how models are processing the information, so spotting the defects is difficult. This happens especially when dealing with more complex algorithms to address more complex problems.

Just to give you an idea, there are two main areas of machine learning. One is supervised learning, where humans provide examples of inputs and responses, with a high level of control over model output, because you can specifically label an example and control what the model's response for that sample should be. The other is unsupervised learning – Generative AI and other techniques such as reinforcement learning – where we are basically letting the model figure out responses by itself and just providing some context for the model to do this work. This makes the work much more difficult to control and greatly increases the possibility of mistakes in the final results. So the key is to include extensive testing and a strong methodology to ensure that these errors are minimized. The errors can never be completely eliminated with the techniques that we have today, but bringing them to an acceptable level is the realistic target these days.
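To make the contrast concrete, here is a minimal illustrative sketch in Python using scikit-learn on a toy dataset (the dataset and models are assumptions chosen for brevity, not something discussed in the conversation): the supervised model learns from labelled examples, while the unsupervised model is only given the raw data.

```python
# Minimal sketch: supervised vs. unsupervised learning on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: trained on labelled examples (inputs AND responses), so we keep
# a high level of control over what the model should output for each sample.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised prediction:", clf.predict(X[:1]))

# Unsupervised: no labels; the model is left to find structure (here, 3
# clusters) on its own, which is harder to control and verify.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster assignment:", km.predict(X[:1]))
```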

Q.  So talk about methodologies. As we look to operationalize AI and to build it to scale, how must methodologies like MLOps evolve to cover Generative AI?

A.  MLOps is important because it can make the work of creating the model much more efficient. There is a lot of time and effort invested by the team – the data scientists working on AI – in training and preparing the models, and by nature this work is iterative. So the way MLOps helps is to make this work easier and more efficient. And, of course, making it more efficient also helps to reduce errors. So, more iterations in less time with fewer errors, which all leads to better models.
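As one illustration of that iterative loop, here is a minimal sketch in Python assuming MLflow as the experiment-tracking tool and a toy scikit-learn model (both are assumptions for illustration, not a statement of the tooling discussed here): each pass through the loop is a tracked, repeatable iteration that can be compared with the others.

```python
# Minimal sketch of an MLOps-style iteration loop with experiment tracking.
# Assumptions: MLflow for tracking, a toy scikit-learn model and dataset.
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n_estimators in (10, 50, 100):  # each loop is one tracked training iteration
    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=n_estimators).fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("n_estimators", n_estimators)
        mlflow.log_metric("accuracy", acc)
```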

Q.  Based on your experience, where are the opportunities that are most common across the board? I'm thinking in particular about the journey of engaging Generative AI.

A.  There are multiple opportunities for Generative AI, in practically every industry. Generative AI is not limited to bots; it enables automated content creation in multiple areas. That's what "generative" means. While AI was traditionally used more for code or to classify existing content, Generative AI can be applied to any content that can be translated into a digital format: text, images, videos, 3D models, recordings, and so on. And there’s a huge domain where this can be applied. Particularly mature today are generative models for natural language, able to process basically every document, conversation and image. I'm sure you have already seen many examples of these. Some examples are related to reducing the cost of creating content (for example, auto-summarization of meetings, or creation of commercial, legal or marketing documents), improving the quality of content to avoid errors (especially related to computer coding, automatic documentation or test generation, or redaction of legal documents), and generation of original content (for example, prototypes or design concepts). The potential is really huge and the range of areas is wide.
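As one small illustration of the auto-summarization use case mentioned above, here is a minimal sketch in Python assuming the Hugging Face transformers library and one of its pre-trained summarization models; the library choice and the sample text are assumptions for illustration only.

```python
# Minimal sketch: summarizing a short piece of meeting text with a pre-trained
# model. The meeting notes below are invented sample text.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default pre-trained model

meeting_notes = (
    "The project team reviewed the quarterly roadmap, agreed to prioritize the "
    "claims-automation pilot, and scheduled a follow-up review for next month."
)
summary = summarizer(meeting_notes, max_length=30, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```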

Q. Do you see any maturity framework – formal or informal – that would help inform the priorities?

A.  There's no internationally recognized framework that everybody agrees upon, as this area is quite recent. But organizations like DXC have broad experience of the subject. We have created a framework which can help define specific Generative AI use cases and also assess the maturity of specific areas, to help our customers understand where they are in their journey. This is something that we normally do to help our customers quickly discover the value that Generative AI can bring in specific areas.

Q.  Is there anything that is absolutely necessary that you must do to start the journey well? Is there a “get this right” first step, and what is it?

A.  What is very important is to get the business case right, because without a clear idea of the desired outcome of the initiative, there is a very high risk of failure. Also, getting all the required skills and experience – maybe picking a partner like DXC with expertise on the subject, benefitting from our experiences and seeing what worked and what didn't – is definitely a good accelerator to reduce the risk for these kinds of initiatives.

Q.  On that journey, how do you benchmark? How do you evaluate and assess your progress, maybe against your peers?

A.  We’re looking at successful use cases and implementations within different industries. The key topic here is prior experience. By nature we work with a lot of companies that are experimenting – and be aware that the majority of these experiments are initially going to fail. The point is to quickly iterate, see what didn’t work and move on, to get quickly to the desired outcome. So, prior experience in this area definitely matters.

Q.  And then of course, the inevitable question: What's the payoff? What's the ROI from any investments? Any thoughts on that?

A.  Yes. This depends on the area being considered. Marketing initiatives can be measured with the response rates of a campaign or uplifts in product sales. Automation use cases can be measured with the average cost per volume of services provided through automated processes. In general, the impact shows up in parts of the top and bottom line of the company, depending on the specific areas being considered. The definition of these KPIs – especially a realistic definition of the KPIs – is very important to ensure that the project and the initiative have a future, because building something that can be achieved in a reasonable amount of time and at a reasonable cost allows you to get the support and funding for the next step.
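As a purely hypothetical worked example of that kind of KPI (all figures below are invented for illustration and are not from DXC or any customer), an automation use case could be tracked as cost per claim and simple first-year ROI:

```python
# Hypothetical numbers only: illustrating cost-per-claim and first-year ROI.
claims_per_year = 100_000
cost_per_claim_manual = 12.0     # assumed fully loaded manual cost per claim
cost_per_claim_automated = 4.0   # assumed cost per claim with automation
solution_cost = 500_000          # assumed build-and-run cost for year one

annual_savings = claims_per_year * (cost_per_claim_manual - cost_per_claim_automated)
roi = (annual_savings - solution_cost) / solution_cost

print(f"Annual savings: ${annual_savings:,.0f}")  # $800,000
print(f"First-year ROI: {roi:.0%}")               # 60%
```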

Q.  I would have thought that that next step is so important, the scale-up piece. More on that later. So let me just ask about prioritization. Based on your experience, are there certain specific industry or business process dynamics that you see as being areas to investigate first?

A.  In general, the many areas that involve content creation and human interaction are what I would address first. New contexts and use cases are coming out quickly and on a constant basis, and there are a lot of investments happening in exploratory pilots. My best suggestion is to look at the business priorities and compare with the best experiences in the market to identify promising areas through qualified partners. Some exercises that we performed in previous years will definitely be outdated now because the technology has advanced. So to stay up-to-date and look at what others have done definitely matters.

Q.  And, if I wear my Chief Knowledge Officer hat for a moment, I guess looking at the types of knowledge tasks of analyzing, summarizing, converting through to designing and creating, these are probably targets for investing in Generative AI in the short to mid term of the journey. Coming back to success, what does it look like when someone is on the right path?

A.  After an initial exploratory phase, when there is business commitment to make it happen, when there is a path to production that includes multiple steps, clear success criteria and a good understanding of the potential and the pitfalls, transformation can happen quickly and effectively. 

Q.  Being on the right path and looking at success, what does success look like with different types of people or companies in size, industries, technographics, competitiveness? What are the characteristics of success from your perspective?

A.  Success will definitely look different in different contexts and businesses, even different business areas in the same context or industry. For example, a successfully developed "robo advisor" to provide financial advice can be considered from an IT and operational perspective as a way to reduce costs, which impacts the bottom line. The same initiative from a sales perspective can help increase revenues, which impacts the top line. So different people measure success in a different way, and it is very important to align with them to ensure that support for the initiative is there and will continue to be.

Q.  If we think about the success of a pilot, which is usually about leading to the next stage and the ability to scale the pilot effectively – in other words, moving to operationalizing Generative AI –  have you seen any good examples where that happens, and what does that look like?

A.  I think a good example of the value that AI can bring to operations is claims processing for an insurance company. It’s a process that is very time-consuming and error-prone. So there are two aspects to it. On one side, automating the processing of claims can definitely bring a reduction in costs, which impacts the bottom line and is very welcome for companies. But at the same time it also has an impact on the service that the company is providing to its customers, because automation should also reduce errors in the process, thus providing a better service. There is also an impact on the time to serve the customer, because claims can be processed faster.

A good implementation of AI in operations should address many areas, should make the process more efficient (meaning better, depending on how the process is measured), should make the process cheaper (depending on the context) and can definitely provide a better service for the customer and final user.

Q.  To make something like that happen, to the best of your knowledge, are there special tools, frameworks or delivery models? We've touched on methodologies and frameworks before, but are there any particular sets of tools that you know will be useful when executing this?

A.  There are many tools and methodologies that can definitely help to develop these kinds of solutions. DXC has developed a methodology and framework to support AI projects that specifically covers Generative AI, and the focus is to ensure compliance, privacy and the ethical usage of AI. There are other solutions provided by different OEMs which address more the efficiency of executing the various steps, mostly related to MLOps and the automation of the model creation itself. The processes are complex, so definitely having a framework and methodology on one side to make it more robust, and a set of tools on the other side to make it more automated, is a good thing.

Q.  Moving onto the common pitfalls. If we’re talking to leaders and people who are investing in a Generative AI pilot or scale-up, what are the things you think leaders should be aware of when they are embarking on these initiatives?

A.  Definitely they should spend some time understanding what technology can and cannot do. Setting realistic targets is the key to success. There is a lot of hype these days, which is understandable, and there are many companies that are exploring and testing processes. Some of them, as I mentioned, are going to fail, and that is perfectly normal. To have a grounded approach and a phased approach, and to educate themselves about what the technology can do, is really important so that they can start in the right direction.

Q.  And what do you see as being common mistakes that tend to pop up?

A.  Definitely the two biggest areas that I see are expectations not being set correctly on one side, and delays in execution on the other, which often happen because the scope, as initially defined, was too big and too broad. So after months the project is not where it needs to be, and it just loses support.

Q.  Are there any challenges you see that are unique to specific industries or business processes?

A.  Yes. For highly regulated industries such as the public sector, hallucinations from Generative AI models can carry even higher risks than in the private sector. There are also niche areas with little documentation or few training sets available, or which are dealing with highly sensitive data, that can have an issue developing effective solutions. This is especially true with public cloud, because these organizations may not be able to put their data in the public cloud for training, or because there is not enough data to get decent quality out of the training or fine-tuning. So, these areas and these industries definitely require much more focus to start the project in the right way and ensure that it is not doomed to fail.

Q.  I'm very keen to explore this whole topic of hallucinations and sensitive data a bit more. What are the most innovative things you have seen when evaluating the problems delivering and tracking the performance of the solution?

A.  For me, one of the most fascinating areas is definitely automated code generation and AI plugins, because they bridge two different domains without a clear set of rules. This will have profound implications for human-machine interaction, and most of it is completely uncharted territory. I really see a lot of potential in having humans and machines that are able to communicate in natural language. This will enable a level of productivity and scenarios that are difficult to imagine even today.

Q.  So I guess this will lead into some of what we're seeing around how Generative AI is automating data science. What are some of the biggest factors to success over the next five years?

A.  For sure, the ability to embed new tools and technologies into existing processes quickly and effectively, without reinventing the wheel or being stuck in legacy solutions, will be very important. The state of the art evolves very quickly in this field, and APIs and frameworks today make it possible to easily integrate new components without creating anything from scratch. Especially today, AI is not just for data scientists; components can be integrated – a kind of "building block" approach – very often much more easily and with better results.

Q.  So being able to operationalize AI so that it can scale becomes the key. Are there any major disruptors or potential for volatility in the space?

A.  Many. Reliable models to automate content generation can have a huge impact on business and society, making many roles redundant, augmenting others and creating new ones. At the same time the “black box” nature of AI makes predictability and control a big challenge.

Q.  As we look forward, how do you envision this space looking in five years, and what will separate the winners from the losers?

A.  Considering the current focus and levels of investment, I would expect an even faster pace of innovation in the next few years, and more mistakes being made. Succeeding or failing quickly, and moving on, is going to be the key.

Q.  Regular listeners of this Executive Data Series will be aware that I usually ask this next question. Gianluca, if you were to go back and advise younger you, what's the one piece of advice you would give yourself related to this topic?

A.  I would suggest that I start investing time in repeatable solutions earlier in my career instead of trying to solve point problems. But this is a natural growth path, from the tree to the forest. And variety is what I enjoy the most, being exposed to many interesting topics in different industries. And when the problem is solved, the idea is to move on to the next. My ambition is to define and grow multiple world-class solutions in this space.

About the speakers

 

Mohammed 'Khal' Khalid is global advisory director at DXC Technology, working with customers to make real change happen. As a coach and experienced business leader, Khal previously spent 9 years with Gartner as regional vice president of executive programs, leading a team of highly experienced former CIOs and IT executives in the Benelux region. A former chief knowledge officer and CIO and now a business advisor, he is passionate about helping organizations exceed their objectives and goals. Read his most recent research paper Boosting data metabolism to improve decision making. Connect with Khal on LinkedIn.
 

Gianluca Bernacchia is senior partner of Analytics & Engineering, Consulting (APJ-MEA) at DXC Technology. He is a seasoned technologist with over 20 years of advisory experience spanning Europe, the Middle East, and Asia. As a Senior Partner for Asia and the leader of the Global Insurance Community for Analytics at DXC, Gianluca is at the forefront of driving the innovation agenda in the Consulting and Analytics teams. He specializes in helping clients define their data and analytics strategy, tackling complex problems across multiple industries. He is driven by his passion for exploring the value of new technologies and fostering innovation. With an impressive track record, he consistently delivers transformative solutions by leveraging cutting-edge analytics and AI technologies. Through Gianluca's guidance, organizations experience data-driven transformations that enhance customer engagement and optimize processes. He consistently generates value through his expertise in data and AI, shaping the future of organizations. Connect with Gianluca on LinkedIn.