To use AI or not to use AI — for insurers, that is no longer the question.
Instead, says Brian Bacsu, Director, Insurance Architecture and Platform Engineering at DXC Technology, it’s time to dive into Generative AI, ideally with a trusted partner.
Bacsu made his comments during the recent ITC Vegas event for the insurance industry. He spoke during an ITC workshop entitled “Understanding the World of AI in Insurance.” It included a presentation by DXC partner Datos Insights that outlined five possible paths insurers can adopt with large language models (LLMs):
- Build your own LLM.
- Fine-tune existing LLMs.
- Leverage LLM capabilities from a proven insurance solution provider.
- Leverage insurtech/innovators.
- Do nothing.
Bacsu was also interviewed by Datos Insights. Their conversation has been edited for clarity and length.
A conversation with Brian Bacsu
Q: Datos identified five different paths that insurers can take when adopting large language models. One of those paths is to do nothing. Do you agree with that path?
A: We’ve moved beyond the point where you can just ignore AI. Now we’re in a world where you have to at least be aware of the AI fundamentals, because AI is going to impact many aspects of the insurance business.
Q: At Datos, we believe that the first path (build it yourself) and the second (fine-tune existing LLMs) are highly unlikely to happen. What do you think?
A: I agree. It’s what we’re seeing in the market and hearing in talks with our customers. For them to play in GenAI, it comes down to three factors: expense, expertise and effort.
First, it requires substantial financial investment to play in the LLM space, and it takes a lot of time. On average, processing the large volumes of documents and data needed to produce a commercially viable LLM can take over a year. That doesn’t include testing, which can take another six months to a year. You also need to factor in human training. All of that is expensive.
Second, it’s expertise-intensive. You need people who know LLM creation and the related technologies. That’s not just data science; now you’re also into natural language processing (NLP) and vector data-storage architectures. Finding people with those skills is hard, and hiring them is expensive.
Third, I mentioned effort. You cannot underestimate the impact that finding, curating and testing data is going to have on your organisation. It’s going to absorb a significant amount of manpower. So before you go into this area, you need a really well-thought-out business case and use cases.
Q: For insurers seeking a GenAI partner, what are some best practices? Are the industry’s traditional vendor-management processes still relevant?
A: The question to consider is: Are these partnerships a new type of relationship? Or are they simply new priorities for your traditional partners?
I think it’s a bit of both. You have traditional partners because you trust them. You trust them to know both the technologies and the regulatory environment. But you also need a partner that can apply general LLM capabilities to the highly specific needs of your organisation.
You can ask an LLM, “What is an insurance policy?” and you’ll get back a great answer. But if you ask it something like, “Tell me about my insurance policy,” there’s a lot of work involved in moving from the general to the specific. Either way, you’ll need partners you can trust.
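To make that general-to-specific gap concrete, here is a minimal retrieval-grounding sketch. It is an editorial illustration, not DXC’s implementation: the policy snippets, the toy bag-of-words retrieval and the build_prompt helper are hypothetical stand-ins for the curated data stores, NLP pipelines and vector storage Bacsu mentions.

```python
# Minimal sketch: ground a general-purpose LLM in one customer's own policy
# documents before asking "Tell me about my insurance policy"-style questions.
# All data and helpers here are illustrative placeholders.
import math
from collections import Counter

# Hypothetical, pre-curated excerpts from one policyholder's documents.
POLICY_SNIPPETS = [
    "Homeowners policy HO-3, dwelling coverage $400,000, deductible $1,000.",
    "Water damage from sudden pipe bursts is covered; gradual seepage is excluded.",
    "Personal property coverage is 50% of dwelling coverage, settled at actual cash value.",
]

def vectorize(text: str) -> Counter:
    """Toy bag-of-words vector; a production system would use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question."""
    q = vectorize(question)
    return sorted(snippets, key=lambda s: cosine(q, vectorize(s)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved excerpts."""
    context = "\n".join(retrieve(question, POLICY_SNIPPETS))
    return (
        "Answer using only the policy excerpts below.\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )

# The grounded prompt is what would be sent to whichever LLM endpoint the
# insurer has chosen; printing it keeps this sketch self-contained.
print(build_prompt("Is water damage covered under my policy?"))
```

Everything before the final line is the “lot of work” Bacsu refers to: finding, curating and indexing the organisation’s own documents so a general model can answer specific questions, with the partner supplying the safeguards and regulatory controls around the model call itself.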
Q: Within insurance, what are some of the most exciting LLM use cases you’ve seen?
A: Right now, the best use cases for GenAI in insurance balance acceptable risk with big impact. The insurers pursuing them ask, “How do I manage the uncertainty in the regulatory environment?” But they also ask, “How can I use GenAI in my core business to have an impact?”
One important area is knowledge management, knowledge access. All insurers have vast amounts of documentation and data. Trying to get that to your fingertips has been difficult. So, this is an area where AI could make some big impacts.
Another exciting use case is conversational AI and human interaction. This comes back to having information at our fingertips. It could have a huge impact.
Q: In your experience, when people in insurance think about LLMs and GenAI, what are some of their biggest misconceptions?
A: One is the idea that implementing GenAI is easy: “I can download ChatGPT onto my phone, and it’s a cool thing to play with.” But once people move into the proof-of-concept stage, they realise, “Wow, this is actually a lot of work.” Some of that work is around creating regulatory safeguards and managing issues in the regulatory environment. There’s effort involved, and it’s not insignificant. So that’s the first misconception: the amount of work that’s necessary.
Another misconception concerns vendors: “If I just choose the right vendor, they’ll do everything for me. It’s going to be out-of-the-box.”
For sure, a lot of it is out-of-the-box. At DXC, we work very, very hard to provide GenAI as a safe capability. But to implement that, there’s still a significant amount of effort on the client’s side. They have to find the documents. They have to curate the documents. To manage this, there’s the rise of a new role: data librarian. So that’s a second misconception, around the amount of effort required to find the necessary documents.
Q: Bias has been identified as a serious issue with AI. How do you advise insurers to think about AI bias?
A: I agree, it’s an important issue. We’re seeing new guidance from governments in the United States, EU, Singapore, Canada and elsewhere focused on data privacy, copyright and related aspects. Regional regulators have shown more interest in this idea of fairness and bias.
That makes sense. The insurance industry is not only highly regulated but also highly litigated. There have been high-profile lawsuits around fairness and bias, alleging that models, claims and underwriting have been skewed against certain socio-economic groups. There’s a lot of sensitivity around this subject.
This comes back to the idea of being aware of which LLMs you’re using. It also comes back to working with partners you trust. You need to ask questions: What content is this LLM using? Has it been curated, or is it more general? And could this data also contain misinformation?
Fortunately, there are techniques for managing bias and fairness. It’s manageable, assuming you do it correctly and spend the necessary time. At DXC, we’ve spent a great deal of time managing this risk.
Q: What advice do you have for insurers looking to get started with GenAI?
A: Don’t give up! AI is going to fundamentally change so much. And with the right partners, you can manage the risk.
The other thing I’d say is, get in there and implement something. Find a partner, and don’t get too hung up on having the use cases fully defined. Once you start working on it, use cases will start to cascade to things you never thought of. One use case will lead you to others.
Finally, don’t underestimate the cultural impact. GenAI is going to change how we work and how our businesses operate.
About DXC
DXC Technology has been delivering AI-enabled solutions for more than 20 years, in industries from insurance and retail to automotive, airlines and beyond. We have 10,000+ experienced AI practitioners partnering with 900+ customers around the globe to innovate and industrialise AI for enterprise growth. The world’s largest companies and public sector organisations trust DXC to deploy services that drive new levels of performance, competitiveness and customer experience across their IT estates, including responsible AI applications. In insurance, we have eight AI-powered software solutions in market today to help transform business processes, customer experiences and risk management.
Do more:
- Explore DXC’s solutions for insurers.