AI and data solutions have been making significant waves in the insurance industry, transforming processes, enhancing risk management and streamlining operations. Recently, we had the privilege of hosting an exclusive insurance roundtable event where industry experts from companies such as Legal & General, Howden, EY and Alvarez & Marsal gathered to discuss the latest trends and challenges associated with the adoption of rapidly changing technology.
The key themes and insights that emerged from the roundtable are shared below, shedding light on the future of AI in insurance.
The discussion highlighted the power of partnering with insurance companies to build AI and data solutions that span the entire insurance value chain. The focus has predominantly been on the underwriting space, leading to the development of solutions like risk management platforms, AI-driven underwriting and pricing, automation of data and processes, machine learning modelling, and advanced data analysis.
These advancements aim to streamline operations, improve risk-adjusted decision-making, and enhance overall efficiency in the short-term insurance market.
A key question that emerged was how to effectively leverage AI to improve existing processes like underwriting and risk management, especially when data availability is limited. Participants explored the possibilities of leveraging AI to generate synthetic data, the critical need to share (often anonymised) data sources, and the challenges associated with cross-border data sharing due to varying regulatory regimes.
One of the critical topics discussed was the importance of data integrity and management in the insurance industry. As one participant aptly put it, “You don’t truly know the risks you’ve underwritten until the claims start rolling in.”
Transforming “dirty” data, often stored in legacy systems or even filing cabinets, into a reliable and usable information source remains a major focus. It’s a time-consuming process that can take years and involve substantial costs. The roundtable highlighted the need for efficient data cleansing processes, the potential of using synthetic data to fill missing fields, and the challenge of empirically demonstrating the return on investment (RoI) of these projects.
The topic of self-regulation versus external regulation sparked a thought-provoking conversation. Participants agreed that while self-regulation plays a critical role, it’s likely that regional regulators will adopt a “principles-based” approach rather than a rigid “rules-based” one. The fundamental principle of “do no harm” was deemed pivotal in this context. The pace of regulatory evolution and its ability to keep up with technological innovation was also a recurring theme.
When it comes to potential risks associated with AI adoption, concerns were raised about misinformation, cyber threats, and the need for data security. It was stressed that companies leveraging AI technologies must take responsibility for self-regulation to mitigate unintended consequences and ensure intelligent data handling. Clearly defined lines of responsibility within organisations and a well-defined strategy for the creation and use of AI will be critical in navigating this landscape.
The issue of cross-border data sharing and jurisdictional regulations sparked a thoughtful discussion. Participants shed light on the challenges involved in sharing data across different jurisdictions, with some jurisdictions imposing restrictions on the movement of data beyond their borders. Once again, the positive impact of leveraging synthetic data and/or anonymised data sets to drive models was highlighted during the discussion.
During the roundtable, participants shared valuable insights into the underlying architecture and capabilities of generative AI, with a particular focus on ChatGPT. They emphasised the significance of attention-based models, foundation models, synthetic data, and the potential for future multimodal applications.
These advancements present new opportunities for problem-solving and pave the way for exciting developments in the insurance industry. Whether it’s improving the reliability of credit risk and pricing models, increasing the speed and efficiency of processes across the insurance value chain, or delivering exceptional customer experiences and personalised services, generative AI has the potential to transform the industry.
The impact of AI on insurance careers was an interesting topic of discussion. Participants observed that AI technologies, such as coding assistance, empower individuals to work more efficiently, even if they were previously unfamiliar with programming languages like Python. This has the dual effect of ‘upskilling’ both technical and non-technical professionals, while also potentially changing the landscape of certain roles or rendering them redundant.
Currently, AI credit risk models operate alongside established actuarial teams and models, but as AI capabilities continue to improve, a question arises: Is there a point at which human oversight becomes redundant?
Conclusion: The insurance roundtable provided invaluable insights into the role of data science and AI in the current insurance market, as well as the potential for their use, particularly generative AI, in shaping the future of insurance.
As technology continues to evolve, it is essential for insurance companies to embrace AI responsibly, harnessing its potential while upholding principles of data security, ethics, compliance, and fairness. The roundtable served as a reminder that collaboration, innovation, and adaptability are key to successfully navigating the transformative journey of AI, both within the insurance sector and beyond.
We extend our sincere thanks to all the participants for their valuable contributions and engaging discussions at the event.