A survey published by the Lloyd’s Market Association (LMA) in partnership with Barnett Waddingham and the LMA Next Generation Risk Committee found that artificial intelligence (AI) is now used across the majority of the Lloyd’s market, with 93% of companies having or developing a formal AI framework to support adoption.
The survey is based on responses from 39 companies, which account for more than 60% of the stamp volume in the Lloyd’s market. The survey results reveal a significant shift over the past 12 months, with AI adoption moving from limited experimentation to broader early deployment.
In 2025, approximately half of companies reported limited or no implementation of AI. Twelve months on, AI is now widely used across the market, with 93% of respondents saying they have or are developing an AI framework – 72% already in place and 21% in development.
This marks a shift toward more structured, regulated adoption, with companies prioritizing oversight, accountability, and risk management before large-scale deployment.
The survey noted that the acceleration in AI adoption is primarily driven by generative AI applications such as ChatGPT and Microsoft Copilot, as well as internal productivity use cases such as summarization, reporting and data processing. However, these applications remain primarily focused on efficiency gains, with limited deployment in core underwriting, pricing and claims decision-making.
The survey results also showed that 44% of companies assigned AI governance to the chief technology officer, while 33% established a dedicated AI governance committee.
Data privacy, cybersecurity and third-party risks are now top concerns among respondents. The talent and skills gap was also cited as a key challenge, with businesses highlighting the need to build in-house expertise to support effective AI adoption.
While last year’s survey results highlighted concerns about regulatory uncertainty and the lack of a strong AI framework, the 2026 survey shows that governance is now firmly established as a priority, with most companies implementing or developing structured approaches.
Companies are embedding policies, oversight structures and controls ahead of large-scale deployment, reflecting a more thoughtful and risk-aware approach to AI.
Human oversight remains central to decision-making, with more than 60% of companies requiring mandatory review of AI-generated output to ensure AI is used to augment rather than replace expert judgement.
While progress has been made, accountability and regulatory integration remain areas of continued development.
The findings also indicate a clear shift in how businesses view the risks associated with AI. In 2025, data security and privacy were not always top priorities. In contrast, by 2026, data privacy, cybersecurity and third-party risks had become the most prominent issues in the Lloyd’s market.
This reflects a growing awareness of the risks associated with scaling artificial intelligence, particularly around data processing, third-party dependencies and system security.
However, about a quarter of companies still rely on general third-party risk management frameworks rather than AI-specific provisions.
Concerns about data quality, bias and reliability of AI output remain, highlighting the need for continued investment in validation, testing and assurance as use cases evolve.
Sanjiv Sharma, head of actuarial and risk management at the Lloyd’s Market Association, said: “The pace of AI adoption across the Lloyd’s market has accelerated rapidly over the past 12 months, but it is encouraging to see that governance is being built in parallel with AI rather than after the fact, with 93% of respondents having frameworks in place or in the process of developing them. The survey clearly highlights that the market is still in its infancy, but the foundations for responsible adoption are clearly being put in place.”
“There is no clear consensus across the market on where responsibility for AI governance should lie, with companies taking a range of approaches across technology, risk and compliance functions.”
Wan Heah, partner and head of general insurance at Barnett Waddingham, added: “The market is moving from experimentation to a more regulated use of AI, with governance, data protection and verification now firmly in the spotlight. The real test will be ensuring these frameworks can keep up with the pace as AI applications become more sophisticated.
“There is no single blueprint for AI governance. Businesses need to strike a careful balance between risks and opportunities and develop practical, robust risk management strategies to support responsible adoption.”