Governing AI: A matter of principle

In 2023, artificial intelligence (AI) became the single all-consuming passion of the business world. In truth, AI has been on the radar for years, if not decades, in the form of machine-learning predictive models. But the inflection point, or more accurately the explosion point, came last year, when OpenAI introduced ChatGPT, its generative AI (GenAI) tool. Now AI is everywhere. Quite literally.

Opportunities and risks

In the insurance realm, strategic and technology road maps include GenAI tools to improve business processes and the customer experience (CX). AI is being used both to summarize large amounts of information, such as underwriting manuals, compliance reviews, regulatory requirements, attending physician statements (APS) and electronic medical records, and to develop content for customer service, claim handling and marketing.

That’s just the start. GenAI’s capabilities are rapidly evolving, and it won’t be long before it powers customer service representatives (CSRs) and virtual assistants for financial advisors. Given the myriad applications, there’s one thing leaders across the insurance industry agree on: AI will become a crucial differentiator, essential for staying competitive.

The power in principle

With AI technology growing so quickly, concerns are being raised about governing its use and spread. An organization’s governance program should encompass its entire AI portfolio and initiatives, not only GenAI but the full AI spectrum. AI governance provides a structured framework that balances innovation with ethical, safety and societal considerations.

Proper AI governance is critical. Without it, AI’s risks could quickly overwhelm its benefits. For that reason, organizations need to be diligent about constructing governance guidelines regarding their AI.

Already, the corporate world has begun adopting multiple approaches to AI governance, ranging from rules-based to market-driven to tech-specific to self-regulated options. A principles-based approach, by contrast, develops and implements high-level ethical principles that guide the design, development, deployment and use of all AI systems, including but not limited to GenAI. These principles, which serve as a foundation for responsible predictive modeling practices, aim to ensure that AI technologies align with human values, societal norms and ethical considerations.

We believe that just as AI will be a differentiator, so will your AI governance program.

Benefits of principles-based governance

A principles-based approach that’s applied throughout the model development process provides a host of benefits to the insurer and the teams involved, including:

  • Flexibility: A principles-based approach provides a high-level ethical foundation without being overly prescriptive, and it can adapt to different contexts, industries and technologies.
  • Adaptability: As AI technology rapidly evolves, a principles-based approach is easily applied to emerging risks, ethical concerns and technological developments.
  • Wide applicability: Principles can be applied across emerging AI variations and organizational structures (e.g., subsidiaries), creating a common ethical foundation across an organization for responsible AI.
  • Innovation: By providing a high-level ethical framework, a principles-based approach can encourage innovation that aligns with responsible practices without imposing rigid constraints.
  • Compliance: A principles-based approach can help insurance organizations meet the rapidly changing regulatory requirements.

Rules-based: The speed limit is 55 miles per hour.

Principles-based: Drive at a speed appropriate to the location and conditions.

LifeScore Labs' AI governance principles

LifeScore Labs, by virtue of our parent company MassMutual, has had a principles-based approach in place since our inception. It guides how we develop predictive models and leverage AI.

By incorporating ethical considerations directly into AI acquisition, design and development, we can balance a need for innovation with responsible technology. Our principles-based approach helps proactively address potential risks and ethical concerns as we execute on our digital strategy.

LifeScore Labs and MassMutual’s AI governance program is sophisticated and mature, and it adheres to four basic, interrelated principles:

[Graphic: the four interrelated AI governance principles]


“In matters of style,
swim with the current;
in matters of principle, stand like a rock.”

— Thomas Jefferson


AI in general, and GenAI in particular, will only become more expansive. As industry adoption increases, ethical considerations must keep pace. A principles-driven approach lets human values guide technological development, allowing organizations to remain flexible and nimble while continuing to emphasize responsible AI development and use. Principles should drive AI governance policies, standards and controls so the insurance industry can take full advantage of what GenAI has to offer.


This blog post is based on interviews with Kevin Fitzpatrick, Head of Privacy, Data & AI Governance at MassMutual, and Ra’ad Siraj, Head of AI & Data Privacy Governance at MassMutual.