
The ins and outs of model monitoring

It's important for your data science team to monitor predictive model inputs and outputs

It’s a foregone conclusion that accelerated underwriting improves efficiency and customer experience in buying life insurance. And the industry knows it. According to a 2021 LIMRA study1, 91% of respondents had plans to incorporate some form of automated UW into their process.

One of the toughest challenges, however, is ensuring that accelerated UW delivers results in line with pricing expectations, and that the greater efficiency and improved customer experience it delivers are not undermined by lost protective value.

Accelerated UW programs require some combination of automation and alternative data — data to replace traditional medical exams and labs and automation to process that data efficiently. As a result, accelerated UW typically uses advanced predictive models to assess risk.

New era of monitoring

Of course, carriers have monitored underwriting for decades, whether via a post-issue audit of traditional underwriting or random holdouts for a predictive model. With today’s highly automated programs, it’s important to have a comprehensive testing program to ensure the model is performing as expected.

A model that does not perform as expected can affect the customer experience, mortality results and even your business viability.

Unexpected results can be a problem for a number of reasons. Kicking more applicants out of accelerated underwriting contributes to a poor customer experience and adds time and costs to the underwriting process. Accelerating more applicants than expected can translate into paying more claims than anticipated, damaging profitability (and in extreme cases, even impacting business viability). Overestimating risk can mean overpricing your products and losing potential customers to the competition. All these contribute to a lack of confidence and buy-in from the powers that be.

Enter model monitoring, a critical tool to verify the ongoing performance and integrity of an accelerated UW program.


Two types of monitoring

There are two types of monitoring that are important to your accelerated UW program:

Program monitoring has been conducted by most carriers since well before predictive modeling arrived on the scene. As its name implies, it tracks a carrier’s entire UW program. And today, a carrier’s predictive risk model is part of that program.

Program monitoring, such as a post-issue audit, seeks to ensure that your program is producing UW decisions that align with outcome expectations for mortality, risk distribution and risk classes. It checks for potential blind spots in underwriting — situations where a risk was overlooked or incorrectly assessed due to lack of information.

Model monitoring is more micro than program monitoring. It enables you to check the veracity of data inputs to — and outputs from — your accelerated UW model by tracking your model and your rules. (We’ve previously discussed why rules and models work better together.) 

Predictive model monitoring helps ensure that model inputs and outputs are matching expectations. It also helps you quickly identify potential problems with the model or its input data, reducing the chances that the problem will impact your book of business.

Monitoring ins and outs

Organizations that monitor inputs can identify both sudden and long-tail changes in data. Take, for example, prescription data. If your data provider recently onboarded a new pharmacy provider, applicants may start showing an increase in Rx fills. In reality, however, there may not be any additional risk.
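To make that concrete, here is a minimal sketch of the kind of check a data science team might run to flag a sudden shift in a single input. The feature (Rx fill counts), the z-score threshold and the simulated numbers are illustrative assumptions, not part of any specific carrier’s monitoring stack.

```python
# Illustrative sudden-shift check on a single model input (hypothetical Rx fill counts).
# The threshold and simulated data below are assumptions for demonstration only.
import numpy as np

def sudden_shift_alert(baseline: np.ndarray, recent: np.ndarray, z_threshold: float = 3.0) -> bool:
    """Flag the recent batch if its mean sits more than z_threshold standard errors
    from the baseline mean."""
    standard_error = baseline.std(ddof=1) / np.sqrt(len(recent))
    z = (recent.mean() - baseline.mean()) / standard_error
    return abs(z) > z_threshold

# A new pharmacy source inflates fill counts without any real change in risk.
rng = np.random.default_rng(0)
baseline_fills = rng.poisson(2.0, size=5_000)  # historical applicants
recent_fills = rng.poisson(2.6, size=500)      # after the new provider is onboarded
print(sudden_shift_alert(baseline_fills, recent_fills))  # True -> investigate the data feed
```

A flag like this prompts the team to investigate the data feed before concluding that the applicant pool actually became riskier.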

Data drift: individual data points that gradually change over time.

Catch your drift?

Organizations also monitor inputs to detect “data drift,” which occurs when individual data points gradually change over time. For example, due to the increased use of cholesterol-lowering drugs, the average applicant has a lower cholesterol level today than five years ago2. Therefore, a model that uses cholesterol as an input may need to account for this drift. A monitoring program’s ability to detect data drift can help answer a myriad of questions, such as:

  • Are you getting the correct data?
  • Is the API working?
  • Does the distribution of data match expectations?
  • Are there anomalies in the data?

To account for data drift, organizations should assess their model for anomalies and retrain or modify the model if drift is detected. Your data science team or partner can establish automatic reporting on data drift to help ensure that risk is being calculated from anticipated, high-quality data.
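One common way to automate that reporting is a population stability index (PSI) comparison between the data the model was built on and the data it sees today. The bin count, the 0.2 alert level and the simulated cholesterol values below are rule-of-thumb assumptions, not figures from this article.

```python
# Illustrative PSI check for gradual drift in one numeric input (e.g., total cholesterol).
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the binned distribution of recent data against the baseline distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0) in sparse bins
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(1)
training_cholesterol = rng.normal(200, 35, size=10_000)  # distribution when the model was built
current_cholesterol = rng.normal(188, 33, size=2_000)    # lower today, e.g., wider statin use
print(f"PSI = {population_stability_index(training_cholesterol, current_cholesterol):.3f}")
# A PSI above roughly 0.2 is a common trigger to investigate and possibly retrain.
```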

Monitoring outputs

Model monitoring is also essential for ensuring the continued veracity of outputs, meaning that you receive anticipated outputs from a given set of inputs. It also helps detect “concept drift” — changes in output that may indicate that your model is making bad calls.

Concept drift: changes in output that may indicate that your model is making bad calls.

Model monitoring can detect a change in the distribution of outputs, which may indicate a subtle but significant input issue, such as bad data points, features, etc. You can, for example, catch a problematic risk score distribution or a time lag in receiving a risk score before it corrupts your entire book of business.
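As a sketch of what an output check might look like, the snippet below compares the recent mix of accelerated-UW decisions against the mix assumed at pricing using a chi-square test. The class labels, expected percentages and alert level are hypothetical placeholders.

```python
# Illustrative output-distribution check: does the recent risk-class mix still match expectations?
from collections import Counter
from scipy.stats import chisquare

# Hypothetical class mix assumed at pricing; a real program would use its own targets.
EXPECTED_MIX = {"Preferred Plus": 0.30, "Preferred": 0.35, "Standard": 0.25, "Refer to UW": 0.10}

def output_mix_alert(recent_decisions: list[str], alpha: float = 0.01) -> bool:
    counts = Counter(recent_decisions)
    observed = [counts.get(cls, 0) for cls in EXPECTED_MIX]
    expected = [share * len(recent_decisions) for share in EXPECTED_MIX.values()]
    # A small p-value means the observed mix is unlikely if the expected mix still holds.
    return chisquare(observed, f_exp=expected).pvalue < alpha

# Example: the model starts accelerating noticeably more Preferred Plus cases.
recent = (["Preferred Plus"] * 420 + ["Preferred"] * 330
          + ["Standard"] * 190 + ["Refer to UW"] * 60)
print(output_mix_alert(recent))  # True -> the output distribution has shifted; investigate
```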

Monitoring predictive model output is a useful compass for your digital underwriting road map because it reflects ever-changing applicant demographics and market forces.

Maintenance monitoring from day one

The team or partner that created and implemented your predictive analytics should also be responsible for developing a monitoring program specific to your ecosystem, appetite for risk and underwriting goals. Monitoring controls track the following (a simple reporting sketch follows this list):

  • How your risk model is performing
  • Whether there are shifts in the distribution of risk and risk classes due to the model
  • Fluctuations in model inputs
  • Fluctuations in model outputs
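Here is a minimal sketch of how those controls might be rolled up into one recurring report. The field names, the AUC floor of 0.70 and the PSI ceiling of 0.2 are illustrative assumptions, not recommendations.

```python
# Illustrative roll-up of the monitoring controls above into one recurring report.
from dataclasses import dataclass, field

@dataclass
class MonitoringReport:
    period: str                          # e.g., "2024-Q2"
    model_auc: float                     # how the risk model is performing
    risk_class_psi: float                # shift in the risk-class distribution
    input_alerts: list[str] = field(default_factory=list)   # flagged input fluctuations
    output_alerts: list[str] = field(default_factory=list)  # flagged output fluctuations

    def needs_review(self) -> bool:
        # Thresholds are placeholders; a real program would set them with its actuaries.
        return (self.model_auc < 0.70
                or self.risk_class_psi > 0.2
                or bool(self.input_alerts)
                or bool(self.output_alerts))

report = MonitoringReport("2024-Q2", model_auc=0.74, risk_class_psi=0.27,
                          input_alerts=["Rx fill count mean shift"])
print(report.needs_review())  # True -> escalate to the underwriting and actuarial teams
```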

Every LifeScore Labs–produced risk model includes ongoing monitoring assistance that aligns with your underwriting objectives and appetite for risk.

Monitoring is an essential investment for any carrier that wants to gain the full benefits of predictive modeling in risk assessment. It provides a way to balance innovation with reliability. By ensuring that your program delivers the results you’re expecting, you and your customers benefit from the improved speed, convenience and precision of accelerated underwriting.

 

1 LIMRA, Automated and Accelerated Underwriting: Life Insurance Company Practices in 2022.
2 Journal of the American Heart Association, US Trends in Cholesterol Screening, Lipid Levels, and Lipid-Lowering Medication Use in US Adults, 1999 to 2018.