The insurance industry is investing more than ever in machine learning. Pricing models are becoming more powerful, more granular, and more dynamic. From how insurers assess risk to how they compete in the market, machine learning is driving a new wave of sophistication.
But a model isn’t great just because it’s accurate. It’s great when it’s understandable, when it gives us insight, not just output. When it enables decisions we can stand behind, not because the computer told us to, but because we actually understand what’s driving the result.
That’s the promise of explainable AI, and it’s becoming the new standard for insurance pricing.
We’ve moved past the days of relying solely on generalised linear models (GLMs) and overly simplified pricing approaches. Tools like gradient boosted machines (GBMs) have changed the game, allowing us to model intricate interactions, uncover nonlinear effects, and react to market shifts with extraordinary speed and nuance.
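As a rough illustration, here’s a minimal sketch of fitting a GBM to claim-frequency data with scikit-learn. The file name, feature names, and target are hypothetical, chosen purely for the example; a production pricing model would involve far more feature engineering, validation, and calibration.

```python
# Minimal sketch: fitting a GBM to policy-level claim-frequency data.
# The CSV file and column names are hypothetical, for illustration only.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split

policies = pd.read_csv("policies.csv")  # hypothetical policy-level dataset
features = ["driver_age", "vehicle_age", "annual_mileage", "vehicle_group"]
X, y = policies[features], policies["claim_frequency"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A Poisson loss is a common choice for frequency modelling; the GBM picks up
# nonlinear effects and interactions without them being specified by hand.
gbm = HistGradientBoostingRegressor(loss="poisson", max_iter=500, learning_rate=0.05)
gbm.fit(X_train, y_train)
print("Held-out R^2:", gbm.score(X_test, y_test))
```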
But with that power comes opacity.
GBMs and similar models often deliver impressive performance, but explaining why they’ve made a particular recommendation is a different story. And that matters. Because pricing isn’t just a data science problem, it’s a strategic decision. It has to be communicated, justified, challenged, and understood by more than just the model builders.
If underwriters, pricing committees, or commercial leaders can’t understand why a model suggests a certain action, they’ll hesitate. And rightly so. Blindly trusting output without context creates risk, not confidence.
For example, a model might apply an uplift in certain inner-city postcodes. But if that can’t be clearly linked to claims experience or real risk indicators, it raises questions: is this a valid signal, or a proxy that could unfairly impact certain groups? Without explainability, it’s hard to know and even harder to defend.
Explainability bridges that gap. It transforms the model from something you follow into something you trust. Something you can explain. Something you can use to inform smarter, faster, commercially sound decisions.
Yes, explainability satisfies governance. It supports regulatory expectations like those set out in the FCA’s General Insurance Pricing Practices (GIPP) reforms or the EU’s upcoming AI Act. Those frameworks are important, but they’re not the reason we prioritise explainability.
We do it because when you can truly explain what your model is doing, everything gets better.
You start to see pricing as more than just a number. It becomes a window into customer behaviour, geographic variation, and competitive dynamics. Suddenly, you’re not just modelling risk, you’re understanding it in context. You’re uncovering where pricing logic breaks down, where opportunity exists, and where strategy can evolve.
And in a world where pricing is increasingly under public and political scrutiny, that clarity becomes essential. There’s growing debate around affordability, fairness, and the role of regulation in shaping market outcomes. Some call for rating factors to be published. Others argue that pricing controls are the answer to high premiums.
But there’s a reality we can’t ignore: removing risk-based differentiation doesn’t make risk disappear, it just redistributes it. If we’re not allowed to recognise key indicators of future claims, the outcome won’t be fairer. It will just be more arbitrary. Good risks end up subsidising bad. Products become blunter. And in the long run, coverage becomes unaffordable for everyone.
That’s why explainable pricing matters. Not just to meet compliance requirements but to keep insurance sustainable. Transparent models are how we defend intelligent decisions. They’re how we demonstrate that pricing is evidence-based, not discriminatory. They’re how we push back on simplistic reforms with real insight.
Because if you can’t explain how your model works or why you priced the way you did, you can’t participate in the bigger conversation about what fairness really means.
Explainability doesn’t just protect pricing. It protects the principles that make insurance work.
That’s exactly how we built Apollo, our machine learning pricing engine at Consumer Intelligence.
Apollo is built to predict with power, yes, but more importantly, it’s built to explain. Every output is designed to be interrogated, unpacked, and understood. We use a range of XAI tools, including SHAP, H-statistics, partial dependence plots (PDPs), and two-way PDPs, to understand model behaviour from multiple angles. These tools don’t exist in isolation; they’re used in combination to validate the logic behind the model and ensure it’s telling us something meaningful, not just mathematically plausible.
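To make that concrete, here’s a minimal sketch of the kind of interrogation those tools support, assuming a fitted tree-based model and feature matrix like the ones sketched earlier. It uses the open-source shap package and scikit-learn’s inspection module for illustration; it is not the Apollo codebase itself, and H-statistics are omitted because they typically require a separate computation or package.

```python
# Minimal sketch of interrogating a fitted GBM (e.g. the `gbm` and X_train above)
# with SHAP and partial dependence. Feature names are hypothetical.
import shap
from sklearn.inspection import PartialDependenceDisplay

# SHAP: per-policy attributions showing which features drive each prediction.
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_train)
shap.summary_plot(shap_values, X_train)  # global view of feature effects

# Partial dependence: the average effect of one feature, plus a two-way PDP
# to surface a potential interaction between two features.
PartialDependenceDisplay.from_estimator(
    gbm, X_train,
    features=["driver_age", ("driver_age", "annual_mileage")],
)
```

Used together, views like these let you check whether an uplift traces back to a named, plausible risk driver rather than a statistical artefact.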
That process helps us, and our clients, go beyond surface-level outputs. We can see where a model’s logic holds up commercially and where it needs to be reviewed, recalibrated, or simplified to support confident decision-making.
In combination with our postcode classifier, which draws on over 170 engineered features spanning crime, commuting patterns, socio-demographic indicators, and weather data, we’re able to uncover granular insights about how different risks behave and how pricing strategies can be tuned in response.
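Purely as an illustration of the pattern, not of the classifier itself, the sketch below joins hypothetical postcode-level features onto policy data before modelling; the real feature set, sources, and groupings are proprietary.

```python
# Illustrative sketch of enriching policy data with postcode-level features
# before modelling. All file and column names are hypothetical.
import pandas as pd

policies = pd.read_csv("policies.csv")                  # policy-level data with a postcode column
postcode_feats = pd.read_csv("postcode_features.csv")   # e.g. crime, commuting, weather indicators

# Join engineered postcode features onto each policy by postcode sector.
enriched = policies.merge(postcode_feats, on="postcode_sector", how="left")

# The enriched frame then feeds the same GBM-plus-explainability workflow above,
# so any postcode-driven uplift can be traced back to named, interpretable features.
```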
Explainability, here, isn’t a post-hoc check. It’s a strategic asset that’s baked into how we model, interpret, and act.
The direction is clear. In a world of increasing complexity and tighter regulatory scrutiny, the real winners won’t be those who build the most complicated models, they’ll be the ones who understand them best. The ones who can explain what’s happening beneath the surface. The ones who turn complexity into clarity, and clarity into action.
That’s what we’re building at Consumer Intelligence.
Explainability isn’t just a layer we add to models after the fact. It’s a mindset that runs through everything we do. It’s how we unlock insights our clients can use and make sure the decisions they make with us are ones they can defend and be proud of.
Because in pricing, the real value isn’t in predicting the right number. It’s in knowing why it’s right and what to do next.
Because it’s one thing to follow a model. It’s another to stand behind it.