Phidea
Published 2026-04-21

Legacy, modern, AI-native in US insurance software: a three-generation classification

Every piece of software a US carrier runs falls into one of three generations: legacy, modern, or AI-native. The classification is not marketing — it predicts how hard a migration is, how much retraining an ops team needs, and whether buying "AI-native" today means buying brittleness.

Legacy is the pre-cloud workflow rung. In many US claims shops, "legacy" is not even software — it is a process: a phone call to a call-centre, a clipboard inspection, a paper file, an offshore data-entry team. Where it is software, it is on-prem: OnBase or FileNet for documents, a mainframe for policy admin, an Excel rating sheet stapled into a PAS. Legacy is not automatically bad. It is predictable, cheap at the margin once installed, and well understood by regulators. It is just not getting better.

Modern is the 1995-2015 cloud / classical-ML / template-automation rung. CCC Intelligent Solutions (founded 1980 but modernised through the 2000s, public on Nasdaq since 2021) is the archetype in US auto claims. Verisk Xactware — used by 22 of the top 25 US property insurers — is the archetype in property estimating. Hyperscience is the modern IDP archetype: named a Leader in the 2025 Gartner Magic Quadrant for Intelligent Document Processing, deployed at Guardian Life, QBE, and Voya Financial.

Modern tools are SaaS, have real APIs, and use classical ML (template-based OCR, structured CNN pipelines, gradient-boosted decisioning). They are the default choice whenever enterprise depth matters more than frontier model quality.

AI-native is the post-2015 rung: tools built around deep learning or LLMs from day one. Tractable (damage appraisal, $180M raised cumulatively through Series E, 20+ of the world's top-100 auto insurers), Hover (3D property from smartphone photos, funded by Travelers + State Farm + Nationwide), Federato (RiskOps underwriting workstation, $180M raised including a Goldman Sachs Series D), Shift Technology (AI fraud detection, unicorn, 100+ carriers across 25 countries), Hi Marley (AI messaging on the FNOL layer), Roots Automation (LLM-tuned IDP with InsurGPT).
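Read as data, the three rungs reduce to a small taxonomy. A minimal sketch in Python, with hypothetical names (`Tool`, `classify`) and a deliberately crude centre-of-gravity rule: cloud delivery and a deep-learning core as the two axes.

```python
from dataclasses import dataclass
from enum import Enum


class Rung(Enum):
    LEGACY = "legacy"        # pre-cloud: on-prem, paper, mainframe
    MODERN = "modern"        # SaaS + classical ML, real APIs
    AI_NATIVE = "ai_native"  # deep learning / LLMs from day one


@dataclass
class Tool:
    name: str
    cloud: bool          # delivered as SaaS?
    deep_learning: bool  # is the CORE product built on DL/LLMs?


def classify(tool: Tool) -> Rung:
    """Assign a rung from a tool's centre of gravity, not its newest feature."""
    if not tool.cloud:
        return Rung.LEGACY
    if tool.deep_learning:
        return Rung.AI_NATIVE
    return Rung.MODERN


# Hyperscience ships LLM features, but its core predates them: modern.
print(classify(Tool("Hyperscience", cloud=True, deep_learning=False)).value)  # modern
```

The point of the `deep_learning` flag being about the core, not the feature list, is exactly the "centre of gravity" rule the article applies to Hyperscience and CCC below.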

Three practical heuristics when buying.

First, maturity sets the error ceiling, not the floor. A modern tool has 5-15 years of edge-case handling baked in, which caps how badly it fails on the tail; an AI-native tool has better core model quality but fewer edge cases covered. If the cost of a wrong answer is high (claim denial, underwriting error), weigh the edge-case depth. If the cost of slowness is higher, weigh the model quality.

Second, the stack rung determines the migration difficulty. Replacing a modern tool with an AI-native one is usually straightforward inside the same action — Tractable on top of or in place of CCC's estimating flow. Replacing a legacy workflow with an AI-native tool jumps two generations and surfaces all the hidden rules that lived in human heads.

Third, "AI-native" is not a guarantee that the vendor will survive. Modern vendors have 15+ years of revenue and enterprise muscle; an AI-native vendor may have 18 months of runway and investor patience. Use the scoring methodology published here — funding, named carrier deployments, analyst recognition — not vendor adjectives.
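The third heuristic can be sketched as a toy scoring function. The weights, caps, and normalisations below are illustrative assumptions, not the actual values of the methodology the article refers to:

```python
import math


def vendor_score(funding_musd: float,
                 named_carrier_deployments: int,
                 analyst_recognitions: int) -> float:
    """Toy survivability score in [0, 1]; weights and caps are assumptions."""
    # Diminishing returns on capital, capped at roughly $1B raised.
    funding = min(math.log10(1 + funding_musd), 3) / 3
    # Saturates at 20 named carrier deployments.
    deployments = min(named_carrier_deployments, 20) / 20
    # Saturates at 3 analyst recognitions (e.g. Gartner, Forrester, Everest).
    analysts = min(analyst_recognitions, 3) / 3
    return round(0.4 * funding + 0.4 * deployments + 0.2 * analysts, 2)
```

Whatever the real weights, the shape matters: log-scaled funding so a mega-round cannot mask an empty deployment list, and hard caps so no single axis dominates.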

Where the classification breaks. Transitions are fuzzy. Hyperscience is "modern" in this scheme because it predates LLMs, but it has shipped LLM-augmented capabilities; the rung it sits on is the one that matters for the risk profile of buying it, not its newest feature. CCC is "modern" because its core is a 1980s-rooted network, but its 2024-2026 AI features sit on the AI-native rung of specific actions. Treat the classification as applying to a tool's centre of gravity, not its marketing.

Frequently asked

Is AI-native always better than modern?

No. AI-native tools usually have better core model quality on a fresh input, but modern tools have more edge-case handling, more enterprise depth, more compliance muscle, and longer survivability as vendors. For high-stakes, low-velocity workflows, modern wins more often than the hype suggests.

Can a modern tool become AI-native through product updates?

It can ship AI-native features, but the classification here refers to the centre of gravity of the product. Hyperscience shipping an LLM capability does not move it to the AI-native rung, because the dominant product muscle, governance, and customer expectations are still on the modern rung.

What if my carrier still runs legacy systems?

Most do, including all of the top 10 US P&C carriers. Legacy is often the right baseline — it is predictable and already approved by regulators. The question is which specific workflow layers are worth modernising (FNOL intake, damage estimation, fraud detection) vs. which to leave alone (core policy admin). The stack classification helps you target the former without touching the latter.

How does this classification help with build-vs-buy decisions?

Buying an AI-native tool that lands on a greenfield workflow is the easy call. Buying an AI-native tool that replaces a modern tool requires you to re-litigate integrations, reporting, and compliance — usually a bigger project than the pitch suggests. Building internally only makes sense when no tool on the target rung covers the specific shape of your workflow, which is rarer than it sounds.
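That answer is a decision rule, and it can be sketched as a short function. The inputs and return strings are hypothetical, a paraphrase of the heuristics above rather than a published framework:

```python
def build_vs_buy(greenfield: bool,
                 replaces_modern_tool: bool,
                 rung_has_coverage: bool) -> str:
    """Toy paraphrase of the article's build-vs-buy heuristics."""
    if not rung_has_coverage:
        # Rare case: no tool on the target rung fits the workflow's shape.
        return "build"
    if greenfield:
        return "buy (easy call)"
    if replaces_modern_tool:
        # Expect to re-litigate integrations, reporting, and compliance.
        return "buy (budget well beyond the pitch)"
    return "buy"
```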

Where is the AI-native rung thin or missing?

On the US Claims stack specifically, the AI-native rung is missing for claims routing and present but still consolidating for claim document extraction. See the Claims stack map for the current state.


Last modified 2026-04-21.