By James Benham, Co-Founder and CEO of JBK and author of Be Your Own VC
Last year, I watched a carrier deliver a demo of an AI tool that could summarize a complex medical record in under ninety seconds. The room was impressed. Six months later, I asked how adoption was going. The project lead paused, then said something I’ve heard too many times: “The adjusters just aren’t using it.” The technology worked fine. The business case was solid. But nobody had planned for the people who were supposed to change how they work every day.
I’ve been building technology for the insurance industry since I started JBK out of my dorm room at Texas A&M in 2001, and that story is not new to me. I saw the same pattern in construction tech. I saw it in my own company. The tool delivers on its promise in a controlled environment, and then it meets the reality of how people actually operate. AI in workers’ comp is following the same arc right now, and the gap between what the technology can do and what organizations are getting out of it is widening fast.
BCG’s Build for the Future 2024 Global Study found that insurers are among the fastest AI adopters across all industries, on par with technology and telecom. But only about 7% of insurance carriers have successfully scaled AI initiatives beyond the pilot stage. The rest are either locked into small experiments or haven’t started at all. And here’s the finding that should matter most to claims leaders: 70% of the scaling challenges come down to people and process. Not algorithms. Not infrastructure. Skills gaps. Cultural resistance. Unclear ownership. A disconnect between the AI initiative and the business priorities it was supposed to serve. BCG’s recommendation is to put just 10% of your resources toward the algorithms themselves, 20% toward technology and data, and the remaining 70% toward the human side of the equation.
Most of us are doing the opposite. We’re spending the bulk of our energy evaluating vendors and comparing feature sets when the real work, the work that determines whether a pilot becomes part of how your team actually operates, is happening at the adjuster’s desk. It’s in the supervisor’s weekly check-in. It’s in the governance conversation that keeps getting pushed to next quarter.
Let me start with what I think is the most underestimated factor in the whole equation: whether your adjusters actually trust what the AI is telling them.
When an AI tool generates a medical summary or suggests a triage path, the adjuster has to decide in the moment whether to rely on that output or redo the work manually. If they don’t trust it, they redo it. Every time. And now you haven’t saved anyone a step; you’ve added one. Trust is not something you build with a slide deck showing accuracy metrics. It’s built claim by claim, when the adjuster sees the tool get the details right in their jurisdiction, with their mix of injuries, under their specific regulatory requirements. That means giving adjusters a real feedback loop where they can flag errors, see corrections reflected in the system, and feel like they’re shaping the tool rather than being monitored by it.

The Workers’ Compensation Research Institute published a study in June 2025 looking at AI’s promises and challenges in our industry, and one of the clearest themes was that AI’s value depends on how transparent it is, how well it’s governed, and whether it genuinely supports the mission of helping injured workers recover. The report also makes an important point that I think gets lost in a lot of the hype: AI should not replace the first human contact in a claim. Research shows that injured workers often seek attorneys because they believe their employer questions the legitimacy of their injury or thinks their claim has been denied. Those are communication failures, not legal inevitabilities. AI can help capture what happened, but the adjuster is the one who helps the worker understand what happens next.
If your adjusters are quietly working around the AI instead of with it, your pilot is already failing. You just don’t see it in the dashboard yet.
The second piece of this that I don’t think gets enough attention is regulatory scrutiny. It’s accelerating, and it has real implications for how you design AI into your claims workflows.
Florida’s HB 527, which is set to take effect July 1, 2026, would prohibit algorithms, AI systems, and machine learning from being used as the sole basis for adjusting or denying a claim. During committee review, lawmakers adopted an amendment that specifically expands the bill to include workers’ compensation claims. The bill requires a qualified human reviewer to examine each claim decision and formally approve any denial, and it directs insurers to document how AI tools were used during the evaluation process. Over in Arizona, HB 2175 was signed into law in May 2025 and takes effect July 1, 2026. While it’s focused on health insurance rather than workers’ comp specifically, it sends the same signal: a licensed medical professional must personally review any denial involving medical necessity, and cannot rely solely on recommendations from any other source, including AI.
These are not isolated proposals. They tell you where the regulatory environment is heading. If your AI deployment does not have a genuine human-in-the-loop framework from the start, one that is baked into the workflow and not just listed on a compliance form, you’re building something that may not survive its first regulatory review. The claims leaders who get this right will design their tools so the adjuster remains the decision-maker. The technology serves as a well-informed assistant that surfaces information, identifies patterns, and flags risks, but never signs off on a determination by itself.
The third part of the formula, and the one that ties everything together, is change management. I don’t mean a training webinar and a PDF quick-start guide. In Be Your Own VC, I write about how bootstrapped companies survive by making every resource count. When you’re funding innovation from your own results, you cannot afford a failed rollout. That discipline translates directly into workers’ comp, where every dollar moved into a new tool or pilot displaces something you could have used to support adjusters, nurses, or injured workers.
A real change management plan starts with identifying a clinical or operational champion who owns the rollout within the claims team. Not in IT. Not in an innovation lab. Someone on the floor where the work gets done. It means setting honest pilot criteria: what does success look like at 30, 60, and 90 days, and what triggers a decision to expand, adjust, or walk away? It means protecting time for adjusters to learn the tool without piling it on top of a full caseload. And it means measuring what matters to the people actually doing the work. Not just processing speed, but whether the tool helps them make better decisions on the claims that keep them up at night.
The workforce pressure makes all of this more urgent, not less. The U.S. Bureau of Labor Statistics projects the insurance industry could lose roughly 400,000 workers through attrition by 2026, driven largely by retirements. AI should be part of the answer to that pressure. But if an adjuster’s first experience with your new tool is confusion, extra clicks, and output they don’t trust, you have made their day harder, not easier. And in an environment where experienced people are already walking out the door, that is a cost you can’t afford.
So what does the formula look like when it actually works? In my experience, the organizations that get AI adoption right in workers’ comp share a few things in common.
They start narrow. They pick one use case where the pain is obvious and the value is measurable. Maybe it’s cutting the time adjusters spend on initial medical record review, or flagging claims with early indicators of complexity so a supervisor can intervene sooner. They resist the temptation to buy a platform that promises to solve everything at once.
They invest in the relationship between the adjuster and the tool. They treat adoption as an ongoing conversation, not a launch event. They empower frontline supervisors to be honest about what’s working and what isn’t, and they make sure that honesty actually reaches the vendor or product team in a form that changes something.
They build governance from day one. Not because a regulator told them to, but because governance is how you maintain trust with injured workers, employers, brokers, and your own people. If you cannot explain how your AI reached a recommendation in plain language, you are not ready to put it in front of a claim.
And they measure success in terms that matter to the people the system is supposed to serve. Does this shorten the time to first meaningful contact with the injured worker? Does it help identify the right care pathway earlier in the claim? Does it free the adjuster to spend more time on the complex human conversations that actually change outcomes? Those are the metrics that tell you whether your AI investment is earning its place or just taking up space on a server.
Workers’ comp does not have a technology shortage. The tools available today are more capable than anything we’ve had before. What we have is a follow-through shortage. The organizations that win with AI will be the ones that treat implementation as seriously as they treat procurement, that fund the human side of change with the same rigor they bring to evaluating features and pricing.
You do not need to be first. You need to be intentional. Start with a problem your people actually feel. Earn their trust before you try to scale. Build governance into the design, not after the audit. And never lose sight of why any of this matters: there is an injured worker on the other end of every claim, and they deserve a system that gets better, not just faster.
James Benham
James is the Co-Founder and CEO of JBK, a global technology firm powering major carriers, and the Co-Founder of Terra, a cloud-native platform reimagining Workers’ Compensation. He also hosts The InsurTech Geek podcast and is a frequent speaker on insurance innovation.