Exclusive Interview with Michael Privat, Chief Data and Engineering Officer at Availity, on AI and Its Future

Michael Privat

Michael Privat has been inside this issue longer than most people have been talking about AI. As Chief Data and Engineering Officer with 25 years in healthcare IT — one of the most data-intensive, compliance-heavy sectors in the world — he leads 500+ engineers and has watched the AI gold rush turn into a quiet reckoning. His read: the market has stopped rewarding ambition and started demanding evidence. Even as 74% of organizations say they want revenue growth from AI, just 20% are actually achieving it. The gap between those two numbers is where careers and budgets are getting eaten alive.

1. There’s been a lot of investment in AI over the past few years. From your perspective, where are most companies getting it wrong?

They’re treating AI like a product launch instead of an operational shift. You buy the tools, you announce the initiative, you put it in the earnings call, and then you wonder why nothing changed. Call it what it is: a strategy problem.

The companies getting this wrong are the ones that started with the technology and worked backward. They asked “What AI should we use?” before they asked “What problem are we actually trying to solve?” Those are very different questions, and confusing them is expensive.

I’ve also seen a lot of organizations deploy AI on top of broken infrastructure. That’s not an upgrade. That’s a faster way to scale your existing dysfunction. If your data is a mess, if your processes aren’t defined, if your teams don’t have clarity on outcomes, AI will just help you move in the wrong direction more efficiently. Garbage in, garbage out, only faster and at greater cost.

The companies getting it right started with a clear business outcome, worked backward to understand what data and infrastructure that outcome actually requires, and then built from there. That’s a much harder path. It’s also the only one that works.

2. Research suggests that a large percentage of enterprise AI pilots fail to deliver measurable financial impact. Why is that happening?

Because most pilots are designed to prove that AI is interesting, not to prove that it creates value. Those are two completely different objectives, and they produce two completely different outcomes.

A pilot that’s designed to impress will always succeed at impressing. It’ll look great in a demo. Leadership will nod along. The case study will get written. And then six months later, nobody can point to a dollar of impact.

What’s missing is accountability from day one. Before you run a single pilot, you need to answer three questions: What does success look like? How will we measure it? And if we hit those numbers, do we scale? Most organizations skip that conversation entirely. They spin up a proof of concept, generate some internal excitement, and then struggle to justify the next phase because they never defined what “done” looks like.
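As an illustration of what defining “done” up front can look like, here is a minimal sketch of a pilot charter captured as code rather than slideware. The metric names and thresholds are hypothetical, not anyone’s actual criteria; the point is that success, measurement, and the scale-or-kill decision are written down before the pilot runs.

```python
from dataclasses import dataclass

@dataclass
class PilotCharter:
    """Success criteria agreed on before the pilot starts, not after."""
    business_metric: str          # a number the business already tracks
    baseline: float               # measured before the pilot, not assumed
    target: float                 # what success means, in the metric's own units
    measurement_window_days: int  # how long we measure before deciding
    scale_decision: str           # what happens if we hit the target
    kill_condition: str           # what would make us stop

# Hypothetical example: a claims-processing pilot.
charter = PilotCharter(
    business_metric="avg_claim_processing_minutes",
    baseline=42.0,
    target=30.0,
    measurement_window_days=90,
    scale_decision="roll out to all regional processing teams",
    kill_condition="less than 10% improvement after the window closes",
)
```

If a pilot can’t fill in every field of something like this before it starts, that’s the conversation being skipped.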

The other issue is scope. Pilots succeed in contained environments, then die when you try to extend them into real operations, where the data is messier, the workflows are more complex, and the edge cases are brutal. If you’re not designing your pilot to survive contact with reality, you’re not really piloting anything. You’re doing theater.

3. Many organizations say they’re investing in AI for growth, but only a small percentage are seeing real returns. What’s creating that gap?

The gap is between organizations that invested in AI and organizations that invested in the conditions AI requires to work. Those are not the same investment.

AI is an amplifier. If you have strong data foundations, clear processes, and genuine alignment between your technical teams and your business objectives, AI amplifies all of it. If you don’t, AI amplifies that too, and the results aren’t pretty.

The organizations seeing real returns are the ones with compounding advantages that predate their AI initiatives. They’ve spent years building data assets, integrations, and institutional knowledge that can’t be replicated in a weekend. When they layer AI on top of that, the returns are real because the foundation is real.

The organizations stuck in the gap made the bet that AI would somehow substitute for the work they hadn’t done. It doesn’t. A VP at a logistics company can open Claude on a Friday night, describe her core routing problem, and have a working prototype by Sunday that replicates 70% of what she’s paying millions for in annual software contracts. But that prototype doesn’t come with eight years of compounding data, certified integrations, or compliance credentials earned in production environments. Features are replicable. Real value is not. The organizations winning understand that distinction at a fundamental level.

4. You’ve spent decades working in healthcare IT, which is one of the most complex data environments. How has that shaped your view on what it actually takes to make AI work?

Healthcare IT will dispel any fantasy that data is simple. You’re dealing with HIPAA compliance that’s architecturally incompatible with half the tools on the market. You’re dealing with data models that assume entirely different universes depending on which EHR system your partners use. You’re inheriting integrations held together by custom middleware that exists only in the mind of a contractor who left in 2019.

What that environment teaches you is that the gap between “the data exists” and “the data is usable” is enormous. It also teaches you that every “we’ll fix it later” you let slide becomes a compounding liability. The scar tissue accumulates. At Availity, we’ve processed billions of healthcare transactions. That’s not luck, and it wasn’t easy. It required relentless precision about data quality and a willingness to do the unglamorous work that nobody puts in a pitch deck.

The lesson I carry into every AI conversation: don’t ask whether you have data. Ask whether your data is structured, labeled, governed, and trustworthy enough to hand to a model and stake business outcomes on. Most organizations, when they answer that question honestly, discover they have significantly more work to do than they thought. And that work isn’t optional. You can’t skip it because AI is exciting.

5. There’s a lot of discussion around “AI-ready” infrastructure. What does that really mean in practice, and how does it differ from what’s being marketed today?

What’s being marketed today is, essentially, cloud-native, modern-looking architecture. And that’s not wrong, exactly. It’s just dramatically incomplete.

I’ve walked into systems where “modern” meant the UI looked clean while the back end was a monolith running on the same bones since the Obama administration. The database schema was designed when the company had twelve people. “Scalable” meant throwing hardware at it and praying. Pretty dashboards. Terrifying infrastructure.

AI-ready infrastructure in practice means your data is accessible, consistent, and governed. It means you have clear data lineage. You know where your data comes from, who owns it, and what transformations it’s been through. It means your systems can surface the right data at the right time without requiring heroic engineering efforts every single time. And it means your teams have enough observability into the system that when AI produces an output, you can evaluate whether to trust it.

That last part matters more than people realize. AI-ready also means audit-ready. Especially in regulated industries like healthcare, you can’t just deploy a model and walk away. You need to be able to explain what happened, why, and who’s accountable. Infrastructure that can’t answer those questions isn’t AI-ready, regardless of what the vendor’s website says.
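To make “audit-ready” concrete: here is a minimal sketch of the kind of record a system might attach to every model output, so that “what happened, why, and who’s accountable” is captured at inference time rather than reconstructed later. All field names are hypothetical illustrations, not a description of Availity’s systems.

```python
import json
from datetime import datetime, timezone

def audit_record(model_id, model_version, input_ids, output, owner):
    """Capture what happened, why, and who is accountable, at inference time."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # exactly which model produced this
        "input_data_ids": input_ids,     # lineage: which records went in
        "output": output,                # what the model actually said
        "accountable_owner": owner,      # a person, not a committee
    }

record = audit_record(
    model_id="claims-triage",
    model_version="2025-01-14",
    input_ids=["claim-88213", "policy-4471"],
    output={"route": "manual_review", "confidence": 0.62},
    owner="jdoe@example.com",
)
print(json.dumps(record, indent=2))
```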

6. How can companies tell if their current data infrastructure is strong enough to support meaningful AI outcomes?

Ask your team one question: if I gave you a business-critical decision to make tomorrow, could you pull the data you need, clean, current, and trustworthy, in less than a day? If the answer is anything other than yes, you have your answer.

Beyond that diagnostic, I look at a few specific things. First, data consistency: do different teams answer the same question with different numbers? If your sales team’s revenue figures don’t match your finance team’s, you have a data governance problem that will become an AI liability. Second, lineage: can you trace any piece of data back to its source? If you can’t explain where your training data came from and what happened to it along the way, you can’t trust what your model learned from it. Third, freshness: is the data your AI will act on the same data your business is actually operating on, in real time? Stale data produces confident, wrong answers, which is often worse than no answer at all.
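To make that audit concrete, here is a minimal sketch of the three checks as code. The numbers, record IDs, and tolerances are invented for illustration; the structure mirrors the consistency, lineage, and freshness diagnostics above.

```python
from datetime import datetime, timedelta, timezone

def check_consistency(sales_revenue, finance_revenue, tolerance=0.01):
    """Do different teams answer the same question with the same number?"""
    gap = abs(sales_revenue - finance_revenue) / max(abs(finance_revenue), 1e-9)
    return gap <= tolerance

def check_lineage(lineage, record_id):
    """Can this record be traced back to a named source system?"""
    return lineage.get(record_id, {}).get("source") is not None

def check_freshness(latest_record_at, max_age=timedelta(hours=24)):
    """Is the data the AI will act on the data the business is running on?"""
    return datetime.now(timezone.utc) - latest_record_at <= max_age

# Invented numbers: sales and finance within 1% of each other passes.
assert check_consistency(10_400_000.0, 10_430_000.0)
assert check_lineage({"claim-1": {"source": "partner-EHR-feed"}}, "claim-1")
assert check_freshness(datetime.now(timezone.utc) - timedelta(hours=2))
```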

The honest reality is that most organizations discover, when they do this audit, that their data infrastructure is significantly weaker than they assumed. That’s not a reason to panic. It’s a reason to be honest about your starting point and invest in the foundations before you invest in the models. The sequence matters.

7. You mentioned that the conversation is shifting from “Are you using AI?” to “What results is it driving?” What are boards and leadership teams asking for now?

They want receipts. The era of AI enthusiasm as a business model is over. You can’t walk into a board meeting with a demo and a vision anymore. You need numbers.

What I’m hearing from leadership teams now is a much sharper set of questions. Not “what AI initiatives do we have running?” but “what did those initiatives cost, what did they return, and how do we know that?” They want to see the before and after. They want to understand the unit economics. They want to know whether the value being claimed is incremental or whether it’s just efficiency that would have been found anyway.

There’s also a growing appetite for accountability structures. Boards are starting to ask who owns AI outcomes: not who owns the AI team, but who is on the hook when an initiative doesn’t deliver. That’s a very different conversation than the one we were having two years ago, and it’s a healthier one. The question has shifted from “are you using AI?” to “what results is it driving, who’s accountable for those results, and what’s the plan if they don’t materialize?” Those are the right questions. It took a few years of expensive pilots to get here, but we’re here.

8. What are some of the early warning signs that an AI initiative is unlikely to deliver real business value?

The first one is when you can’t articulate the business outcome in plain language. If the only people who can explain what an AI initiative is supposed to accomplish are the engineers building it, that’s a problem. Business value doesn’t live in a model. It lives in a decision, a process, a cost line, a revenue number. If you can’t connect your AI initiative to one of those, what you have is a science project.

The second warning sign is when the data question makes people uncomfortable. Any time I probe on data quality or governance and the room gets quiet, I know there’s a problem hiding underneath that nobody wants to talk about. The initiatives that succeed have leaders who can speak confidently about their data: where it comes from, how clean it is, who owns it. The ones that fail have leaders who change the subject.

Third is the absence of a defined failure condition. If you ask “what would have to be true for us to stop this initiative” and nobody has an answer, you’re running a program with no off switch. That’s how you keep funding something that isn’t working, because nobody wants to be the person who called it.

And finally, when the timeline is driven by announcement pressure rather than operational readiness. If you’re moving fast because leadership wants something to present at the next industry conference, you’re optimizing for optics. That never ends well.

9. How should companies rethink their approach if they want to move from experimentation to measurable results?

Start by killing most of your experiments. I know that sounds counterintuitive, but organizations that are stuck in pilot purgatory almost always have too many initiatives running and not enough resources going into any of them. You can’t scale five things at once. Pick the one with the clearest path to measurable value and actually finish it.

Then flip how you’re measuring success. Instead of asking “is our AI working?” ask “what business metric moved?” Those are different questions with different implications. The second question forces you to connect your initiative to something the business already cares about, which is the only connection that produces real accountability.

The other shift I advocate for is changing who’s in the room. Most AI initiatives are run by technical teams who report results to business leaders. The business leaders nod, ask a few questions, and go back to their day. That structure guarantees a disconnect. The organizations moving fastest are the ones where business owners are co-sponsoring AI initiatives from the start, not rubber-stamping them at the end. When the CFO has skin in the game, the conversation about measurable results gets very different very fast.

10. Where do you see the biggest disconnect between technical teams building AI systems and leadership teams expecting ROI?

The language gap is real, but it points to something more fundamental. Technical teams are optimizing for model performance and leadership teams are optimizing for business outcomes, and those are not the same thing. A model that is technically impressive can still produce no business value. And a model that is technically mediocre can still solve the exact problem a business needs solved.

I see this play out constantly: a technical team delivers a system that performs well on their benchmarks, and leadership is underwhelmed because the benchmarks had nothing to do with the business problem. Then technical teams get frustrated because they feel like their work isn’t being recognized, and leadership gets frustrated because they funded something that didn’t move anything that matters to them. Both sides are right. The problem is the gap was never closed at the beginning.

The fix isn’t more communication. It’s better problem definition before anyone writes a line of code. Get the business owner and the technical lead in the same room and don’t leave until you have agreement on: what problem are we solving, how will we know we’ve solved it, and what does the business outcome look like when this works? That conversation is uncomfortable because it forces clarity that nobody wants to commit to prematurely. It’s also the only conversation that produces AI systems that leadership teams are actually happy to fund a second time.

11. In your experience, what separates companies that are successfully scaling AI from those that remain stuck in pilot mode?

The ones scaling AI treated it as an operational discipline, not a technology initiative. That’s the cleanest way I can put it.

At Availity, AI coding assistants now generate the vast majority of all code written by our engineers. That’s hundreds of thousands of lines of AI-generated production code. That didn’t happen because we were early adopters of interesting technology. It happened because we built the infrastructure, the governance frameworks, and the accountability structures that let us deploy AI at scale without surrendering control over the outcomes. Engineers learned to read and judge code written by AI, where before their focus was on writing it.

The organizations stuck in pilot mode are usually missing one of three things. Either the data foundations aren’t there and they know it but won’t say it out loud. Or there’s an accountability vacuum: nobody owns the initiative end-to-end, so momentum dissipates every time it crosses a team boundary. Or they’re still treating AI as experimental when the rest of the world has moved on to treating it as operational.

The other thing I’d point to: the companies that scale are the ones that relentlessly challenge how they work. Every meeting, every process, every assumption is on the table. AI doesn’t scale into rigid organizations. It scales into organizations that are genuinely willing to ask whether the way they’ve always done something is actually the best way.

12. How should organizations be thinking about accountability when it comes to AI performance and outcomes?

The same way they think about accountability for any other significant business investment, with clear ownership, defined metrics, and real consequences when things don’t perform.

What I don’t see enough of is the equivalent of a P&L for AI initiatives. You have someone responsible for the model, someone responsible for the data pipeline, someone responsible for the infrastructure, but nobody who owns the business outcome end-to-end. When things go wrong, and they will, that structure produces a very productive blame cycle and absolutely no resolution.

My approach is to kill the debt and kill the factory that produced it. When we find a problem in how we build systems, whether it’s technical debt, bad process, or poor accountability design, our job isn’t just to fix the symptom. It’s to understand what systemic issue created it and eliminate that too. Otherwise you’re mopping the floor with the faucet running.

For AI specifically, that means organizations need someone accountable for the full cycle: the data going in, the outputs coming out, the business results on the other end, and the feedback loop that improves all of it over time. Not a committee. Not a shared responsibility model. One person. That level of accountability tends to clarify conversations very quickly.

13. There’s a lot of pressure to move quickly with AI adoption. How do you balance speed with building something that actually lasts?

The pressure to move quickly with AI adoption is real, but a lot of what gets called “speed” is theatrical. Seats licensed. Tools deployed. A press release. Most of that visible velocity has nothing to do with whether anything in the company actually works differently the next morning. The pressure is rarely coming from customers; it is coming from boards, analysts, and competitors trying to look unafraid. So the first thing I do is separate adoption velocity from outcome velocity. Adoption velocity is how fast you put AI in front of people. Outcome velocity is how fast a real workflow becomes cheaper, faster, or more accurate. They are not the same number, and treating them as the same is how organizations spend two years moving quickly toward nothing.

The teams that move fast in a way that lasts are not the ones loosening their controls. They are the ones who did the unglamorous infrastructure work (identity, lineage, audit, reproducibility) early enough that they can deploy real capability without spending six months on plumbing every time. Governance has to live as infrastructure, not as an approval committee, because committees do not scale at the speed of agentic systems. The pattern I trust is parallel fast-lane teams: small surface area, one real workflow, governance baked in from day one, measured against an outcome that shows up on the income statement. When something works there, you propagate it. When it doesn’t, you’ve spent very little to learn that. Speed and durability only look like opposites when the substrate underneath you is weak. Once the substrate is real, speed is just what you get for free. You can run fast on a tightrope; you just won’t make it very far.
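One way to read “governance as infrastructure, not an approval committee” is as a gate that runs in the deployment pipeline the way a test does. A minimal sketch, assuming a hypothetical deployment manifest and invented requirement names:

```python
REQUIRED = ("data_lineage", "audit_log_sink", "accountable_owner", "rollback_plan")

def governance_gate(deployment):
    """Return the governance requirements this deployment is missing.

    Runs in the pipeline like any other check: an empty list means the
    deploy proceeds. No committee convenes, and nothing ships without
    the basics wired in.
    """
    return [key for key in REQUIRED if not deployment.get(key)]

# Hypothetical manifest for an agentic workflow.
deployment = {
    "name": "prior-auth-agent",
    "data_lineage": "registry://claims/v3",
    "audit_log_sink": "s3://audit/prior-auth/",
    "accountable_owner": None,  # missing: this alone blocks the deploy
    "rollback_plan": "revert to manual review queue",
}

missing = governance_gate(deployment)
if missing:
    raise SystemExit(f"deploy blocked; missing governance metadata: {missing}")
```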

14. What role does data quality and governance play in determining whether AI initiatives succeed or fail?

It’s the whole game. Everything else is downstream of this.

You can have the best model in the world running on garbage data and you will produce confident, well-articulated, completely wrong outputs. That’s actually worse than having no AI at all, because at least when you don’t have an answer, you know you don’t have an answer. A bad AI answer comes dressed as a good one.

Data governance, in particular, is where most organizations are much further behind than they want to admit. Governance isn’t just about having a data dictionary or a data catalog. It’s about clear ownership, clear standards, and clear consequences for violating them. In healthcare, we don’t have the luxury of ambiguity on this. A compliance failure isn’t just a technical problem. It’s a patient safety problem, a regulatory problem, and an existential business problem all at once. That level of stakes teaches you very quickly what real data governance actually requires.

The organizations that get this right don’t treat data quality as a cleanup exercise. They treat it as a continuous operational discipline, the same way they treat security or reliability. They build it into how work gets done, not as a separate initiative that’s always running behind the main one. That shift in how you think about governance is actually more important than any specific tool or process you put in place.

15. Looking ahead, how do you see the enterprise AI landscape evolving over the next few years?

The hype cycle ends. What replaces it is an infrastructure conversation, and it is going to be brutal for the companies that mistook adoption for transformation. The real shift over the next few years is from AI as lookup to AI as loop — from systems that answer questions to systems that do work. Agentic workflows are where the operating leverage actually shows up. A chat assistant marginally helps an employee. An agent runs the workflow. That difference is where the unit economics finally move, and it is also where the unprepared get exposed, because agentic systems multiply whatever substrate sits underneath them. Clean infrastructure produces compounding leverage. Messy infrastructure produces autonomous mistakes at scale. The market will sort into two groups: those who use this moment to simplify and re-platform around agentic workflows, and those who bolt agents on top of what they already had and spend the rest of the decade explaining the dashboard.
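The lookup-versus-loop distinction is easiest to see in code. A chat assistant is a single call that returns an answer; an agent is a loop that plans, acts through tools, and checks its work until the job is done. A minimal sketch of that loop shape, where every name (the llm callable, the tool registry, the decision format) is a hypothetical stand-in rather than any particular framework’s API:

```python
def run_agent(task, tools, llm, max_steps=10):
    """AI as loop: plan, act through a tool, observe, repeat until done.

    `llm` is any callable that maps the conversation so far to a decision
    dict; `tools` is a dict of callables like {"query_claims_db": ...}.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = llm(history)                 # plan the next step
        if decision["action"] == "finish":
            return decision["result"]           # workflow complete
        tool = tools[decision["action"]]        # look up the chosen tool
        observation = tool(**decision["args"])  # act on a real system
        history.append({"role": "tool", "content": str(observation)})
    raise TimeoutError("step budget exhausted; escalate to a human")
```

A chat assistant is the `llm(history)` call by itself. The operating leverage, and the risk, lives in the loop around it: the tools act on real systems, so whatever substrate sits underneath gets multiplied.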

That said, this is the most interesting decade of my career to be doing this work. We finally have the leverage to remove the layers of complexity that engineers have been complaining about for two generations — the meeting, the handoff, the thousand-line stored procedure nobody wants to touch — and rebuild around agents that genuinely execute end-to-end. For people who love the craft, that is not a threat, it is a once-in-a-career opening. The work that gets left for humans is the work that was always worth doing in the first place: judgment, design, the hard problems, the parts that require taste. Teams will be smaller, sharper, and harder to copy. The companies that come out the other side will not just be more efficient — they will be the kind of places serious engineers actually want to work, because the substrate is clean and what is left on the table is genuinely interesting. Average is a choice, and the agentic era finally gives us the tools to refuse it.
