Freddy del Barrio on the Crucial Gap Between AI Innovation and Real-World Healthcare

edited by Entrepreneur UK | Apr 12, 2026
Freddy del Barrio

Today, AI is ubiquitous across nearly every sector, yet a misguided focus on AI capability over practical application may be stalling its adoption in healthcare. While the technology itself is advancing, Freddy del Barrio, founder of Companion AI, argues that a persistent implementation failure remains, stemming from misalignment with real-world clinical workflows and how care is actually delivered.

The healthcare AI market is projected to expand to $187 billion by 2030, a number that reflects its vast potential, yet studies highlight that its usage is largely reduced to analyzing medical data generated from electronic health records, genomic tests, and imaging scans. It’s this implementation gap that Freddy builds his work around.

“AI doesn’t fail in healthcare because it’s not smart enough. It fails because it’s not used in a way that maximises its potential. And if it doesn’t fit into the workflow, it doesn’t exist,” Freddy explains.

He believes that founders often enter healthcare preoccupied with what AI can do, while operators evaluate what it can change within existing routines, a distinction that defines adoption outcomes. “I think leaders hesitate. Not because they don’t believe in AI, but because they don’t see a clear path to implementation without disruption. But the key is to skip that translation layer and adopt it at the workflow level,” he says.

In his view, three recurring miscalculations define where systems in healthcare break down.

The first, Freddy points out, is an overemphasis on demonstration, which can lead to products that perform well in controlled environments yet struggle in general practice. From his perspective, many teams design for the end-user interface while overlooking the broader system surrounding it: staff operating under time pressure, documentation requirements, and regulatory oversight. If a tool disrupts established patterns, Freddy notes, it risks immediate rejection.

The second issue he highlights comes from treating AI products like consumer productivity tools rather than regulated, workflow-heavy systems. In care settings, he observes, reliability drives decision-making. “Instead of rewarding novelty, healthcare rewards reliability,” he says. “If systems introduce additional steps or require adaptation from already stretched teams, they become unpopular.” Meaningful adoption, in this context, depends on whether workload measurably decreases.

Freddy attributes the third gap to ownership. If a tool doesn’t have a defined responsibility for proper implementation, he believes that dashboards and alerts only accumulate, hindering existing workflows. “Insight without clear accountability and utility isn’t likely to be a catalyst for action; if it’s not tied to ownership, it just becomes noise,” he adds.

In that same vein, Freddy defines utility as something grounded in execution: a function of how a tool operates within existing systems. Utility, to him, depends on three conditions: integration into current systems, immediate reduction in workload, and a clear path to action. Without those elements, Freddy argues, adoption stalls regardless of model capability.

“Healthcare runs on patterns over time. Point-in-time insights don’t move outcomes,” he says. This points to what he sees as a missing layer in healthcare AI systems: continuous human context. Even when clinical data is structured and workflows are well-defined, Freddy highlights that behavioral and emotional signals can remain fragmented, and he sees closing this gap as imperative for the next phase of AI integration within healthcare.

His approach at Companion AI, a longitudinal platform developing AI infrastructure for healthcare and care facilities, reflects that direction. The tool is designed to integrate with cloud-based healthcare software and function as an augmentation layer that surfaces behavioral and emotional signals to care teams. Freddy underscores the platform’s role as an AI infrastructure layer that keeps clinical authority with practitioners. “The moment AI tries to replace clinical judgement, it’s undoubtedly introducing risk. Companion AI is primarily acting like a supportive layer; it’s not making any decisions,” he states.

In the context of AI integration in the healthcare sphere, Freddy isn’t cavalier about regulatory considerations and their role in shaping how these systems are deployed. He notes that compliance with frameworks like HIPAA and GDPR requires careful handling of sensitive data, with consent and encryption built into the system architecture. He sees these requirements as operational constraints that must be addressed early in development.

Adoption, in his view, scales at the institutional level. He argues that individual usage patterns matter less than alignment with workflows, staffing structures, and economic incentives. “AI is horizontal,” Freddy says. “Implementation is always vertical.”

The development of Companion AI reflects contributions from the company’s in-house team, who are focused on integration and operational fit. “We are building this together, strengthened by the same purpose: ensuring that continuous care is accessible globally without compromise or operational overload,” he says.

He emphasizes the importance of building infrastructure that aligns with systems of record, allowing AI to function as part of the environment rather than as an external layer.

As the industry moves toward a phase where operationalization defines success, Freddy believes that systems that capture signals and feed them back into compassionate care delivery will shape outcomes over time.

Ultimately, the distinction between capability and utility, Freddy insists, determines whether AI remains experimental or becomes embedded in everyday care, and that, he believes, is how AI can shift from concept to infrastructure.