Almost 10 years ago, physician and data scientist Dr. Ruben Amarasingham founded Pieces Technologies in Dallas with a clear goal: use artificial intelligence to make clinical work lighter, not heavier. At a time when much of healthcare AI focused on prediction and automation, Pieces concentrated on something harder to quantify but more consequential—how clinicians actually think, document, and make decisions inside busy hospital workflows.

That focus helped Pieces gain traction with health systems looking for AI that could assist with documentation, coordination, and decision-making without disrupting care. But as hospitals began relying more heavily on AI for diagnosis, triage, and daily operations, the expectations placed on these tools changed. It was no longer enough for AI to sound impressive or move fast. It had to be trustworthy under real clinical pressure.

Pieces did not set out to become a case study in healthcare AI accountability. But over the past two years, that is effectively what it became. In 2024, a regulatory investigation by the Texas Attorney General’s office into the accuracy and safety of its systems forced the company to examine how its models behaved in real-world settings, how clearly their reasoning could be explained, and how quickly problems could be identified and corrected.

Rather than retreat, the company reexamined its models, documentation practices, and safeguards. Those efforts later became central to its September 2025 acquisition by Smarter Technologies, a private equity-backed healthcare automation platform formed earlier this year through the combination of SmarterDx, Thoughtful.ai, and Access Healthcare.
The purchase price was not disclosed.

Pieces’ journey captures a defining truth about healthcare AI today: the technology is no longer judged by ambition alone, but also by whether it can withstand scrutiny, explain itself under pressure, earn clinician trust, and operate safely in environments where the cost of error is measured in human outcomes.

FROM PROMISE TO PROOF

AI arrived in healthcare with big promises. It would ease physician workloads, speed decisions in emergencies, and cut through the complexity of modern care. Some of those promises materialized early. But as adoption spread, hospitals began to see the limits of systems that were impressive in theory but fragile in practice.

In early 2025, the U.S. Food and Drug Administration published updated guidance on AI and machine learning-enabled medical devices, calling for stronger post-market monitoring, clearer audit trails, and safeguards against model drift in high-stakes settings. The Federal Trade Commission reinforced that message through enforcement actions targeting exaggerated AI claims and misuse of sensitive health data.

Those signals changed the conversation, forcing many hospitals to ask vendors harder questions: How does your system reach its conclusions? Can clinicians understand and override its recommendations? And does the model behave consistently as conditions change?

For many AI companies, the excitement of the last decade no longer buys time. Proof does.

A REAL-LIFE TEST

Pieces encountered those expectations earlier than most. The regulatory scrutiny forced the company to confront how its models reasoned through patient data and how clearly that reasoning could be explained to clinicians and regulators alike.

But Amarasingham says the company’s mission never shifted.
“Our team is focused on building the tools to make life easier for physicians, nurses, and case managers who are carrying the weight of the health system every day,” he tells Fast Company.

That focus meant publishing method papers, sharing documentation with health systems, and creating processes that exposed when models struggled, drifted, or required recalibration. Those practices became foundational to the company’s next chapter.

Shekhar Natarajan, founder and CEO of Orchestro.ai and a longtime observer of healthcare regulation, sees this as part of a larger reckoning. Many AI companies, he says, relied on what he calls “emergent safety,” assuming ethical outcomes would arise naturally from good intentions and culture.

“That approach no longer holds,” Natarajan explains. Regulators now expect safety and accountability to be engineered into systems themselves, with reproducible reasoning, documented controls, and safeguards that hold up even when teams are stretched thin.

BUILDING TRUST

Trust in healthcare does not come from branding or inspiration. It comes from repeated proof that technology understands clinical work and behaves consistently under changing conditions. Clinicians want AI that respects the pace of the workday, adapts to the unpredictable rhythm of patient care, and reduces cognitive burden rather than adding to it. Above all, they want systems that behave predictably.

Pieces shaped its approach around these realities, building tools that work alongside clinicians rather than ahead of them and creating ways for teams to question the system’s conclusions. It also designed its internal processes to document when the model was correct, struggled, drifted, or needed recalibration. For Amarasingham, that kind of thinking was essential to the company’s progress.

“Innovation, to us, had to serve the care team first.
The goal was to reduce cognitive load rather than to add to it,” he says, a view that reflects a growing consensus in healthcare AI research.

That emphasis aligns with what independent clinicians say is holding healthcare AI back.

Dr. Ruth Kagwima, an internist at Catalyst Physician Group in Texas, says AI adoption stalls when tools disrupt already overloaded clinical workflows or fail to earn trust through clarity and validation.

“AI systems that succeed in hospitals are easy to understand, fit naturally into daily work, and show clear proof of safety and accuracy,” she says. “They have to protect patient data, respect clinical judgment, and improve care without adding friction.”

Another independent voice, Dr. Patience Onuoha, an internist affiliated with multiple hospitals in Indiana, points to the practical constraints that still slow adoption at the bedside. “Data is often messy and siloed, and new tools can disrupt already busy clinical workflows,” she says. “There are also real concerns around safety, bias, legal risk, and trusting algorithms that are not easy to understand.”

Natarajan believes this will be the defining standard of the next decade. In his view, companies survive regulatory pressure when they transform their internal principles into systems that can be inspected. They build clear chains of accountability, create evidence trails that reveal where bias may appear, and show clinicians not only how a model works but why it behaves the way it does.

IMPACT ON THE FUTURE

Healthcare AI is moving toward a world where oversight is a design requirement rather than an afterthought, especially as regulators demand documentation that spans the full lifecycle of a system.
They want performance data segmented across race, age, and medical conditions; assurances that the system cannot infer sensitive traits that patients never disclosed; and demonstrations of how quickly companies can detect and correct model drift.

Some of this momentum comes from damage that has surfaced over time. Recent research reported by the Financial Times, for example, found that some AI medical tools tended to understate the symptoms of women and ethnic minority patients, potentially worsening disparities in care because the models weren’t trained or evaluated for fairness and transparency.

Companies that adapt to this new reality will shape the next generation of clinical AI. Pieces now operates within this landscape. As part of Smarter Technologies, it is working to bring its governance practices to a wider network of hospitals. That means integrating safety frameworks across larger datasets, more diverse populations, and broader distribution environments. It is difficult work, but also the kind of work that defines leadership in a field where the cost of failure is measured in human outcomes.

A NEW CHAPTER

Healthcare AI is entering a consequential phase of growth, one in which the safety of AI systems matters far more than headline-grabbing breakthroughs.

As hospitals sharpen their expectations for AI, Amarasingham believes the industry will need to adopt a different mindset. “In healthcare and AI, you’re not playing to win once and for all; you’re playing to keep playing, keep learning, and keep improving outcomes for patients,” he says.

The work, he adds, will never be finished, because the rules shift and the needs evolve. What matters is whether companies choose to design for that reality. In other words, AI in healthcare will advance only as fast as it earns trust. And that means healthcare AI vendors and buyers must, now more than ever, commit to steady, transparent work that stands up under pressure.