
From Images to Decisions: An 11-Month Journey of Transformation for a Chinese Medical AI Team

Jan 22, 2026

In late 2025, a paper published on the preprint platform arXiv attracted considerable attention in the cardiovascular intervention community.

 

The paper reported that a domain-specific medical AI system named CA-GPT—designed to support decision-making during Percutaneous Coronary Intervention (PCI)—outperformed a general-purpose large language model (OpenAI's ChatGPT-5) across ten key clinical indicators. In some scenarios, its performance even exceeded that of junior physicians with one to five years of experience.

 

 

For the R&D team behind the system, however, the paper was not the destination.

 

What truly mattered was the long, uncertain, and sometimes uncomfortable path that led there.

 

This is the story of how a small medical AI team spent 11 months crossing the hardest gap in clinical AI: the distance between seeing information and making decisions.

 

 

From OCT Data to Clinical Decisions

 

Optical Coherence Tomography (OCT) is often called “the third eye” of interventional cardiologists. A thin optical fiber enters the coronary artery and produces high-resolution images that reveal plaque morphology and vessel structure in extraordinary detail.

 

For more than a decade, AI has been used to analyze OCT images in the Vivolight OCT system for tasks such as unstable plaque detection, calcified lesion assessment, and functional assessment. Many of these capabilities are already deployed in clinical workflows.

 

But something fundamental was missing. Doctors don’t just need more measurements. They need clear decisions.

 

Should a stent be implanted? Which diameter? What length? Is this lesion safe to treat, or should we step back?

 

Between raw imaging data and a real physician's decision lies a critical gap. No matter how accurate an algorithm is, if it cannot bridge that gap, it remains a tool—not a clinical partner.

 

That gap became the mission.

 

 

“The Last Mile” Is Always the Hardest

 

By early 2024, the Vivolight algorithm team had a clear vision: to turn years of expert knowledge into a system that could reason like an experienced physician—consistently, transparently, and under pressure.

 

They tried multiple approaches: general-purpose large models, API integrations, hybrid pipelines. Progress was slow. Each attempt felt close—but never quite enough.

 

As one team member put it:
——“You could feel you were approaching the answer. But every time, you were still one step away.”

 

 

 

A Turning Point: Stop Training Bigger Models

 

The breakthrough came in early 2025 with the open-sourcing of DeepSeek—not from training a larger model, but from asking a different question.

 

Instead of building another general AI, the team chose to anchor their system in what clinicians already trust: expert guidelines, industry consensus, structured reasoning, and explicit decision logic.

 

The idea was simple, but radical in practice:

● Let specialized vision models act as the “eyes”

● Let clinical guidelines and expert consensus form the “knowledge pool”

● Let a reasoning engine connect them into a transparent decision pathway
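The three-layer idea above can be sketched in code. This is a minimal, purely illustrative sketch, not Vivolight's implementation: the class name, fields, thresholds, and sizing rule are all hypothetical stand-ins for the actual vision outputs and guideline logic.

```python
from dataclasses import dataclass

@dataclass
class OctFindings:
    """Structured output of the vision layer (the 'eyes') -- illustrative fields only."""
    reference_diameter_mm: float   # mean reference lumen diameter
    lesion_length_mm: float        # measured lesion length
    heavy_calcification: bool      # flagged by a calcium-detection model

def recommend_stent(f: OctFindings) -> dict:
    """'Knowledge pool' + reasoning engine: explicit, inspectable rules
    standing in for guideline/consensus logic. Values are illustrative."""
    if f.heavy_calcification:
        return {"action": "prepare lesion first",
                "rationale": "heavy calcification flagged; consider plaque "
                             "modification before stenting"}
    # Size the stent from the measurements and record every step,
    # so the decision pathway stays transparent.
    diameter = round(f.reference_diameter_mm, 2)
    length = f.lesion_length_mm + 4.0  # cover lesion with ~2 mm margin each side
    return {"action": "stent",
            "diameter_mm": diameter,
            "length_mm": length,
            "rationale": f"diameter matched to reference lumen ({diameter} mm); "
                         f"length = lesion {f.lesion_length_mm} mm + 4 mm margin"}

decision = recommend_stent(OctFindings(3.0, 16.0, False))
print(decision["action"], decision["diameter_mm"], decision["length_mm"])
```

The point of the structure is not the specific numbers but the separation of concerns: the vision layer emits structured findings, the rules are explicit rather than buried in model weights, and every recommendation carries its own rationale.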

 

 

 

Learning the Hard Way—from Doctors

 

Early demos were humbling. Doctors didn’t ask how the AI worked. They didn’t care about prompts or architectures.

 

They asked questions like:

● “What stent would you choose?”

● “Is this lesion suitable for intervention?”

● “Why this diameter, not that one?”

 

When the system answered with vague explanations or generic language, the feedback was blunt. “This isn't AI. This is noise.” That comment from Dr. Zhu (CEO of Vivolight) hurt—but it clarified everything. Real clinical intelligence isn't about sounding smart. It’s about being precise, decisive, and accountable.

The team rebuilt again—this time modularly. Decision logic, reasoning steps, and outputs were stripped down, tested, and rebuilt at speed. Days blurred into nights. Iteration followed iteration.

 

Eventually, something changed.

 

Dr. Zhu looked at the system and said:
——“Okay. This actually feels like AI.”

 

 

 

Clinical Reality Is the Only Real Benchmark

 

By May 2025, the first version of the AI-OCT system was delivered—less than three months after project initiation. That same month, the innovation was showcased at OCC 2025. To everyone's delight, it stood out among 48 entries and was selected as one of only five distinguished projects. But recognition was not the goal. Validation was.

 

 

The real test came from clinical trials. The system entered pilot use in multiple hospitals such as The Second Affiliated Hospital of Air Force Medical University (Tangdu Hospital), Beijing Anzhen Hospital Capital Medical University, Fuwai Hospital Chinese Academy of Medical Sciences, Zhongshan Hospital Affiliated to Fudan University, and The First Affiliated Hospital of Xi’an Jiaotong University, supporting PCI procedures in real-world settings. Feedback was direct, detailed, and often unforgiving.

 

Yet that feedback became the system's strongest asset. Doctors challenged assumptions, corrected logic, and pushed the AI to be clearer, stricter, and more consistent. Step by step, the system matured—not in isolation, but in dialogue with clinical reality.

 

 

Trust Is Earned, Not Claimed

 

In August 2025, under the leadership of the clinical team at Zhongshan Hospital Affiliated to Fudan University, The Second People's Hospital of Kashgar Prefecture successfully completed pilot clinical trials as the designated trial site for AI-OCT.

 

One detail mattered deeply to the team. In a real procedure, the AI recommended a stent diameter range of 3.0–3.25 mm with a length of 19 mm. The physician ultimately selected a 3.0 × 20 mm stent—well within the AI's recommended bounds.

 

That alignment was not coincidence. It showed something more important than agreement: shared reasoning. Clinical experts later summarized it simply: Medicine depends on consistency.


Standardized, guideline-based decision logic is not a limitation—it is the foundation of safe, scalable care.

 

 

 

Evidence Over Claims

 

Under the spotlight of the Chinese Society of Cardiology (CSC) annual conference, CA-GPT was officially unveiled. Concurrently, preliminary clinical comparative data from various trial centers began to roll in, and the results were nothing short of exhilarating.

 

But the most meaningful feedback didn’t come from metrics.

 

It came from doctors saying:

——“This system helps me think.”
——“It gives me confidence in my decisions.”

 

 

 

What This Journey Really Meant

 

In December 2025, the team’s research was published on arXiv. For them, it wasn’t just a paper. It was proof that clinical AI doesn’t need to be louder, bigger, or flashier. It needs to be grounded, disciplined, and accountable.

 

True medical intelligence isn’t about replacing doctors. It’s about helping them make better decisions—especially when it matters most. And that journey, from images to decisions, is only just beginning.

 
