Study shows ChatGPT can produce medical record notes 10 times faster than doctors without compromising quality

bnew

Veteran
Joined
Nov 1, 2015
Messages
55,427
Reputation
8,215
Daps
156,646

1/8
@DKThomp
Fascinating paper (and post!) on AI in medicine

- cardiologists working with AI said the machine's diagnosis, triage and management were equal or better than human cardiologists in most areas

- the paper's summary comes with a generative AI podcast explaining the results

[Quoted tweet]
One of the most promising areas for the application of AI in medicine is scaling specialty expertise. There simply aren't enough specialist doctors to care for everyone in need. We believe AI can help.

As a first step towards that goal, we worked with the amazing Google medical AI team to tune and test their conversational agent AMIE in the setting of Stanford's Center for Inherited Cardiovascular Disease.

Unlike many medical studies of LLMs, we completed our testing not with curated cases or exam questions but real-world medical data presented in exactly the way we receive it in clinic.

Data were in the form of reports derived from multi-modal data sources including medical records, ECGs, stress tests, imaging tests, and genomic data. AMIE was augmented by web search and self-critique capabilities and used chain-of-reasoning strategies fine-tuned on data from just 9 typical patients.

What did we find?

1. Overall, AMIE responses on diagnosis, triage and management were rated by specialty cardiologists as equivalent to or better than those of general cardiologists across 10 domains.

2. Access to AMIE's responses improved the general cardiologists' responses in almost two thirds of cases.

3. Qualitative data suggested that the AI and human approaches were highly complementary with AMIE judged thorough and sensitive and general cardiologists judged concise and specific.

In conclusion, our data suggest that LLMs such as AMIE could usefully democratize subspecialty medical expertise augmenting general cardiologists' assessments of inherited cardiovascular disease patients.

Paper: arxiv.org/abs/2410.03741
Generative podcast describing the paper (!): shorturl.at/rdKZn
Stanford Center for Inherited Cardiovascular Disease: med.stanford.edu/familyheart
AMIE: arxiv.org/abs/2401.05654

Congrats to @DrJackOSullivan and @taotu831 for leading the charge on this work as well as the @StanfordDeptMed team and the amazing folks @Google led by @alan_karthi and @vivnat




2/8
@Jill992004231
AI is coming at us very, very quickly.



3/8
@ian_sportsdev
So? Tech is easy, politics is hard 😩



4/8
@fl_saloni
Cardiologists are the gatekeepers for adoption of this AI and most (NOT all) will create mistrust in AI to preserve their status and comp. How do you overcome this



5/8
@Spear_Owl
AI+DR>DR



6/8
@mario_anchor
The question is will specialty caregivers allow this. Their lobbying apparatus has already created an artificial doctor shortage to prop up wages.



7/8
@EscoboomVanilla
Also this!

Wimbledon staff left devastated after decision to break 147-year tradition and put 300 jobs at risk



8/8
@PatrickPatten8
I think medical and education is where AI will have the biggest impact. Hopefully America rethinks what an educated population looks like, cause right now 47% think fascism is a good idea... hopefully we can do better.




 


96% Accuracy: Harvard Scientists Unveil Revolutionary ChatGPT-Like AI for Cancer Diagnosis​


By Harvard Medical School, October 17, 2024


Identifying Cancer Cells
Scientists at Harvard Medical School have developed a versatile AI model called CHIEF that can diagnose and predict outcomes for multiple cancer types, outperforming existing AI systems. Trained on millions of images, it can detect cancer cells, predict tumor genetic profiles, and forecast patient survival with high accuracy.

A ChatGPT-like model can diagnose cancer, assist in selecting treatment options, and predict survival outcomes across various cancer types.​


Researchers at Harvard Medical School have developed a versatile AI model, similar to ChatGPT, that can perform a wide range of diagnostic tasks across various types of cancer.

The new AI system, described Sept. 4 in Nature, goes a step beyond many current AI approaches to cancer diagnosis, the researchers said.

Current AI systems are typically trained to perform specific tasks — such as detecting cancer presence or predicting a tumor’s genetic profile — and they tend to work only in a handful of cancer types. By contrast, the new model can perform a wide array of tasks and was tested on 19 cancer types, giving it a flexibility like that of large language models such as ChatGPT.

While other foundation AI models for medical diagnosis based on pathology images have emerged recently, this is believed to be the first to predict patient outcomes and validate them across several international patient groups.

“Our ambition was to create a nimble, versatile ChatGPT-like AI platform that can perform a broad range of cancer evaluation tasks,” said study senior author Kun-Hsing Yu, assistant professor of biomedical informatics in the Blavatnik Institute at Harvard Medical School. “Our model turned out to be very useful across multiple tasks related to cancer detection, prognosis, and treatment response across multiple cancers.”

The AI model, which works by reading digital slides of tumor tissues, detects cancer cells and predicts a tumor’s molecular profile based on cellular features seen on the image with superior accuracy to most current AI systems. It can forecast patient survival across multiple cancer types and accurately pinpoint features in the tissue that surrounds a tumor — also known as the tumor microenvironment — that are related to a patient’s response to standard treatments, including surgery, chemotherapy, radiation, and immunotherapy. Finally, the team said, the tool appears capable of generating novel insights — it identified specific tumor characteristics previously not known to be linked to patient survival.

The findings, the research team said, add to growing evidence that AI-powered approaches can enhance clinicians’ ability to evaluate cancers efficiently and accurately, including the identification of patients who might not respond well to standard cancer therapies.

“If validated further and deployed widely, our approach, and approaches similar to ours, could identify early on cancer patients who may benefit from experimental treatments targeting certain molecular variations, a capability that is not uniformly available across the world,” Yu said.

Training and performance​


The team’s latest work builds on Yu’s previous research in AI systems for the evaluation of colon cancer and brain tumors. These earlier studies demonstrated the feasibility of the approach within specific cancer types and specific tasks.

The new model, called CHIEF (Clinical Histopathology Imaging Evaluation Foundation), was trained on 15 million unlabeled images chunked into sections of interest. The tool was then trained further on 60,000 whole-slide images of tissues including lung, breast, prostate, colorectal, stomach, esophageal, kidney, brain, liver, thyroid, pancreatic, cervical, uterine, ovarian, testicular, skin, soft tissue, adrenal gland, and bladder. Training the model to look both at specific sections of an image and the whole image allowed it to relate specific changes in one region to the overall context. This approach, the researchers said, enabled CHIEF to interpret an image more holistically by considering a broader context, instead of just focusing on a particular region.
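The article describes CHIEF's training only at a high level; its actual architecture is not reproduced here. As a rough, hypothetical sketch of the general idea it describes (cutting a whole-slide image into patches, then combining patch-level features into one slide-level representation so local changes are weighed in a global context), here is a minimal attention-pooling example in NumPy. All names, sizes, and the random "embeddings" are stand-ins, not CHIEF's real components:

```python
import numpy as np

def tile_slide(slide: np.ndarray, patch: int = 256):
    """Cut a whole-slide image array into non-overlapping square patches."""
    h, w = slide.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(slide[y:y + patch, x:x + patch])
    return tiles

def attention_pool(patch_embeddings: np.ndarray, w: np.ndarray):
    """Score each patch, softmax the scores, and take a weighted sum,
    so the slide-level vector mixes local detail with whole-slide context."""
    scores = patch_embeddings @ w              # one scalar score per patch
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over patches
    return weights @ patch_embeddings          # weighted sum -> slide vector

rng = np.random.default_rng(0)
slide = rng.random((1024, 1024, 3))            # toy "whole-slide image"
tiles = tile_slide(slide)                      # 16 patches of 256x256
emb = rng.random((len(tiles), 64))             # stand-in patch embeddings
slide_vec = attention_pool(emb, rng.random(64))
print(len(tiles), slide_vec.shape)             # 16 (64,)
```

In a real pathology foundation model the patch embeddings would come from a trained vision encoder rather than random vectors, but the tile-then-aggregate structure is the same.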

Following training, the team tested CHIEF’s performance on more than 19,400 whole-slide images from 32 independent datasets collected from 24 hospitals and patient cohorts across the globe.

Overall, CHIEF outperformed other state-of-the-art AI methods by up to 36 percent on the following tasks: cancer cell detection, tumor origin identification, predicting patient outcomes, and identifying the presence of genes and DNA patterns related to treatment response. Because of its versatile training, CHIEF performed equally well no matter how the tumor cells were obtained — whether via biopsy or through surgical excision. And it was just as accurate regardless of the technique used to digitize the cancer cell samples. This adaptability, the researchers said, renders CHIEF usable across different clinical settings and represents an important step beyond current models that tend to perform well only when reading tissues obtained through specific techniques.

Cancer detection​


CHIEF achieved nearly 94 percent accuracy in cancer detection and significantly outperformed current AI approaches across 15 datasets containing 11 cancer types. In five biopsy datasets collected from independent cohorts, CHIEF achieved 96 percent accuracy across multiple cancer types including esophagus, stomach, colon, and prostate. When the researchers tested CHIEF on previously unseen slides from surgically removed tumors of the colon, lung, breast, endometrium, and cervix, the model performed with more than 90 percent accuracy.
 


Predicting tumors’ molecular profiles​


A tumor’s genetic makeup holds critical clues to determine its future behavior and optimal treatments. To get this information, oncologists order DNA sequencing of tumor samples, but such detailed genomic profiling of cancer tissues is not done routinely nor uniformly across the world due to the cost and time involved in sending samples to specialized DNA sequencing labs. Even in well-resourced regions, the process could take several weeks. It’s a gap that AI could fill, Yu said.

Quickly identifying cellular patterns on an image suggestive of specific genomic aberrations could offer a quick and cost-effective alternative to genomic sequencing, the researchers said.

CHIEF outperformed current AI methods for predicting genomic variations in a tumor by looking at the microscopic slides. This new AI approach successfully identified features associated with several important genes related to cancer growth and suppression, and it predicted key genetic mutations related to how well a tumor might respond to various standard therapies. CHIEF also detected specific DNA patterns related to how well a colon tumor might respond to a form of immunotherapy called immune checkpoint blockade. When looking at whole-tissue images, CHIEF identified mutations in 54 commonly mutated cancer genes with an overall accuracy of more than 70 percent, outperforming the current state-of-the-art AI method for genomic cancer prediction. Its accuracy was greater for specific genes in specific cancer types.

The team also tested CHIEF on its ability to predict mutations linked with response to FDA-approved targeted therapies across 18 genes spanning 15 anatomic sites. CHIEF attained high accuracy in multiple cancer types, including 96 percent in detecting a mutation in a gene called EZH2 common in a blood cancer called diffuse large B-cell lymphoma. It achieved 89 percent for BRAF gene mutation in thyroid cancer, and 91 percent for NTRK1 gene mutation in head and neck cancers.

Predicting patient survival​


CHIEF successfully predicted patient survival based on tumor histopathology images obtained at the time of initial diagnosis. In all cancer types and all patient groups under study, CHIEF distinguished patients with longer-term survival from those with shorter-term survival. CHIEF outperformed other models by 8 percent. And in patients with more advanced cancers, CHIEF outperformed other AI models by 10 percent. In all, CHIEF’s ability to predict high versus low death risk was tested and confirmed across patient samples from 17 different institutions.
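The article says CHIEF distinguishes longer-term from shorter-term survivors but does not show how such a claim is typically evaluated. As a purely illustrative sketch (synthetic data, hypothetical names, not CHIEF's actual evaluation protocol), one common pattern is to rank patients by a model's predicted risk score, split at the median, and compare observed survival in the two groups:

```python
import numpy as np

# Synthetic example only: higher "risk" scores are constructed to shrink
# survival times, mimicking a prognostic model whose output tracks outcome.
rng = np.random.default_rng(1)
risk = rng.random(100)                               # predicted risk per patient
months = 60 * np.exp(-risk) * rng.random(100) + 6    # synthetic survival times

cut = np.median(risk)                                # median-split stratification
low, high = months[risk <= cut], months[risk > cut]  # low- vs high-risk groups
print(f"median survival  low-risk: {np.median(low):.1f} mo, "
      f"high-risk: {np.median(high):.1f} mo")
```

Published studies then test whether the two survival curves differ significantly (for example with a log-rank test), which is beyond this toy sketch.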

Extracting novel insights about tumor behavior​


The model identified tell-tale patterns on images related to tumor aggressiveness and patient survival. To visualize these areas of interest, CHIEF generated heat maps on an image. When human pathologists analyzed these AI-derived hot spots, they saw intriguing signals reflecting interactions between cancer cells and surrounding tissues. One such feature was the presence of greater numbers of immune cells in areas of the tumor in longer-term survivors, compared with shorter-term survivors. That finding, Yu noted, makes sense because a greater presence of immune cells may indicate the immune system has been activated to attack the tumor.

When looking at the tumors of shorter-term survivors, CHIEF identified regions of interest marked by the abnormal size ratios between various cell components, more atypical features on the nuclei of cells, weak connections between cells, and less presence of connective tissue in the area surrounding the tumor. These tumors also had a greater presence of dying cells around them. For example, in breast tumors, CHIEF pinpointed as an area of interest the presence of necrosis — or cell death — inside the tissues. On the flip side, breast cancers with higher survival rates were more likely to have preserved cellular architecture resembling healthy tissues. The visual features and zones of interest related to survival varied by cancer type, the team noted.

Next steps​


The researchers said they plan to refine CHIEF’s performance and augment its capabilities by:

  • Conducting additional training on images of tissues from rare diseases and non-cancerous conditions
  • Including samples from pre-malignant tissues before cells become fully cancerous
  • Exposing the model to more molecular data to enhance its ability to identify cancers with different levels of aggressiveness
  • Training the model to also predict the benefits and adverse effects of novel cancer treatments in addition to standard treatments

Reference: “A pathology foundation model for cancer diagnosis and prognosis prediction” by Xiyue Wang, Junhan Zhao, Eliana Marostica, Wei Yuan, Jietian Jin, Jiayu Zhang, Ruijiang Li, Hongping Tang, Kanran Wang, Yu Li, Fang Wang, Yulong Peng, Junyou Zhu, Jing Zhang, Christopher R. Jackson, Jun Zhang, Deborah Dillon, Nancy U. Lin, Lynette Sholl, Thomas Denize, David Meredith, Keith L. Ligon, Sabina Signoretti, Shuji Ogino, Jeffrey A. Golden, MacLean P. Nasrallah, Xiao Han, Sen Yang and Kun-Hsing Yu, 4 September 2024, Nature.
DOI: 10.1038/s41586-024-07894-z

The work was in part supported by the National Institute of General Medical Sciences grant R35GM142879, the Department of Defense Peer Reviewed Cancer Research Program Career Development Award HT9425-23-1-0523, the Google Research Scholar Award, the Harvard Medical School Dean’s Innovation Award, and the Blavatnik Center for Computational Biomedicine Award.

Yu is an inventor of U.S. patent 16/179,101 assigned to Harvard University and served as a consultant for Takeda, Curatio DL, and the Postgraduate Institute for Medicine. Jun Zhang and Han were employees of Tencent AI Lab.
 

The_Truth

Superstar
Supporter
Joined
Aug 17, 2014
Messages
7,590
Reputation
1,379
Daps
27,357
On the bright side, this could put an end to racial bias in the healthcare field.
 