CompHealth Corner: June
Computational and digital health ideas evolve quickly. Our monthly research roundup highlights the latest publications from our Center faculty and other leaders in the field. Here’s what you should be reading this month!
High-speed 4D fluorescence light field tomography of whole freely moving organisms
Kevin C. Zhou, Clare Cook, Archan Chakraborty, Jennifer Bagwell, Joakim Jönsson, Kyung Chul Lee, Xi Yang, Shiqi Xu, Ramana Balla, Kushal Kolar, Caitlin Lewis, Mark Harfouche, Donald T. Fox, Michel Bagnat, and Roarke Horstmeyer, “High-speed 4D fluorescence light field tomography of whole freely moving organisms,” Optica 12, 674-684 (2025)
Summary
Researchers, including Center member Roarke Horstmeyer, have developed the REFLECT microscope, an advanced light-field microscopy technique that combines single-shot 3D imaging with computational refocusing and deep learning to visualize whole, freely moving organisms in unprecedented detail. The new method achieves high spatial and temporal resolution, enabling researchers to observe fast biological dynamics in real time without scanning or multiple exposures. This innovation could enhance diagnostics and treatment monitoring in biomedical research.
My take
REFLECT opens new possibilities for visualizing biological dynamics at the cellular level in full 3D and in real time—no small feat!
Taking a deep dive with active learning for drug discovery
Zachary Fralish and Daniel Reker. “Taking a deep dive with active learning for drug discovery.” Nature Computational Science (2024): 1-2.
Summary
A new article benchmarked deep active learning strategies for drug discovery, revealing how algorithmic choices and data composition affect success. Researchers, including Center member Daniel Reker, found that active learning—where models iteratively suggest the next best experiment—can accelerate the identification of promising drug candidates. The study emphasizes tailoring models and strategies to the task, offering a roadmap to more efficient, AI-driven drug development.
My take
If you’re working at the intersection of AI and drug discovery, this piece is a must-read. It unpacks how the design of your algorithm and the structure of your data can make or break the performance of deep active learning—a fast-growing approach that helps AI choose what experiment to run next.
Development and Evaluation of Automated Artificial Intelligence–Based Brain Tumor Response Assessment in Patients with Glioblastoma
Jikai Zhang, Dominic LaBella, Dylan Zhang, Jessica L. Houk, Jeffrey D. Rudie, Haotian Zou, Pranav Warman, Maciej A. Mazurowski, and Evan Calabrese. “Development and Evaluation of Automated Artificial Intelligence–Based Brain Tumor Response Assessment in Patients with Glioblastoma.” American Journal of Neuroradiology 46, no. 5 (2025): 990-998.
Summary
Using over 3,400 brain MRIs from 634 patients, the team—including Center member Evan Calabrese—developed and tested a volumetric AI model (AI-VTRA) aligned with the BT-RADS framework, offering a scalable approach to assessing treatment response over time.
The AI showed moderate success in replicating human assessments and was slightly less effective at predicting overall survival outcomes. While not yet a replacement for radiologists, the system holds promise as a decision-support tool, particularly in settings where expert review isn’t feasible. The study underscores the potential of AI to standardize tumor response evaluation and streamline follow-up imaging in neuro-oncology.
My take
By harnessing BT-RADS–aligned volumetric AI across thousands of MRIs, AI-VTRA promises to standardize and accelerate glioblastoma response monitoring—transforming follow-up care even when radiology expertise is scarce.
An evaluation framework for ambient digital scribing tools in clinical applications
Haoyuan Wang, Rui Yang, Mahmoud Alwakeel, Ankit Kayastha, Anand Chowdhury, Joshua M. Biro, Anthony D. Sorrentino, Jessica L. Handley, Sarah Hantzmon, Sophia Bessias, Nicoleta J. Economou-Zavlanos, Armando Bedoya, Monica Agrawal, Raj M. Ratwani, Eric G. Poon, Michael J. Pencina, Kathryn I. Pollak, and Chuan Hong. “An evaluation framework for ambient digital scribing tools in clinical applications.” npj Digital Medicine 8.1 (2025): 1-13.
Summary
The study team, including Center member Monica Agrawal, developed and tested SCRIBE, a comprehensive evaluation framework for ambient digital scribing (ADS) tools that transcribe and summarize patient-clinician conversations using AI. These tools promise to reduce documentation burden and clinician burnout, but robust evaluation is critical for their safe integration into clinical workflows. The framework combines human expert review, automated metrics, large language models as evaluators, and simulation-based testing to assess note quality, accuracy, fairness, and resilience to errors. Testing their in-house ADS tool on 40 prenatal visit recordings, the team found strengths in fluency and clarity but noted challenges in factual accuracy and capturing new medications. The study highlights that while AI scribes show promise, systematic evaluation and governance are essential for ethical and effective clinical deployment.
My take
If you’re focused on reducing clinician documentation burden and ensuring safe AI integration, this new study is a must-read. You’ll gain a clear understanding of key ADS evaluation dimensions—fluency, completeness, and factuality—and an appreciation for the necessity of robust governance to ensure ethical, secure deployment of AI-driven ADS tools.
