Last Monday, a nurse suggested we try a wireless monitor to track my vitals and those of my unborn baby.

“We call this device ‘Monica, the monitor’! It’s either a dream to work with or a total nightmare,” the nurse told me.

On that day, “Monica” (actually the Novii Wireless Patch System) performed exceptionally well. I was able to move freely, without the encumbrance of wires, while giving birth to my daughter. This technology harnesses passive signal acquisition to differentiate between fetal and maternal heart signals and to detect uterine contractions. Data is wirelessly transmitted to a monitoring unit for real-time observation. This system enhances accuracy and reduces false alarms, offering much-needed mobility during labor.

I thought: writing and theorizing about technologies is one thing, but experiencing their remarkable capabilities firsthand is quite another, especially when a device functions flawlessly. A question arose: What can foundation models add to wearables? Right after my experience with “Monica,” a recent paper from Google Research and MIT researchers caught my attention. Titled ‘Health-LLM: Large Language Models for Health Prediction via Wearable Sensor Data,’ and authored by Kim et al., this paper delves into the application of LLMs in the health sector, focusing on interpreting data from wearable sensors for health prediction. Intriguingly, these models are fed data not from medical records or doctor’s notes, but from wearable devices like Fitbits, which track daily steps, heart rate, sleep patterns, and more — akin to ‘Monica.’

The research evaluated eight cutting-edge LLMs: Med-Alpaca, PMC-Llama, Asclepius, ClinicalCamel, Flan-T5, Palmyra-Med, GPT-3.5, and GPT-4, across six public health datasets. They conducted experiments on thirteen health prediction tasks related to mental health, activity, metabolism, sleep, and cardiac assessments.

The team experimented with various methods, including zero-shot and few-shot prompting (teaching the model with minimal or no examples), instructional fine-tuning (tailoring the model to specific tasks), and even some parameter-efficient fine-tuning for computational efficiency.

Particularly fascinating is the effectiveness of context enhancement in prompts, which involves adding user context, health knowledge, and temporal information. This approach yielded up to a 23.8% improvement in performance.
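To make the idea concrete, here is a minimal sketch of what context enhancement might look like in practice: a prompt that layers user context, health knowledge, and temporal information on top of raw wearable readings before asking the model a prediction question. The function name, template wording, and sample values are illustrative assumptions, not the authors' actual prompt templates.

```python
# Hypothetical sketch of context-enhanced prompting for wearable data.
# All names and wording here are illustrative, not from the Health-LLM paper.

def build_health_prompt(user_context, health_knowledge, temporal_info, readings, question):
    """Assemble a context-enhanced prompt for an LLM health-prediction task."""
    lines = [
        f"User context: {user_context}",
        f"Health knowledge: {health_knowledge}",
        f"Temporal information: {temporal_info}",
        "Wearable sensor readings:",
    ]
    # One bullet per sensor reading, so the model sees structured data.
    lines += [f"- {name}: {value}" for name, value in readings.items()]
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_health_prompt(
    user_context="34-year-old female, lightly active",
    health_knowledge="Resting heart rate typically falls between 60 and 100 bpm.",
    temporal_info="Averages over the past 7 days",
    readings={"steps/day": 6200, "resting heart rate (bpm)": 58, "sleep": "6.4 h/night"},
    question="Estimate the user's sleep quality on a 1-5 scale.",
)
print(prompt)
```

The point of the paper's finding is that each added layer of context (who the user is, what healthy ranges look like, what time window the data covers) gives the model more to anchor its prediction on than the raw numbers alone.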

Healthcare is an exceedingly sensitive field, but the potential benefits of generative AI for humans are immense, especially with the power of foundation models. Health-LLM explores the future where wearables are not just passive trackers but proactive health guardians.

Another recent groundbreaking paper in healthcare comes from Stanford and Stability AI researchers, titled ‘CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation.’ The most fascinating aspect of this paper is the development of CheXagent, an advanced foundation model specifically designed for interpreting chest X-rays. This model uniquely combines a clinical LLM, a specialized vision encoder, and a vision-language bridging network, demonstrating exceptional performance in interpreting complex medical images. Its ability to outperform existing models in accuracy and fairness evaluations marks a significant advancement in medical imaging AI technology. It can save so much time! And possibly lives.
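For readers curious what a “vision-language bridging network” does, here is a toy sketch of the general pattern such models follow: a learned projection maps the vision encoder's patch embeddings into the LLM's token-embedding space, so image patches can be fed to the language model as “soft tokens.” The dimensions and the single linear layer below are assumptions for illustration, not CheXagent's actual architecture.

```python
import numpy as np

# Toy sketch of vision-language bridging (dimensions are illustrative,
# not CheXagent's real configuration).
rng = np.random.default_rng(0)
vision_dim, llm_dim, num_patches = 768, 4096, 196

# Patch embeddings would come from the vision encoder run on a chest X-ray.
patch_embeddings = rng.standard_normal((num_patches, vision_dim))

# The bridge: a projection into the LLM's embedding space (trained in practice).
W_bridge = rng.standard_normal((vision_dim, llm_dim)) * 0.02
soft_tokens = patch_embeddings @ W_bridge

# These soft tokens are prepended to the text-token embeddings of the
# clinical LLM, which then generates the X-ray interpretation.
print(soft_tokens.shape)
```

The design choice worth noting: the LLM itself need not be retrained to “see” — only the bridge (and sometimes the encoder) is tuned to translate pixels into the language model's native representation.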

(The newborn girl — Reason Leeloo Joy — sends her regards. We took a week off last week but are now back on track, exploring the AI world to understand how she and her four brothers will live in it and navigate it.)

News from The Usual Suspects ©

Sam Altman and OpenAI

Blackstone steps in

Elon Musk, xAI and Tesla

Google and Hugging Face

The freshest research papers, categorized for your convenience

Model Compression and Efficiency

LLM Capabilities and Evaluation

Multimodal and Specialized Models

AI Training and Data Generation Techniques

Language Models and Role-Playing

In other newsletters