Happy Thursday!
This week's research lands in uncomfortable places. AI is reading cancer biopsies better than oncologists. It's making us doubt our own thinking. It's persuading us even when we know it's a machine. And somehow, underneath all that, it's also fighting Alzheimer's and designing smarter molecules. Some of it is progress. Some of it is a mirror held up to systems that were already broken. Often, it's complicated.
Here are today's 10 insights:
#1
〰️ AI Reads Cancer Reports Better Than Doctors
Northwestern Medicine tested six AI models on cancer pathology reports. They consistently outperformed physicians at capturing the molecular and genetic findings that drive treatment decisions. Reports now span dozens of pages. One missed detail can change a care plan. The team is building an app so clinicians can upload reports and get AI summaries on demand. 🩻 → More
#2
〰️ Leaning on AI Is Quietly Shrinking Our Confidence
A study of nearly 2,000 adults found that AI use didn't destroy cognitive ability. But it did erode people's confidence in their own reasoning and their sense of ownership over ideas. 58% said AI did most of the thinking. The researchers' advice: stay in control. Refine your prompts, review the output, and solve problems yourself before handing them off. Train it. Don't let it train you. 🧠 → More
#3
〰️ AI Arguments Persuade You Even When Labeled
Labeling content as AI-generated doesn't make people trust it less. A new peer-reviewed study found the label barely registers if the argument is good. Transparency isn't protection: disclosure alone won't blunt what AI-generated persuasion can do at scale. 🪄 → More
#4
〰️ The Science We Need Is Stuck in a 17th-Century System
AI is generating discoveries faster than academic publishing can share them. A new JMIR report calls this the predigital bottleneck. Research exists but can't be used. It's locked in static papers and paywalls, with publishing fees of $5,000 to $11,000 per article. The report argues that the future unit of science needs to link data, methods, and peer review in one living object. 📦 → More
#5
〰️ Fact-Checking Is Fading. AI Misinformation Isn't.
Platforms are pulling back on human fact-checkers. Can AI fill the gap? The answer is complicated. A simple accuracy prompt shown to 33 million users did reduce misinformation sharing. But AI bots still miss deepfakes and fumble on breaking news. The most reliable tool might just be friction. Get people to pause before they share. ⏸️ → More
#6
〰️ Machine Learning Cracks a Genetic Mystery Decades in the Making
Hypermobile Ehlers-Danlos Syndrome affects up to 3% of people worldwide. Its genetic basis has been a mystery for decades. Using machine learning for the first time on this condition, Boston University researchers found it's not a single-gene disorder. It involves variants across three distinct biological systems. This reframes the entire disease. 🧬 → More
#7
〰️ Neurosymbolic AI Is Cutting Energy Use by 100x
Tufts University researchers combined neural networks with symbolic reasoning. The result slashes AI energy use by up to 100 times while improving accuracy. For context: AI and data centers used roughly 415 terawatt hours globally in 2024. That number keeps climbing. A 100x efficiency gain changes what's physically possible. ⚡ → More
#8
〰️ Psychiatric AI Is Scaling Bias, Not Fixing It
AI tools that predict violent incidents in mental health settings are flagging Black and Middle Eastern patients as high-risk at disproportionate rates. Don't blame the algorithm; blame the training data. Clinical notes are already full of human bias, and AI scales that prejudice rather than introducing it. Without a redesign of the data and the context, these tools are eroding trust in the communities that need care most. 🏥 → More
#9
〰️ Supercomputers Have a New Rival
Processors modeled on the human brain can now solve equations behind physics simulations. That used to require energy-hungry supercomputers. This points toward low-power hardware for climate modeling, drug simulation, and materials science at a fraction of the cost. Brain-like computers doing supercomputer work is no longer theoretical. 🖥️ → More
#10
〰️ AI Is Making 3D Brain Models Physical
Screen simulations can't fully capture what happens when physical forces hit living brain tissue. A University of Missouri team is closing that gap with 3D-printed artificial brain models. Physical objects that absorb mechanical forces and electromagnetic waves in ways no simulation can. The goal is better research and a deeper understanding of traumatic brain injury. 🧠 → More
And Other Things…
✹ A Nebraska attorney was suspended after 57 of 63 citations in a court brief were defective. Twenty were AI hallucinations: fake cases, fabricated quotes, statutes that don't exist. U.S. courts imposed at least $145,000 in sanctions for AI citation errors in Q1 2026 alone.
✹ Germany's DFG is funding 15 new Emmy Noether AI research groups, covering political bias in chatbots and the math behind deep learning. The goal is to reverse the brain drain pulling researchers abroad.
✹ Google's TurboQuant algorithm cuts the memory bottleneck in large AI models using a two-step compression method. It could push AI development from raw scale toward efficiency first.
✹ Novo Nordisk partnered with OpenAI to run AI across its entire pipeline, from drug discovery to manufacturing. The CEO says the goal is to supercharge scientists, not replace them.
That's it for today… Thank you for reading!
If you know anyone who would like to read these insights, please share this newsletter with them.
Stay curious and in the loop.
