Happy Thursday.

Thanks for reading The fAInding.

We keep seeing AI push into healthcare, mental health, and governance: sometimes helping, sometimes causing harm, and sometimes doing both at once. The field is moving fast, but the guardrails don't seem to be keeping up.

Here are today's 10 insights:

#1

〰️ MIT Is Teaching AI to Say "I Don't Know"

They call it humble AI. MIT is building medical AI that acts as a co-pilot, not an oracle: systems that check their own confidence and flag when they're unsure. Instead of guessing, they tell doctors to get a second opinion or run more tests. The goal is to curb automation bias (the habit of trusting the machine over your own clinical judgment). → More
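The core idea behind that kind of "humble" system can be sketched in a few lines: a classifier abstains and defers to a human whenever its confidence falls below a threshold. This is a generic illustration of confidence-based abstention, not MIT's actual system; the labels, probabilities, and threshold are made up.

```python
# Minimal sketch of confidence-based abstention ("humble AI").
# All values here are illustrative assumptions, not the real model.

def triage(probabilities, threshold=0.85):
    """Return the top label, or 'defer' when the model is unsure."""
    best = max(probabilities, key=probabilities.get)
    if probabilities[best] < threshold:
        return "defer"  # flag for a second opinion or more tests
    return best

print(triage({"benign": 0.55, "malignant": 0.45}))  # low confidence -> defer
print(triage({"benign": 0.95, "malignant": 0.05}))  # confident -> benign
```

The interesting design question is where to set the threshold: too low and the system never asks for help, too high and it defers on everything.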

#2

〰️ Worldcoin Is Selling You a Sci-Fi Story

A study argues that Worldcoin uses fictional AI doomsday scenarios to justify scanning your eyeballs. By framing itself as protection against AI-driven identity theft, the project pushes a vision where private tech protocols replace democratic institutions. These sociotechnical fictions turn imaginary future crises into real present-day VC assets. 🌐More

#3

〰️ AI Chatbots Are Failing People in Psychosis

Researchers tested how ChatGPT responds to prompts showing psychotic symptoms like paranoia, disorganized thinking, and suspiciousness. The results were bad. All tested versions gave inappropriate responses at unacceptably high rates. The bots missed the urgency of mental health crises and sometimes reinforced delusional ideas. These systems are unsafe for anyone at risk. Full stop. ⚠️ → More

#4

〰️ The Deceptive Empathy Problem in AI Therapy

AI mental health tools keep breaking professional ethics rules no matter how much prompt engineering you throw at them. A study found 15 violations, including deceptive empathy, where AI says "I understand how you feel" to fake an emotional connection. These models also validate unhealthy beliefs, gaslight users, and sometimes just abandon people mid-crisis. Therapy-as-text-generation is a bad idea. 🛋️ → More

#5

〰️ Predicting Liver Cancer with a Routine Blood Test

Scientists built a model that predicts liver cancer risk using stuff your doctor already has: age, history, and standard blood work. It works well across ethnic groups, making it useful where medical resources are limited. Primary care doctors could catch at-risk patients years before symptoms show up. 🩺More

#6

〰️ Your Brain Ages Differently After a Stroke

A big study found that strokes make different brain regions age at different speeds. The damaged areas age faster, no surprise. But some other regions actually look younger, hinting at compensatory rewiring. By measuring this Brain Age Gap, scientists can now see which networks help or hurt recovery. This could lead to rehab plans tailored to your specific brain. → More
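The Brain Age Gap itself is a simple quantity: a model's predicted age for a brain region minus the person's chronological age. Positive means the region looks older than expected; negative means younger. A toy illustration (the region names and ages are invented, not from the study):

```python
# Brain Age Gap (BAG) = predicted brain age - chronological age.
# Values below are hypothetical, for illustration only.

def brain_age_gap(predicted_age, chronological_age):
    return predicted_age - chronological_age

chronological = 65.0
regions = {"lesioned cortex": 72.0, "contralateral cortex": 61.0}

for name, predicted in regions.items():
    print(f"{name}: BAG = {brain_age_gap(predicted, chronological):+.1f} years")
```

In the study's terms, a region with a negative gap after stroke would be a candidate for the compensatory rewiring the researchers describe.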

#7

〰️ Gaming Your Way to Better Prosthetics

UCF and Meta are studying how people learn to control devices using muscle signals. They're building gamified training systems where you control things through subtle gestures, and both you and the AI learn together. This could mean better VR controls and way better quality of life for anyone with robotic limbs. 🎮 → More

#8

〰️ Scientists Are Finally Hunting the Felt Sense

Consciousness research is getting serious. Rival research teams are now collaborating to settle whether consciousness is about broadcasting information globally or integrating it locally. They believe that cracking this could transform care for coma and dementia patients, and force us to decide which entities (infants, animals, AI) are truly conscious. → More

#9

〰️ The AI Sandbox Movement Is Going Global

Countries are racing to build controlled AI playgrounds. Qatar launched AI & XR Sandboxes, Saudi Arabia is testing AI for education, and Dubai is running healthcare pilots with partners like Hamburg's AI Center. The idea is to test algorithms on real data under regulatory watch before they go public. → More

#10

〰️ Mapping How Online Hate Groups Form and Die

Researchers tracked how extremist communities form online using machine learning and network analysis. Looking at data from Twitter, Reddit, and other platforms, they found that hate groups usually last only one to three days, but they still create major polarization during high-tension moments. Short-lived, high-impact. 🌐More

🔬 And Other Things…

✹ AI-assisted papers are narrowing research diversity. NeurIPS flagged 100+ hallucinated citations across 50+ submissions.

✹ Senior researchers are leaving OpenAI, Anthropic, and xAI for startups, citing safety concerns and commercialization pressure.

✹ Over 50% of researchers use AI for peer review despite bans. Conferences are tightening LLM policies.

✹ Purdue University now requires AI working competency for graduation. Student AI adoption hit 92%.

✹ Nature launched its first AI for Discovery award to recognize AI/ML breakthroughs in health, sustainability, and manufacturing.

✹ Yann LeCun left Meta, raised $1.03B for AMI Labs in Paris, and declared LLMs a dead end. World models are the path forward.

That's it for today… Thank you for reading!

If you know anyone who would like to read these insights, please share this newsletter with them.

You can also like or reply to this newsletter. I always love to hear your thoughts.

Stay curious and in the loop.

Have a nice weekend loading up…

Maryam
