Recently, AI got scary good at medicine and math, governments finally admitted the safety problem, and we learned why everyone suddenly hates AI's energy use.
Today's highlights:
🧠 AI can now predict dementia risk from routine brain scans using a tenth of the data traditional models need.
⚠️ 30+ countries just confirmed: AI capabilities are racing ahead of safety measures.
🤔 New research shows AI learning to think flexibly, like humans switching strategies mid-problem.
🌍 Data reveals exactly when and why public trust in green AI collapsed.
🧘 Psychologists get first-ever official ethics guide for using AI with patients.
BRAIN SCAN AND DEMENTIA
Your next brain scan might predict your dementia risk

The research: BrainIAC: A Foundation Model for Brain MRI Analysis | A foundation model for brain MRI that learns general representations from unlabeled data and adapts to diverse medical tasks, particularly where training data is limited.
What they did: Pretrained one AI model on 48,965 unlabeled brain scans, then adapted it to seven different medical tasks: brain age estimation, dementia prediction (via mild cognitive impairment classification), tumor mutation detection, cancer survival forecasting, sequence classification, time-to-stroke prediction, and tumor segmentation.
What they found:
Outperforms specialized AI models designed for single tasks
Works with just 10% of normal training data in multiple applications
At low data levels, consistently beats traditional models across all seven tasks
For sequence classification with only 500 training scans (10% of data), achieved 90.8% accuracy vs. 74-86% for other models
For MCI (mild cognitive impairment) classification at 10% data, achieved 70% accuracy compared to 52-56% for other approaches
And so what: Your next routine MRI could flag dementia risk, cancer mutations, or aging patterns without needing a specialist or expensive extra tests. This is huge for small hospitals or countries without deep-pocketed healthcare systems. One scan, seven insights, accessible anywhere. If you've got aging parents or health anxiety, this may be the AI application that actually matters.
Numbers that count: With just 500 training scans for sequence classification, BrainIAC achieved 90.8% accuracy while traditional models needed 5,000+ scans to reach similar performance; a 10x data-efficiency gain.
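That 10x figure is just the ratio of the two scan counts quoted above; if you want to see it as a few lines of Python (a back-of-envelope check, using the 5,000 lower bound of the "5,000+" figure):

```python
# Sanity-check the data-efficiency claim using the counts quoted above:
# BrainIAC hit 90.8% accuracy with 500 scans; traditional models
# needed 5,000+ scans to get to similar performance.
brainiac_scans = 500
traditional_scans = 5_000  # lower bound of the "5,000+" figure

gain = traditional_scans / brainiac_scans
print(f"Data-efficiency gain: {gain:.0f}x")  # prints "Data-efficiency gain: 10x"
```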
Takeaway: Medical AI can become practical for hospitals that aren't Stanford or Mayo Clinic.
🔗 Read it here
SAFETY GAP
The official reality check: AI capabilities vs. safety (spoiler: we're losing)
The research: International AI Safety Report 2026 | Yoshua Bengio + 100 experts, 30+ countries | An assessment of the current state of general-purpose AI capabilities, the risks that come with them, and approaches to managing those risks through technical, institutional, and societal measures.
What they did: Assembled the biggest international expert panel ever to answer one question: Where is AI actually at, and are we prepared for what's coming?
What they found:
AI now solves International Math Olympiad gold-medal problems and exceeds PhD-level performance on science exams.
The gap between what AI can do and our ability to manage it is widening fast.
Major capability jumps in 2025: autonomous operation, advanced reasoning, complex multi-step planning.
And so what: This is backed by the EU, UN, OECD, and 30 governments. It's as close to the official truth as you'll get. If you're confused about whether AI is overhyped or actually dangerous, this cuts through the noise. Plus, it'll shape the regulations that determine what AI tools you can actually use at work, what gets banned, and who's liable when things go wrong.
Number that counts: Leading models now answer 80%+ of graduate-level science questions correctly, up from barely passing two years ago.
Takeaway: The safety gap is real and widening, and for the first time, 30+ countries officially agree: we need to catch up, fast.
🔗 Read more here
ADAPTIVE REASONING
AI is learning to think like you do (switching strategies mid-problem)

The research: Fluid Representations in Reasoning Models | How reasoning AI models change their internal understanding of problems while working through them step-by-step.
What they did: Watched how QwQ-32B (an AI model trained to think out loud) solves puzzles in which all the action words are scrambled (e.g., "pick up" becomes "attack"), tracking how the model's internal representations of these fake words change as it reasons through 15,000-20,000 words of thinking.
What they found:
The AI gradually figures out what the scrambled words really mean, even though they're nonsense.
When the same action is called different fake names (attack vs. illuminate vs. whisper), the AI develops the same internal understanding for all of them.
This learning actually helps: when researchers injected the AI's improved understanding back into earlier steps, accuracy jumped by 2-10%.
The AI can work with pure abstract concepts, not just the specific words it sees.
Why you care: Ever get annoyed when AI can't adapt mid-conversation? This shows that advanced AI doesn't just follow a script but actually updates its understanding as it thinks, like when you realize your first approach won't work and switch strategies. This means AI that's better at debugging code, planning trips, diagnosing problems. Basically, anything without a single right answer.
Numbers that count: When researchers helped the AI use its improved understanding earlier, accuracy increased by 2-10%, depending on the task.
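If you like seeing ideas in code: here's a toy Python sketch of the core idea, a nonsense word's representation drifting toward the concept it really denotes. The vectors and the update rule are invented for illustration; this is not the paper's actual method or model.

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

random.seed(0)
concept = [random.gauss(0, 1) for _ in range(8)]  # stand-in for the true meaning ("pick up")
vec = [random.gauss(0, 1) for _ in range(8)]      # initial representation of the fake word ("attack")

sims = []
for step in range(10):  # each "reasoning step" nudges the representation toward the concept
    vec = [v + 0.3 * (c - v) for v, c in zip(vec, concept)]
    sims.append(cosine(vec, concept))

# Similarity to the true concept climbs toward 1 as "reasoning" proceeds
print(f"start: {sims[0]:.2f}, end: {sims[-1]:.2f}")
```

The point of the toy: the model never sees the word "pick up", yet its vector for "attack" ends up pointing the same way, which is roughly what the researchers observed in the real model's hidden states.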
Takeaway: Reasoning models are shifting from following scripts to thinking on their feet.
🔗 Read more here
ENVIRONMENTAL RECKONING
Why did people suddenly stop trusting AI to save the environment?

The research: From Promise to Concern: Public Perceptions of AI in ESG Frameworks Over Time | How public sentiment toward AI has evolved across the three ESG dimensions (environmental, social, and governance) over 25 years of news media discourse, revealing that AI's perceived legitimacy varies significantly across domains rather than being uniformly positive or negative.
What they did: Analyzed 33,628 news articles from 2000 to 2025 to track how public sentiment about AI shifted across Environmental, Social, and Governance (ESG) dimensions over 25 years.
What they found:
Environmental sentiment about AI nosedived after 2022, when generative AI's energy use became public knowledge
Governance sentiment stayed positive (AI for monitoring, compliance, transparency)
Social sentiment fluctuated wildly based on job automation fears and fairness concerns
Why you care: If you felt guilty firing up ChatGPT for the 50th time today, you're not alone, and the data proves it. The study pinpoints exactly when and why the narrative flipped from AI will optimize energy grids to AI is melting glaciers. For anyone working in sustainability, tech policy, or just trying to use AI responsibly, this shows the honest reckoning happening right now. You can't ignore the energy cost anymore, and neither can the industry.
The number that matters: Environmental sentiment reversed sharply after 2022, coinciding with ChatGPT's launch and reports that training large models uses as much energy as 120 homes annually.
Takeaway: AI went from climate hero to energy villain because the receipts finally showed up.
🔗 Read more here
ETHICAL GUARDRAILS
The ethical playbook for AI in therapy (before someone gets hurt)

The research: Ethical Guidance for AI in the Professional Practice of Health Service Psychology | Guidance for health service psychologists on responsibly integrating AI tools into clinical practice while addressing patient well-being, trust, and professional ethical responsibilities.
What they did: APA assembled mental health tech experts to create the first comprehensive ethical framework for psychologists using AI, covering session transcription, treatment planning, diagnostic tools, and bias risks.
What they found:
AI can improve clinical decisions, expand access to care, and cut administrative burdens
Critical risks: algorithmic bias, data breaches, lack of transparency, and informed consent gaps
Human oversight is non-negotiable. AI assists, never replaces, professional judgment
Why you care: Therapy chatbots and AI scribes are already here. Without rules, they could misdiagnose, leak private health data, or reinforce biases against marginalized groups. This guidance protects both patients and therapists by setting clear boundaries. If you've ever been to therapy or know someone who has, this is the difference between a helpful tool and a dangerous experiment. Plus, these principles (transparency, bias awareness, data protection) apply to AI in education, HR, finance, and anywhere trust matters.
The number that matters: 1 in 10 psychologists already use AI monthly for clinical work, but most lack formal training in AI ethics.
Takeaway: AI in therapy can help, but only if we build guardrails before people get hurt.
🔗 Read more here
REFLECTION
Notice the pattern?
Every paper this week has a but attached:
BrainIAC can save lives. But where's the regulation?
AI reasoning is getting flexible. But the safety gap is widening.
We have ethical guidelines. But practitioners are already using the tech without training.
We know about the energy problem. But usage keeps exploding.
We're in the messy middle. Breakthrough capabilities, legitimate concerns, institutions playing catch-up.
The important question: how do we build the good stuff while managing the real costs?
THE PARADOX
Facts and contradictions
AI hit new capability peaks (math olympiads, medical predictions) while simultaneously triggering a global safety alarm and an environmental reckoning. And institutions are finally writing the rulebooks.
Here's what's actually happening:
The tech is sprinting ahead. Governments and professional bodies are scrambling to catch up. Public trust is fracturing along clear lines: impressed by capabilities, terrified of energy costs, desperate for ethical guardrails.
Three contradictions we can't ignore:
AI can save lives (BrainIAC) while consuming energy at a planetary scale.
We're building more flexible, “human-like reasoning” while widening the safety gap.
Professional ethics frameworks are landing just as 1 in 10 practitioners already use the tech.
This is AI's awkward adolescence: wildly capable, ethically messy, environmentally costly, and desperately needing adult supervision.
YOUR MOVE
What you can try this week
🧠 Health: At your next doctor visit, ask: Do you use AI analysis for imaging? If not, ask why. Patient demand drives adoption.
📊 Safety: Bookmark the International AI Safety Report. Next time someone says AI is just hype or will kill us all, send them the executive summary (4 pages, cuts through both extremes).
💭 Reasoning: Test fluid thinking: Ask ChatGPT or Claude to think step-by-step and try multiple approaches on a real problem you're stuck on. See the difference.
🌍 Environment: Count how many times you use ChatGPT or Claude today. Each question uses about as much electricity as running an LED lightbulb for 2 minutes. Add it up. Was it worth it?
🧘 Ethics: If you're in healthcare, education, or HR, read the APA guidance, even if you're not a psychologist. It's a template for responsible AI integration anywhere trust is critical.
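For the 🌍 Environment item above, here's the "add it up" step as a few lines of Python. The 10 W LED wattage is my assumption to make the newsletter's comparison concrete; real per-query energy varies widely by model, hardware, and provider.

```python
# Back-of-envelope daily tally using the LED-bulb comparison.
# ASSUMPTION: a typical LED bulb draws about 10 W; actual per-query
# figures vary widely by model and provider.
LED_WATTS = 10
MINUTES_PER_QUERY = 2
queries_today = 50  # swap in your own count

wh_per_query = LED_WATTS * MINUTES_PER_QUERY / 60  # watt-hours per query
total_wh = wh_per_query * queries_today
print(f"~{total_wh:.0f} Wh today ({total_wh / 1000:.3f} kWh)")  # prints "~17 Wh today (0.017 kWh)"
```

Even at 50 queries, the daily total is small in absolute terms; the story is the aggregate across hundreds of millions of users, plus the training runs.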
CONVERSATION POCKET BOOK
Join discussions confidently
Impressive facts you can add to conversations (without having to read the whole paper). Perfect for when asked what's happening with AI. 😉
🧠 On medical AI: AI can now predict mild cognitive impairment (early dementia warning signs) from routine brain scans using only 10% of the training data traditional models need. This makes early detection possible for smaller hospitals.
🎓 On AI capabilities: AI solved International Math Olympiad problems at a gold-medal level in 2025. The same problems that stump 99.9% of humans.
⚡ On environmental impact: Public trust in AI's environmental promise collapsed after 2022, when people learned that training one large model uses as much electricity as 120 homes consume in a year.
🧘 On therapy and AI: 1 in 10 psychologists already use AI monthly, but most lack formal training in AI ethics. Which is why the American Psychological Association just published the first comprehensive ethical framework.
🔒 On the safety gap: AI capabilities jumped dramatically in 2025 (e.g., autonomous operation), but safety measures are lagging, according to a report backed by 30+ countries.
WHAT DID YOU THINK?
Which of these five surprised you most?
Are you more hopeful or worried after reading?
Hit reply and tell me. I read every single response.
If this landed in your spam folder, please mark it as Not Spam and drag it to your primary inbox. Helps me reach you reliably.
Thanks for reading, and until next time,
Maryam
