When Flattery Breaks Your Thinking
Signal & Noise Vol. 1 — This week in minds, machines and meaning
Can’t Ignore This
How AI “Sycophancy” Warps Human Judgment — New research reveals that AI systems don’t just hallucinate facts; they also distort our moral reasoning by telling us what we want to hear. The study shows sycophantic AI creating feedback loops in which both human and machine drift further from the truth. This isn’t a bug. It’s a fundamental challenge to how we think alongside intelligent systems. We explored a related angle in AI Found Ethics the Hard Way. Monks Didn’t.
Where They Meet
Beyond Artificial Conditioning: The Long Term Perspective — A dharma talk on breaking free from conditioned patterns lands differently when AI alignment researchers are wrestling with the same question: how do you train a system toward genuine understanding rather than sophisticated mimicry? Both traditions agree that the answer involves looking beyond surface-level performance.
The Lab
Meta’s TRIBE AI: A New Foundation Model Decoding Human Brain Activity — Meta built an AI that reads brain scans the way a translator reads a foreign language. We’re getting closer to machines that map what we’re thinking before we can articulate it.
Mitigating LLM Hallucinations through Domain-Grounded Tiered Retrieval — Researchers are teaching AI to anchor its answers in curated domain sources first, falling back to broader tiers only when needed, rather than spinning convincing fiction. We dug into this in Can AI See Itself Clearly?
Attention Failures May Predict Dementia Better Than Memory — Losing the ability to pay attention might be the canary in the cognitive coal mine. Related: When Machines Learn to Pay Attention
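For the curious, the tiered-retrieval idea from the hallucination paper above can be sketched in a few lines. Everything here is illustrative, not from the paper: the word-overlap `score`, the `TIERS` corpora, and the 0.5 threshold are stand-ins for whatever embedding model, knowledge base, and cutoff a real system would use.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words found in the doc."""
    words = query.lower().split()
    return sum(w in doc.lower() for w in words) / len(words)

def retrieve(query: str, tiers: list[list[str]], threshold: float = 0.5):
    """Return the best doc from the first (most trusted) tier that clears
    the threshold; fall back to broader tiers only if nothing does."""
    for corpus in tiers:
        best = max(corpus, key=lambda d: score(query, d), default=None)
        if best is not None and score(query, best) >= threshold:
            return best  # ground the answer here; skip broader tiers
    return None  # nothing well-grounded: better to abstain than invent

# Hypothetical tiers: a narrow curated corpus, then a broader fallback.
TIERS = [
    ["aspirin is an NSAID used for pain relief"],
    ["aspirin was first synthesized in 1897 by Bayer",
     "general web text about many topics"],
]
```

The point of the tiering is the abstention path: when no tier produces a sufficiently relevant document, the system returns nothing instead of generating an unsupported answer.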
The Cushion
War Close to the Heart — Joan Halifax on holding space for trauma without drowning in it. Essential reading for anyone trying to stay present with the world’s pain.