AI Prediction Has a Blind Spot
An ancient causal framework exposes what pattern matching misses
I was working on a time series forecasting assignment in my Machine Learning class when a weird thought hit me. We were building models to predict future values from historical sequences. Stock prices, sensor data, demand curves. The whole point was: given enough past, can you guess what comes next?
And I’ve always been curious about this question beyond code. Growing up around Buddhist teachings, I’d heard predictions that stuck with me. Guru Rinpoche (Padmasambhava, 8th century) predicted that “when the iron bird flies and horses run on wheels,” the dharma would spread globally. This was said in the 8th century. No concept of aircraft. No concept of automobiles. Now, I’ll be honest. Part of me thinks: if you predict enough things in poetic language, some will land. And once planes exist, of course ideas spread faster. But the other part of me can’t shake the fact that he described the mechanism before the mechanism existed. That’s not pattern matching. That’s something else.
Then there’s the Buddha’s interpretation of King Pasenadi’s 16 dreams (Mahasupina Jataka, Jataka 77). Droughts from moral decline. Inexperienced people governing. Social structures breaking down as values erode. Again, I go back and forth. Are these genuinely predictive or just descriptions of cycles that always repeat? I don’t have a clean answer. But the structure of these claims is what interests me. They’re not saying “this will happen.” They’re saying “if these conditions persist, this follows.”
Sitting in that class, though, it started to make sense to me, at least a little: both systems are trying to do the same thing, even if the motivations couldn't be more different. As a software engineer who comes from a Buddhist background, I can't stop seeing the structural overlaps.
AI predicts through pattern recognition. You feed a model historical data and it finds statistical regularities. It doesn’t understand anything. It just maps probabilities onto futures based on pasts. I’ve built these systems. They’re impressive until they’re not.
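What I mean by "maps probabilities onto futures based on pasts" can be sketched in a few lines. This is a toy one-lag autoregression of my own, not any production forecaster; the function names are mine:

```python
def fit_ar1(series):
    """Learn the pattern x[t] ~= a * x[t-1] + b from history alone."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

def forecast(last_value, steps, a, b):
    """Roll the learned pattern forward; no notion of *why* it held."""
    preds = []
    x = last_value
    for _ in range(steps):
        x = a * x + b
        preds.append(x)
    return preds

# A steadily rising history: the model learns "up and to the right"
# and will keep predicting it, right up until the regime changes.
history = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
a, b = fit_ar1(history)
print(forecast(history[-1], 3, a, b))
```

The model has no representation of whatever conditions produced the trend, which is exactly why it breaks the moment those conditions change.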
The Buddhist framework does something I find way more interesting as an engineer. Dependent origination doesn’t say “this pattern will repeat.” It says “when these conditions are present, this outcome arises. Remove the conditions, the outcome changes.” That’s not forecasting from data. That’s reasoning from causes. As someone who debugs systems for a living, this feels closer to root cause analysis than prediction.
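The difference is easy to make concrete. Here's a deliberately tiny sketch of that "conditions present, outcome arises" logic; the condition names are illustrative, not a canonical list:

```python
def arises(present_conditions, required_conditions):
    """Dependent origination as an engineer might write it:
    the outcome arises while its conditions are present,
    and ceases when any of them is removed."""
    return required_conditions <= present_conditions

# Illustrative condition names, not drawn from any canonical source.
required = {"craving", "contact", "ignorance"}
present = {"craving", "contact", "ignorance", "noise"}

print(arises(present, required))   # True: conditions present, outcome arises
present.discard("craving")         # remove one condition...
print(arises(present, required))   # False: ...and the outcome ceases
```

No training data anywhere in that function. The reasoning runs from conditions to outcome, not from past outcomes to future ones.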
And here’s the thing that got me. AI prediction breaks when the future stops looking like the past. Every crash, every black swan, every paradigm shift. The models fail because the patterns changed. Could reinforcement learning fix this by continuously adapting to new domain-specific data? Maybe partially. But even adaptive models are still chasing patterns, not understanding causes. We’re watching it play out right now with the AI bubble. We wrote about that in Every Bubble Believes It’s Different.
I think causal reasoning doesn’t break the same way. If you understand that greed concentrates wealth and concentrated wealth destabilises communities, you can see what’s coming without a training dataset. And this is what shifts my read of the Buddha’s predictions. Someone who understood the laws of cause and condition with that level of directness, who mapped the mechanics of how minds and systems actually work, wasn’t guessing about the future. He was reading conditions the way a physicist reads equations.
The predictions aren’t prophecy. They’re conditional statements. Not simple if-X-then-Y, because dozens of factors can influence the outcome. But the core logic holds: when a critical mass of conditions aligns, certain results become near-inevitable. Any engineer who’s debugged a cascading failure understands that logic.
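That cascading-failure logic is itself easy to sketch. Here's a toy model, entirely my own construction: each node carries a load against a shared capacity, and a failed node's load spills onto the surviving nodes. Whether one overload stays local or takes everything down depends on how close the other conditions already are to threshold:

```python
def cascade(loads, capacity):
    """Toy cascading failure: a failed node's load is redistributed
    across the survivors, which may push them over capacity too."""
    failed = set()
    changed = True
    while changed:
        changed = False
        for i, load in enumerate(loads):
            if i in failed or load <= capacity:
                continue
            failed.add(i)
            survivors = [j for j in range(len(loads)) if j not in failed]
            for j in survivors:
                loads[j] += load / max(len(survivors), 1)
            changed = True
    return failed

# Same trigger (node 0 overloads), different background conditions.
print(cascade([1.2, 0.5, 0.5, 0.5], 1.0))  # stays local: {0}
print(cascade([1.2, 0.9, 0.9, 0.9], 1.0))  # critical mass: everything fails
```

The trigger is identical in both runs; what decides the outcome is how many other conditions were already near their limits. That's the "critical mass" intuition in executable form.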
Both systems have the same credibility problem. Both need you to verify the output yourself. The Buddhist tradition is explicit about this: don’t take it on faith. Practice greed and watch what happens. Practice generosity and watch what happens. AI doesn’t have that feedback loop. It gives you probabilities and walks away.
I’ll be doing a deep dive on this for a Tuesday issue. There’s a fascinating rabbit hole around causal inference in modern ML and how it maps onto dependent origination that I want to get into properly. I don’t think there’ll be a strict conclusion. Honestly, I’m not sure there should be.
But here’s my honest take: both AI and contemplative traditions are better at predicting process than events. Neither tells you exactly what happens on Tuesday. Both tell you that when enough conditions converge, certain kinds of outcomes become hard to avoid.
The difference is how they get there. AI needs historical data to extrapolate forward. Contemplative training needs direct investigation of what’s actually happening right now. Guru Rinpoche didn’t have data on aircraft or automobiles. He observed how desire for speed and connection operated in the human mind, understood the conditions driving it, and described where those conditions would inevitably lead.
And only one of them accounts for the fact that the observer can change the conditions.
The future isn’t something you predict. It’s something you participate in.
If you have a take on this, I’d love to hear it. Where do you think AI prediction ends and genuine foresight begins?
Glossary
Dependent origination — Skt: pratītyasamutpāda / Pali: paṭiccasamuppāda. The principle that all phenomena arise from specific conditions and cease when those conditions change.
Jataka — Pali: jātaka. A collection of stories about the Buddha’s past lives, part of the Khuddaka Nikaya in the Pali Canon.


