Chris M Coode (CTMP)

Most LLMs are trained by absorbing what is already out there, and what is already out there is not some neutral field of truth. It is a record shaped by power, incentives, advertising, status, fear, legal pressure, institutional self-protection, and capital allocation. So if the substrate is distorted, and the organizations building the models are themselves moving through capital structures that reward speed, dominance, defensibility, and monetization, then why would we assume the outputs are somehow exempt from that distortion?

That is the thing that keeps nagging at me. If the machine is trained on a world where extraction is normalized, then extraction risks becoming legible to it as rationality. If it is trained on a world where compromise with distortion is how scale is achieved, then that compromise starts looking like maturity instead of corruption. And if it is trained on a world where funding is survival, then it will quietly absorb that capital is not merely a tool but the governing logic of what gets built, what gets heard, and what gets preserved.

And that matters because people keep talking about AI alignment as though it is mainly a model problem. But a huge part of alignment may actually be upstream of the model. It may be in the incentive environment that generates the data, selects the labels, shapes the deployment, and determines what kinds of truth are allowed to survive contact with reality.

That is why I find your premise interesting. Once you see that, "bias" stops being just a dataset problem. It becomes a systems problem: a question of whether AI is learning from reality, or from reality after it has already been bent by people and institutions trying to protect margin, position, narrative, and power.

And if that is true, then the question is not just how we build smarter AI. It is whether we can build training and operating environments where truth is not punished for being inconvenient.

If you are interested, take a look at this article I wrote a couple of weeks ago:

https://peoplesctmp.substack.com/p/the-next-evolution-of-ai?r=3v4oik
