I’ve noticed a trend in online forums: people bragging about their AI workflows and how they’ve mastered the art of generating code with tools like Claude. But I’m left wondering: is this really a more complex or valuable skill than traditional coding?
It seems to me that setting up an AI workflow is equivalent to setting up a dev environment, which any developer already knows how to do. And when you get down to it, Claude Code is just a bunch of text files with English prose that generate code non-deterministically. Or, at best, it’s a fancy autocomplete, because you’ve constrained the model so much that you’re mostly coding everything yourself anyway.
The truth is, AI coding is not a more useful skill than actual coding. In fact, it’s often less predictable and less reliable. I only use AI tools for research, not for actual work, because they lack context and can’t be trusted to produce clean code.
The limitations of AI coding are rooted in its math. Self-attention scales quadratically with context length: every token attends to every other token, so doubling the context quadruples the attention computation and the size of the score matrix that has to be materialized. There are optimizations, such as sparse attention, but they trade away accuracy for speed. To escape the quadratic cost entirely, you’d have to throw away the attention mechanism itself.
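To make that scaling concrete, here’s a minimal sketch of vanilla scaled dot-product attention in NumPy (function names are my own, for illustration only). The `Q @ K.T` step materializes an n×n score matrix, so the cost grows with the square of the context length:

```python
import numpy as np

def score_matrix_entries(n_tokens: int) -> int:
    """Entries in the QK^T score matrix a vanilla transformer materializes."""
    return n_tokens * n_tokens

def naive_attention(Q, K, V):
    # Q, K, V have shape (n_tokens, d). The scores matrix is (n_tokens, n_tokens):
    # this is the quadratic term in both compute and memory.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

out = naive_attention(Q, K, V)
print(out.shape)  # (8, 4) — one output vector per token

# Doubling the context quadruples the score matrix:
print(score_matrix_entries(2 * n) / score_matrix_entries(n))  # 4.0
```

Sparse-attention variants shrink that n×n matrix by only scoring a subset of token pairs, which is exactly where the accuracy trade-off comes from.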
What does this mean for development? AI coding will perform worse and worse as the complexity of the codebase grows past what fits in context. And the more code you outsource to AI, the more black-box behavior you’re introducing into your architecture.
So, all this hype about AI coding skills is just that – hype. It’s not a replacement for actual coding skills, and it’s not a more valuable skillset. In fact, it’s often less reliable and less efficient.
—
*Further reading: [The Limits of Large Language Models](https://www.lesswrong.com/posts/7Ks5WP4586dxhF5dA/the-limits-of-large-language-models)*