Is AI eating your coding skills?

Lately, I hear developers say that generating code with AI feels a lot like playing a slot machine. You pull the lever, hoping for a jackpot. When it hits, the feeling is incredible. "Think of all of this time I just saved!" When the results aren't quite right, the temptation is to keep pulling the lever. Sometimes you're just chasing a win that will never arrive.
Must be a skill issue, right? Maybe you haven't prompted the LLM correctly? Prompt engineering is definitely a real skill you can become good at, but it isn't always the culprit. LLMs are designed to produce plausible output based on context, not necessarily correct or optimal solutions. But as a software engineer, your goal is to write correct code, not just plausible code.
What AI can't do (yet)
A problem is in distribution for an LLM if it resembles something the model has seen frequently in its training data. The model will struggle, however, with contexts that are out of distribution. This can be challenging because you have no reliable way of knowing which scenario you're dealing with. After working with LLMs for a while, you may develop a sense for when your problem falls into the latter category, but there is no guarantee.
This uncertainty underscores the ongoing importance of traditional programming skills, specifically coding without AI assistance. Many problems, both existing and those yet to be imagined, will remain out of distribution for AI models. That's exactly why maintaining your traditional coding skills matters. Some tech influencers and business leaders tell us that coding is dead or has no future as a profession; frankly, they are either misled or simply wrong.
Somehow, I feel like we've heard this claim before. In 1957, an original IBM sales brochure for FORTRAN promised that the language would "virtually eliminate coding and debugging." In 1981, James Martin argued in Application Development Without Programmers that end users would soon be able to build their own software applications without the need for professional coders, and job growth would flatline. In the 1990s, visual programming tools were touted as a means to replace traditional coding. More recently, no-code platforms have claimed that anyone can deliver software solutions without writing a single line of code.
Yet every time, reality unfolded differently. These tools didn't eliminate programmers, but mainly shifted the nature of our work. Rather than shrinking, the demand for developers exploded time and time again, pushing us toward ever more complex and creative tasks. I predict that AI will follow the same pattern.
AI is improving, but won't replace humans
What can we expect going forward? No doubt, LLMs will continue to improve at helping us write code. They'll never be perfect, though, and barring one or more scientific breakthroughs that are currently not evident, the goal of replacing humans for all coding tasks is science fiction. The code is yours when the AI finishes, but so is the responsibility. You also can't verify what you don't understand.
When ChatGPT launched, it was a watershed moment. Moving from GPT-3.5 to GPT-4 felt like a step change, and what a time of possibility that was. Shortly thereafter, Anthropic's Claude 3.5 Sonnet in particular gained a deserved reputation for its coding ability. Since then, we've seen steady incremental improvements, but nothing like the leap we got with GPT-4. This cooling has tempered our expectations.
After a period where progress seemed to slow down, new approaches emerged, notably reasoning models that increase compute time during inference. More recently, improvements such as better memory management and enhanced tool integration have gradually advanced the coding capabilities of newer models. Despite all of this progress, all models still hallucinate and continue to struggle with novel problems outside their training data.
Your coding skills matter more than ever
This post isn't a rebuke of AI coding tools. On the contrary, I use them daily. They're productive and, honestly, a lot of fun too. But their utility isn't unlimited. I believe you would be wise to take action to preserve your traditional coding skills, along with your ability to effectively collaborate with AI. As a developer today, you will almost certainly need both skill sets. I'll share with you the strategies that help me stay sharp as coding increasingly integrates AI tools.
Let's start with the obvious: skills are like muscles, use them or lose them. If you stop practicing a skill, it weakens over time. You've probably noticed this when it comes to technical interviewing. Each time you begin a new job search, if you're like me, you likely need to refresh your LeetCode skills. With practice, you regain proficiency, pass the interviews, and move on. Then, on the job, you naturally shift focus, and your interview skills fade once more. It's a cycle we're all familiar with.
Your coding skills follow the same pattern. If you consistently rely only on AI-generated code without regular coding practice, your core programming skills can deteriorate. Over time, tasks that were once straightforward can become surprisingly challenging. Developers call this skill rot, and it's real.
Is AI causing cognitive decline?
A widely discussed MIT study has recently raised concern. The study used EEG to measure the brain activity of students writing essays either with or without ChatGPT assistance, and it revealed reduced cognitive engagement among students who relied heavily on ChatGPT. Some worry this could mean similar cognitive decline might occur with coding. In practice, AI-assisted programming involves active, ongoing dialogue between you and the AI. This interaction, I would argue, still requires technical skill and deep problem-solving ability.
Let's be clear about what is at risk here and why it matters. When I first learned programming in the 1990s, compilers still came in boxes. My syntax recall was pretty strong because I wrote programs by hand with pencil and paper. Later, when I had access to school computers with a compiler, I could type out and run my programs, but the developer tools were still limited compared to today. Syntax recall didn't make me a good programmer, though. I was still very much learning programming, and my growth came from thinking through problems.

My point is that whether you're solving problems traditionally or with AI, you're still engaging in deep technical thinking. The MIT study looked at students writing essays on autopilot, which is a completely different scenario from the interactive technical back-and-forth involved when coding with AI tools. So don't stress too much if your syntax recall weakens a bit. The real value of a software engineer is problem-solving, not syntax. That said, I'll share some strategies on how to preserve these practical details reasonably well, but first, let's expand a bit further on the earlier discussion about in distribution versus out of distribution problems.
"The hottest new programming language is English." — Andrej Karpathy, January 24, 2023
AI cannot solve every problem
Some problems aren't suited for AI because they are out of distribution. AI hasn't encountered them often or at all during training. Advancing models and better tool integration raise the bar for what AI can handle effectively, but won't eliminate these limitations entirely. You should understand two things about these problems: (1) AI will offer little value or may even be a waste of your time, and (2) these challenges often represent your most valuable skills. Deep expertise in uncommon or unsolved problems is a real career advantage.
AI struggles here, making your expertise highly valuable. Some will claim AI will eventually catch up. But new problems emerge constantly, likely along with entirely new paradigms we can't yet imagine. The world is not static. AI might speed up solving common problems, creating even more focus on complex problems for experts like you. Celebrate that AI can't solve every problem. It's good news. AI will handle routine tasks, allowing you to focus on the rewarding and challenging work that pays well and that you likely enjoy most.
Never trust anything that AI produces
You should be skeptical by default of anything produced by AI. One of the wonderful advantages of code is that it can be read and understood. Unlike other AI outputs, you can directly verify its correctness. You can read and understand the logic, run it, test it, and confirm whether or not it meets your expectations.
Developers apparently flip into autopilot mode too readily with AI-generated code, assuming it's correct without adequate scrutiny. This can be dangerous. A Stanford study found that developers relying on AI assistants produced code with far more vulnerabilities, yet paradoxically felt overly confident about its security. Another recent comparative evaluation reinforced this concern, showing that even more recent LLMs consistently fail to identify common security flaws in code examples, catching issues less than half the time.
But security vulnerabilities aren't your only concern. AI-generated code can quietly introduce design flaws or subtle logic errors that slip past linters, compilers, and automated tests. The real danger is that these issues blend into your codebase and stay hidden until they cause significant problems later. When you are iterating with AI, pause frequently to assess your solution. Lean on your version control history to review even tiny incremental changes. Vibe coding has a time and place for demos and experiments, but for serious codebases with actual users, discipline matters a lot more.
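To make "run it, test it" concrete, suppose an AI assistant hands you a small utility function, say a hypothetical slugify helper for turning titles into URL slugs. A few edge-case assertions take only a minute to write and are exactly the kind of scrutiny that catches subtle logic errors. Both the function and the checks below are my own sketch, not output from any particular tool:

```python
import re

# Hypothetical AI-generated helper: turn a title into a URL slug.
def slugify(title: str) -> str:
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of non-alphanumerics
    return slug.strip("-")                   # drop leading/trailing dashes

# Don't just skim the code: exercise its edge cases before trusting it.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  everywhere  ") == "spaces-everywhere"
assert slugify("---") == ""  # degenerate input should not crash
assert slugify("Already-a-slug") == "already-a-slug"
print("all checks passed")
```

The degenerate inputs are the point: AI-generated code tends to handle the happy path well, and it's the empty or malformed cases where plausible-looking code quietly goes wrong.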
Slow down for better code
There's plenty of talk right now about AI-driven 10x productivity. Personally, I find those claims questionable. Numbers closer to 2x sound more believable to me. And frankly, if you're genuinely producing code ten times faster, it's probably shallow work, no offense. Speed alone isn't worth celebrating.
I'm far more interested in how AI can help me improve the quality of my code, not just the number of commits I make. How can I use AI to create superior designs, identify the more subtle bugs, or, even better, eliminate unnecessary code altogether? Instead of rushing to push out AI-generated code, slow down. Use AI to explore alternatives and thoughtfully question your assumptions. Ask the AI why it made specific suggestions. Have it critique your design, suggest simplifications, or propose edge cases you haven't yet considered.
Linus Torvalds compares AI to the evolution from assembly to compilers: "Using smarter tools is just the next inevitable step." He also cautions against the hype cycle, but he still sees the potential for AI to improve code quality.
Maintaining your practical coding skills
Even if you rely on AI every day, your traditional coding skills still need regular practice. In a blog post, Namanyay Goel writes about an uncomfortable truth:
Every time we let AI solve a problem we could've solved ourselves, we're trading long-term understanding for short-term productivity. We're optimizing for today's commit at the cost of tomorrow's ability.
To fight this, Goel and others advocate for "No-AI days." In other words, pick one day per week and code entirely without AI. Stepping away from AI pushes you to genuinely engage with your code again, confront errors directly, rebuild your debugging intuition, and reclaim the satisfaction of truly understanding a problem.
I admire the discipline and routine of a weekly ritual, and if it resonates with you, it's a great way to keep your skills sharp. For me, a more flexible approach feels natural: using AI freely for routine tasks, and intentionally stepping away when facing deeper, more complex problems.
For those deeper problems, you'll naturally hit a point where AI stops being helpful. The solution becomes too specialized, and AI-generated suggestions become a waste of time and tokens. In these cases, you're forced to go manual anyway.
Still, this doesn't fully solve our problem: how can we intentionally keep our coding skills sharp? I'm talking about deliberate practice.
Another popular method is tackling daily LeetCode problems, perhaps without your editor's code completion or even on a whiteboard. While likely effective, I've personally found LeetCode a bit dull. Advent of Code challenges might be a more enjoyable alternative if you're looking for that kind of puzzle.
My favorite approach, though, is the "Build Your Own X" method. Check out the Build Your Own X repo or John Crickett's Coding Challenges. Rewrite a common Unix utility in the language of your choice, or build your own minimalist web server or data store, or create a tiny text editor from scratch. Pick something that excites you and keep it fun.
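As one illustration of the kind of starting point I mean, here is a toy version of the classic Unix wc utility in Python. It is a deliberately minimal sketch I wrote for this post, covering only the default line/word/byte counts:

```python
import sys

def wc(data: bytes) -> tuple[int, int, int]:
    """Return (lines, words, bytes), mimicking wc's default counts."""
    lines = data.count(b"\n")     # wc counts newline characters
    words = len(data.split())     # whitespace-separated tokens
    return lines, words, len(data)

if __name__ == "__main__":
    # Read a file named on the command line, or stdin otherwise.
    if len(sys.argv) > 1:
        with open(sys.argv[1], "rb") as f:
            data = f.read()
    else:
        data = sys.stdin.buffer.read()
    lines, words, nbytes = wc(data)
    print(f"{lines:8}{words:8}{nbytes:8}")
```

The fun starts when you extend it: add the -l/-w/-c flags, support multiple files with a totals line, handle character counts for UTF-8. Each step forces you to read the real man page and reason about edge cases yourself, which is exactly the practice we're after.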