9 programming tasks you shouldn’t hand off to AI – and why

It’s over. Programming as a profession is done. Just sign up for a $20-per-month AI vibe coding service and let the AI do all the work. Right?

Even as tech companies like Microsoft are showing coders the door by the thousands, AI cannot and will not be the sole producer of code. In fact, there are many programming tasks for which an AI is simply not suited.

In this article, I'm spotlighting nine programming tasks where you shouldn't use an AI. Stay tuned to the end, because I showcase a bonus 10th reason why you shouldn't always rely on AI for programming.

1. Complex systems and high-level design

Here’s the thing. Generative AI systems are essentially super-smart auto-complete. They can suggest syntax, they can code, and they can act as if they understand concepts. But all of that is based on probabilistic algorithms and a ton of information scraped from the web. Contextual intelligence is not a strength. Just try talking to an AI for a while, and you’ll see it lose the thread.

If you need to produce something that requires substantial understanding of how systems interact, experience to make judgment calls about trade-offs, understanding of what works for your unique needs, and consideration of how everything fits with your goals and constraints, don’t hire an AI.

2. Proprietary codebases and migrations

Large language models are trained on public repositories and (shudder) Stack Overflow. Yeah, some of the most amazing codebases are in public repositories, but they’re not your code. You and your team know your code. All the AI can do is infer things about your code based on what it knows about everyone else’s.

More than likely, if you give an AI your proprietary code and ask it to do big things, it will hand you back many lines of plausible-looking code that just won’t work. I find that using the AI to write smaller snippets of code that I’d otherwise have to look up from public sources can save a huge amount of time. But don’t delegate your unique value-add to a brainy mimeograph machine.
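To make the distinction concrete, here's the sort of lookup-grade snippet I mean. This is a hypothetical illustration (not from the article): the kind of utility function you'd otherwise copy from a public answer, and where an AI assistant genuinely saves time without touching your proprietary logic.

```python
# A classic "I'd have to look this up" snippet: convert a raw byte
# count into a human-readable size string. Boilerplate like this is
# a safe thing to delegate to an AI; your core business logic is not.
def human_size(n: int) -> str:
    size = float(n)
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if size < 1024 or unit == "TB":
            # Whole bytes don't need a decimal point
            return f"{int(size)} {unit}" if unit == "B" else f"{size:.1f} {unit}"
        size /= 1024
```

For example, `human_size(2048)` returns `"2.0 KB"`. The point isn't this particular function; it's that the AI's sweet spot is self-contained, well-trodden code with no dependence on your private codebase.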

3. Innovative new stuff

If you want to create an algorithm that hasn’t been done before — maybe to give your organization a huge competitive advantage — hire a computer scientist. Don’t try to get an AI to be an innovator. AIs can do wonders with making boilerplate look innovative, but if you need real out‑of‑the‑box thinking, don’t use a glorified box with brains.

This applies not only to functional coding, but to design as well. To be fair, AIs can do some wonderful design. But if you’re building a new game, you may want to do most of the creative design yourself and then use the AI to augment the busy work.

Sure, many of us go through life parroting things we heard from other folks or from some wacky podcaster. But there are real humans who are truly creative. That creativity can be a strategic advantage. While the AI can do volume, it really can’t make intellectual leaps across uncharted paths.

4. Critical security programming and auditing

Do not let the fox guard the hen house. Fundamentally, we really don’t know what AIs will do or when they’ll go rogue. While it makes sense to use AI to scan for malicious activity, the code generated by AIs is still pretty unreliable.

CSET (the Center for Security and Emerging Technology) at Georgetown University published a study late last year based on formal testing. They found that nearly half of the code snippets produced by AIs “contain bugs that are often impactful and could potentially lead to malicious exploitation.”

This tracks with my own testing. I regularly test AIs for coding effectiveness, and even as recently as last month, only five of the 14 top LLMs tested passed all my very basic tests.

Seriously, folks. Let AIs help you out. But don’t trust an AI with anything really important. If you’re looking at cryptographic routines, managing authentication, patching zero‑day flaws, or similar coding tasks, let a real human do the work.

5. Code requiring legal or regulatory compliance

There are laws — lots of them — particularly in the healthcare and finance arenas. I’m not a lawyer, so I can’t tell you what they are specifically. But if you’re in an industry governed by regulation or rife with litigation, you probably know.

There is also a case to be made that you can’t be sure that cloud-based LLMs will be secure. Sure, a vendor may say your data isn’t used for training, but is it? If you’re subject to HIPAA or DoD security clearance requirements, you may not be allowed to share your code with a chatty chatbot.

Do you really want to bet your business on code written by Bender from Futurama? Yes, it’s possible you might have humans double‑checking the code. But we humans are fallible and miss things.

Think about human nature. If you think your opponent will come down on you for a human error, you’re probably right. But if you were too lazy to write your own code and handed it off to AIs known to hallucinate, your competition is going to have a field day.

6. Domain-specific business logic

You know how it is when you bring a new hire into the company and it takes them a while to get a handle on what you do and how you do it? Or worse, when you merge two companies and the employees of each are having difficulty grokking the culture and business practices of the other?

Yeah. Asking an AI to write code about your unique business operations is a recipe for failure. Keep in mind that AIs are trained on a lot of public knowledge. Let’s define that for a minute. Public knowledge is any knowledge the public could possibly know. The AIs were trained on all the stuff they could hoover from the Internet, with or without permission.

But the AIs are not trained on your internal business knowledge, trade secrets, practices, folklore, long‑held work‑arounds, yada yada yada. Use the AI for what it’s good at, but don’t try to convince it to do something it doesn’t know how to do. AIs are so people‑pleasing that they’ll try to do it — and maybe never tell you that what you just deployed was fabricated garbage.

7. Low-level systems work and performance optimizations

While it’s possible for an AI to identify areas of code that could use optimization, there are limits. AIs aren’t trained on the very fine details of microarchitectural constraints, nor do they have the experience of coaxing just a skosh more out of every line of code.

A lot of the coding involved in embedded systems programming, kernel development, and performance-critical C and C++ optimization exists in the brains of a few expert coders. Also, keep in mind that AIs confabulate. So what they may insist are performance improvements could well be hidden cycle drains that they simply won’t admit to.

If you need fine craftspersonship, you’ll need a fine craftsperson — in this case, a very experienced coder.

8. Learning exercises and educational assignments

If you use an AI, are you cheating? Yes. No. It depends. Yes, because you may be violating academic standards and cheating yourself out of the critical hands-on learning that makes knowledge stick. No, because AI has proven to be an excellent augmentation for help, especially when TAs aren’t available. And it depends, because this is still a largely unsettled area.

Harvard takes a middle ground with its wonderful CS50 Intro to Computer Science course. It offers the CS50 duck (it’s a long story), an AI specifically trained on the course materials, with system instructions that limit how much information students are given. So the AI is there to help answer legitimate student questions, but not to do their work for them.

If you’re a student or an educator, AI is a boon. But be careful. Don’t cheat, and don’t use it to shortcut work that you really should be doing to make education happen. But consider how it might help augment your studies or help you keep up with students’ demands.

9. Collaboration and people stuff

I’ve found that if I treat the AI chatbot as if it were another human coder at the other end of a Slack conversation, I can get a lot out of that level of “collaboration.” A lot, but not everything.

Both humans and AIs can get stubborn, stupid, and frustrating during a long, unproductive conversation. Humans can usually break out of it and be persuaded to be helpful, at least in professional settings. But once you reach the limit of the AI’s session capacity or knowledge, it just becomes a waste of time.

The best human collaborations are magical. When a team is on fire — working together, bouncing ideas off each other, solving problems, and sharing the workload — amazing things can happen.

AI companies claim workforces made up of agents can duplicate this synergy, but nothing beats working with other folks in a team that’s firing on all cylinders. Not just for productivity (which you get), but also for quality of work life, long-term effectiveness, and, yes, fun.

Don’t get me wrong. Some of my best friends are robots. But some of my other best friends are people with whom I have long, deep, and fulfilling relationships. Besides, I’ve never met an AI that can make Mr. Amontis’ moussaka or Auntie Paula’s apple pie.

Bonus: Don’t use AI for anything you want to own

This one is simple: don’t use AI for anything you indisputably want to own. If you write code that you then release as open source, this may not be much of an issue. But if you write proprietary code whose ownership matters to you, think twice before handing it to an AI.

We asked some attorneys about this back at the dawn of generative AI, and the overall consensus was that copyright depends on creation by human hands. If you want to make sure you never wind up in court trying to protect your right to your own code, don’t write it with an AI. For more background, see the series I published on code and copyrights.

What about you? Have you found yourself leaning too much on AI to write code? Where do you draw the line between convenience and caution? Are there any programming tasks where you’ve found AI genuinely helpful or dangerously misleading? Have you ever had to debug something an AI wrote and wondered if it saved you time or cost you more? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.




Original Source: zdnet
