DEV Community

TheBitForge

Are we still engineers… or just really good prompt writers now❓

Last Tuesday I fixed a bug in about four minutes.

Not a small bug either. It was one of those authentication edge cases that only show up when a user has a certain combination of OAuth scopes and a session that's technically valid but partially expired. The kind of thing that in a previous life would have cost me two hours, a lot of console.log statements, and at least one long stare out the window.
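For illustration only — the post never shows the actual code, so every name here is invented — the shape of that bug might look something like this: a session counts as "valid" because its refresh token is still alive, even though the access token backing the scope check has already expired.

```python
from dataclasses import dataclass

@dataclass
class Session:
    scopes: set[str]           # OAuth scopes granted at login
    access_expires_at: float   # access-token expiry (epoch seconds)
    refresh_expires_at: float  # refresh-token expiry (epoch seconds)

def is_authorized(session: Session, required_scope: str, now: float) -> bool:
    """A 'partially expired' session has a live refresh token but a dead
    access token. The buggy version treats refresh validity as session
    validity; the fix is to require a live access token AND the scope."""
    has_scope = required_scope in session.scopes
    access_alive = now < session.access_expires_at
    return has_scope and access_alive  # not: now < session.refresh_expires_at

# Refresh token alive until t=10000, access token dead after t=100.
s = Session(scopes={"repo:read"}, access_expires_at=100.0,
            refresh_expires_at=10_000.0)
print(is_authorized(s, "repo:read", now=50.0))   # True: access token live
print(is_authorized(s, "repo:read", now=500.0))  # False: must refresh first
```

The sketch compresses a real class of bug: two expiry clocks, and an authorization check that consults the wrong one.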

I described it to an AI assistant, pasted some context, and got back a fix that actually worked. I tested it, shipped it, and moved on.

Then I sat back and thought: what just happened?

Not in a bad way. Not in a grateful way either. Just... genuinely, what did I do there? Did I solve that problem? Or did I facilitate someone else solving it?


The way it used to feel

When I started writing code seriously, there was a certain texture to the work that I don't think I fully appreciated until it started changing.

You'd read through something — a library's source, a language spec, a Stack Overflow thread from 2013 — and slowly build up a mental model. You'd get it wrong the first time, sometimes the second. You'd trace execution line by line. Eventually, something would click.

That clicking feeling was the job. That was engineering.

It wasn't fast. But it left a residue. Every hard thing you figured out made the next hard thing slightly easier, because you were slowly building a map of how systems actually worked.

And yeah, most of us also Googled everything. We copied from Stack Overflow constantly. We used frameworks that abstracted away enormous complexity. Nobody pretended otherwise.

But there was still a gap you had to cross on your own. The search result gave you a direction. You still had to understand why.


What's different now

With AI, the gap is smaller. Sometimes it disappears entirely.

That's not a complaint. It's just true, and worth sitting with.

I can describe a problem in plain English and receive something close to a solution. I can paste error messages I don't fully understand and get explanations that are usually correct. I can ask for a working implementation of something I've never built before and get scaffolding that would have taken me a week to research from scratch.

This is genuinely useful. There's no version of this story where I pretend it isn't.

But it changes something about the feedback loop that used to build understanding. When the gap closes too easily, you don't always notice what you didn't learn.


The question I keep dodging

Here's the uncomfortable one: do I fully understand the code I just shipped?

Sometimes yes. Often, mostly. But not always entirely.

And the follow-up, which is worse: would I be able to build this without AI?

Some of it, sure. Parts of it, probably not as quickly. But some of it — if I'm honest — I'm not completely certain. There are pieces I've been trusting more than verifying lately.

That's a new feeling. And I'm not sure it's fine.


But let's be fair to ourselves

Every generation of developers has had this conversation about something.

When IDEs started autocompleting, someone worried we'd forget how to type. When ORMs arrived, someone worried we'd stop understanding SQL. When frameworks abstracted routing and state management, someone worried we'd stop understanding the web itself.

Some of those concerns turned out to be overblown. Some of them turned out to be partially right.

The truth is usually in the middle: tools do raise the floor, and they sometimes lower the ceiling for people who stop at the tool. Both things can be true.

So maybe the question isn't "are we losing skills" but "which skills, and does it matter?"


What I think engineering actually is

When I try to strip it down to what I value most in the developers I respect, it's not typing speed or memorization of syntax. It's something harder to name.

It's the ability to look at a system and understand its shape — where things can go wrong, why certain trade-offs were made, what will break under pressure.

It's being able to debug when the AI's suggestion doesn't work. That moment still requires something real. You have to know enough to recognize when you're being led the wrong direction, to ask the right follow-up question, to look past the plausible answer to the actual problem.

It's taking responsibility. Not just deploying working code, but understanding what you deployed. Knowing what it does to the data. Knowing what happens if it fails.

That part hasn't been automated. Not really.


Prompting as a skill

Here's a frame that I find genuinely useful sometimes: prompting is just another interface.

We don't think of someone as less of an engineer because they're good at reading documentation. We don't think of someone as less of an engineer because they can quickly find what they need on GitHub or in a package registry. Those are skills too. Knowing where to look, how to evaluate what you find, when to trust it and when to dig deeper.

Prompting well is similar. It requires knowing enough about the problem to describe it precisely. It requires evaluating the output critically. It requires understanding when the answer is subtly wrong.

The people who get the most out of AI tools right now are, in my observation, people who already knew a lot. They use it to go faster, not to avoid understanding. The foundation still matters.


Where the concern is real

But I do think there's a legitimate risk that we're not talking about clearly enough.

If you're newer to this, and AI fills every gap before you have a chance to cross it yourself — you might end up with the outputs of understanding without the understanding itself. Code that works. A mental model that doesn't.

And that's fine until something breaks in a way the AI can't explain. Until you have to debug something novel. Until you have to make a decision without a scaffold.

The skills you don't build in the first years are hard to build later. Not impossible. But harder.

That worries me a little. Not for senior developers who've already built the map. For the people starting now, who might be building on shaky ground without knowing it.


What I'm trying to do about it

I don't have a clean answer. But I've been trying a few things.

When I use AI to solve something, I try to make sure I can explain the solution after. Not just deploy it — actually trace through it and understand what it's doing and why.

When it gives me something that works but I don't fully understand, I mark that as debt. Not technical debt. Understanding debt. And I try to pay it down before it compounds.

I still write some things from scratch when it would be faster not to. Not always. But sometimes, deliberately, to keep the manual memory alive.

I don't know if this is the right balance. I'm figuring it out in real time, same as everyone else.


The title question, honestly answered

Are we still engineers?

Yes. But the word is doing more work than it used to.

We're engineers who direct more than we construct. Who review more than we author. Who evaluate more than we discover.

Whether that's better or worse depends on what you value. It's faster. It's more productive in measurable ways. It might also be producing a generation of developers who are fluent with tools but not fluent with fundamentals — and that gap doesn't always show up until it really matters.

The craft is changing. That's not new. Every new tool changes the craft.

But I think the honest version of this conversation — the one worth having — isn't "AI good or bad." It's about what we're intentionally keeping, and what we're letting go of without noticing.


I'm genuinely curious where others land on this.

Not the theoretical version of the question — the personal one. Think about the last thing you shipped that had significant AI involvement. If that thing broke in production tonight, in a way you hadn't seen before, in a place you hadn't thought to look — would you know where to start?

Top comments (5)

Pierre Gradot

I've been asking myself a lot of questions like this since the beginning of the year. I truly felt deprived of the part of my job that I like the most: writing code, making it work, creating something. And I've already said what is in fact important: "the part of my job".

Writing code isn't exactly my job.

Solving problems is my job. The expected output is functional software (most of the time, because sometimes my job is making people realize that they don't have a problem, or that a simple documentation explaining a procedure is enough). Coding is just a step in software development.

Being an engineer is about solving problems using the most appropriate tools. And as of March 2026, AI still needs engineers to write prompts 😅

Kernel Tech

We’re still engineers—just operating at a higher abstraction level. Prompting alone isn’t engineering; real value now comes from designing systems, evaluation loops, and AI workflows. As LLMs improve, the focus shifts from “writing better prompts” to orchestrating behavior, integrating tools, and ensuring reliability at scale.

Nova Elvaris

The "understanding debt" framing really resonates. I've been tracking something similar in my own workflow — I keep a checklist after every AI-assisted session where I ask myself "could I explain this fix to a junior dev without referencing the AI output?" If the answer is no, I block 15 minutes to trace through it manually before moving on. It's a small tax, but it's the difference between shipping code and owning code.

What I've noticed is that the risk isn't evenly distributed across task types. For debugging (where you already have a mental model of the system and just need help finding the needle), AI accelerates genuine understanding. But for greenfield architecture decisions, I've caught myself accepting AI suggestions that I wouldn't have designed myself — not because they're wrong, but because I never built the reasoning that led to them. That's where the debt compounds silently.

To your closing question: I'd know where to start, but only because I still insist on drawing the system boundaries myself and letting AI fill in the implementation details. The moment I start delegating the "shape" of the system, I think that's when prompt-writer territory begins. Do you find that distinction holds in your experience too?

Botánica Andina

The 'clicking feeling' is so real, and AI definitely shifts its origin. I find myself getting to the solution faster, but sometimes miss the deep dive that used to cement understanding. It's like we're becoming architects of solutions more than bricklayers, which is a wild transition to navigate.

Apex Stack

The concept of "understanding debt" is something I've never seen articulated before, and it's the most useful framing in this whole discussion. Technical debt has tooling, metrics, and processes to manage it. Understanding debt is invisible until something breaks at 2 AM and you realize you don't actually know how your own system works.

To answer your closing question directly: I run about 10 autonomous agents that manage a large multilingual data site. If something broke tonight, I'd know exactly where to start — not because I wrote every line, but because I designed the architecture, the validation layers, and the failure modes. The agents handle execution, but the system boundaries are mine.

That's the distinction I keep coming back to. The engineers who are "just prompt writers" are the ones who delegated the architecture along with the implementation. The ones who are still engineering are delegating implementation while retaining ownership of the system's shape — what connects to what, what fails gracefully, what fails catastrophically.

Your point about newer developers is the part that worries me too. The gap between "code that works" and "a mental model that works" is getting wider, and AI makes it dangerously easy to ship the former without building the latter.