DEV Community


The Illusion of the Human Touch

NorthernDev on March 23, 2026

We are currently obsessed with playing the AI police. Everywhere you look online, people are pointing fingers at articles and posts, confidently cl...
FrancisTRᴅᴇᴠ (っ◔◡◔)っ • Edited

I understand where you are coming from, especially when you mentioned "We should use the AI that is available for help. It is a tool." I do agree that we should use it as a tool. However, there comes a time when it is "too much".

For me, I don't use AI for my writing because I want to improve my writing skills. I don't mind someone using AI to assist with their writing. However, people play "AI police" out of fear of AI replacing them. I have a friend who is an English major, and his opinion on AI is that it is fine as a tool, but not as a replacement. If it becomes a replacement, it then becomes the norm. People will start using it everywhere, and it just doesn't seem right to be on the internet looking at slop.

Yes, having a post written by AI that resonates with people makes sense, but it comes down to whether that person is "real". If they found out it was made by a robot, it makes sense for them to conclude that they "made a connection with ChatGPT", which I imagine doesn't sit right. I am not saying everyone here is a robot (I hope not), but I am saying that is the conclusion people may reach, and we are afraid to point that out.

If you use AI for writing, whether in full or only to fix grammar, that's fine. It's just how I am feeling right now, and I hope there is some sense of humanity on whatever platform you visit. In my opinion, it should be used as a tool, but what we should be asking ourselves is: "Are we still learning for ourselves, or have we become lazier?"

I hope this makes sense. Great post! :D

NorthernDev

Fair point!
The fear of being replaced is definitely what is fueling this whole AI police trend right now. And I get what you mean about feeling cheated if you realize you just had a deep moment with a server farm instead of a person. Nobody wants that.
Are we getting lazier? Honestly, yes. A lot of people will just use this to pump out cheap slop, and we are already drowning in it. But for others, the tool just handles the tedious typing part so they can actually focus on the thinking. The idea is what matters. The words are just the vehicle.
Glad you liked the post. 🙂

Andreas Müller

I talked to a colleague who works in the IT department of a public library and who uses AI far more heavily than I do, and he says he becomes dumber every day. So I really get your sentiment here. From my own experience coding with agent mode, there is a real danger of getting lazy and just accepting everything the AI spits out. It takes real effort to think through the code, but an interesting thing I'd like to point out is that AI has actually taught me things I didn't know. Sometimes I say "do X", and it does X in a way I haven't seen before, which forces me to research what it did. Simple example: I didn't know the SQL COALESCE function. The AI used it in a query it wrote for me, I saw it and asked, "What does that do?" Then I asked it for documentation about COALESCE, it gave me an accurate link, and through the link I learned what it does.
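For anyone who hasn't run into it: COALESCE returns its first non-NULL argument. A minimal sketch of that kind of query, using Python's built-in sqlite3 (the `users` table and its columns are hypothetical, purely for illustration):

```python
import sqlite3

# In-memory database with a hypothetical users table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, nickname TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("Alice", "Ally"), ("Bob", None)],
)

# COALESCE picks the first non-NULL argument:
# Bob has no nickname, so the query falls back to his name.
rows = conn.execute(
    "SELECT COALESCE(nickname, name) FROM users ORDER BY name"
).fetchall()
print(rows)  # [('Ally',), ('Bob',)]
```

Here `COALESCE(nickname, name)` supplies a default directly in SQL instead of cleaning up NULLs afterwards in application code.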

If we engage AI in this way, it can actually teach us things that are new to us, which, to be honest, I didn't anticipate when I started with coding agents. The opportunity to learn by engaging with AI is something I find genuinely fascinating, but I guess it's an effort some people don't want to make.

Sylwia Laskowska

Was this post written by AI? xD Haha, just kidding 😄

But I’ve been noticing a similar trend. People who are familiar with AI often feel a bit put off by how much AI-generated content is flooding platforms like LinkedIn (especially in e-commerce). On the other hand… those posts still get hundreds of likes, so clearly it doesn’t bother most people 🤷‍♀️

Personally, I don’t hide the fact that I use LLMs — mainly to polish my English, or sometimes I’ll just write something in Polish and have it translated. The tricky part is making sure it only translates or lightly refines the text, instead of turning my thoughts into full-on GPT-style content 😅

NorthernDev

I am deeply offended, Sylwia. I bled over that keyboard for at least ten whole minutes. 😂

But you nailed it regarding LinkedIn. It is just a sea of robotic enthusiasm right now, yet somehow people still eat it up.

I actually love that you use it to polish your English. Although, I have to admit, now I am going to be over-analyzing every single comment from you, trying to figure out if I am talking to the real Sylwia or her slightly overly dramatic GPT translator. Next time, just leave one authentic Polish swear word in there so I know it is really you. 😉

Sylwia Laskowska

Haha, you should try dropping a casual “kurwa” into a conversation with any Polish person — there’s a good chance you’ll instantly be accepted as a native speaker 😄

NorthernDev

Haha, maybe I need to buy some Adidas clothing as well 😁 then I'm a full native Polish citizen 😉

Sylwia Laskowska

Absolutely 😄 That’s basically our national outfit 😂

NorthernDev

Haha, true! 😁 It's as Polish as IKEA meatballs are Swedish 😂

Sylwia Laskowska

I love IKEA meatballs!!! 😍

NorthernDev

Haha, that's awesome! 😁 And I have to admit I have an Adidas outfit at home and it's very comfortable 🤣

Benjamin Nguyen

You made a good point! I need to use more AI in my content on LinkedIn :) hehe.

Giorgi Kobaidze

I'm still a true believer in creating content like articles or videos with minimal AI involvement. That said, I'm not against using AI at all; not using it would just be limiting yourself. As long as the core idea is genuinely yours, using AI to polish your writing is completely fair game. If book authors can rely on editors to refine and polish their work before publishing, then the same principle applies to anyone creating content. It's not cheating, it's simply part of the process of making your ideas clearer, more polished, and easier to digest.

I remember a comment on one of my articles where someone asked, "Is this AI-generated?" It honestly caught me off guard. The only way I use AI is to catch sneaky grammar issues or punctuation I might've missed. I don't really get why some people assume creators rely on AI to generate entire pieces. Where's the passion in that? Who's going to feel proud of work done completely by AI? Though if there are people who actually enjoy it, more power to them! Who am I to tell them what's good for them?

That said, I'm also not a huge fan of fully AI-narrated video content. I prefer listening to real people and I think creators should recognize that it's much better for their brand to connect with their audience directly, especially if you have enough language skills to do so.

NorthernDev

That feeling of pouring hours into a piece only to get hit with an "Is this AI?" comment is exactly what triggered me to write this. It is incredibly frustrating.
The editor comparison is spot on. Using a tool to clean up your commas does not strip away your passion or your ownership of the idea. But if someone is just clicking a button to generate an entire article from scratch, they are not a writer, they are just a prompter. There is absolutely zero pride in that.
I am completely with you on the AI voices as well. The tech is getting crazy good, but I still want to know there is an actual person on the other end of the microphone.

Sylwia Laskowska

I have at least one comment like "It's for sure AI-generated" in almost all of my posts 😂

Giorgi Kobaidze

Take it as a compliment. Let them stay salty😄

david duymelinck

LLMs are trained to produce output that is as human as possible. But as human as possible means as middle of the road as possible. Don't offend anyone, don't make mistakes.
There is a reason a lot of celebrities are interchangeable when it comes to appearance: people like symmetry. So it didn't start with AI; it is just psychology and sociology in practice.

I look for genuine information and conversation, but the problem is that bubbles and doubling down have become the main ways to react, because that is how the algorithms on social networks have trained us. Do what everyone else does.

The problem with using AI is that AI companies could claim co-authorship. I see more and more repositories where Claude is a contributor. And in the USA a judge ruled that everything written with AI is public domain. So there you are, just a name attached to content with no rights, and that is a scary direction for the future.
The analogous situation would be all universities claiming all papers as theirs because they provided the knowledge and facilities. It is the people who did the work of putting it all together who get the credit, and the university gets name recognition for giving them the means to do the work.

NorthernDev

That university analogy is absolutely perfect. It is wild that we are legally treating a glorified autocomplete differently than we treat a library or a research grant.
You are dead right about the "middle of the road" thing too. We have basically optimized the entire internet to be as perfectly average and inoffensive as possible.

Do you think creators will actually stop using LLMs if these copyright rulings hold up, or are we already too hooked on the convenience?

david duymelinck • Edited

The future I see right now is that people, once they are aware, are going to run models and tools that don't claim ownership. And the models that do will push people away.

Copyright has been murky from the start, because most well-known LLMs are trained on copyrighted material under claims of fair use.
So in a sense, everything was already public domain from the AI companies' perspective.
Now they have a legal leg to stand on if they deem that content is AI-generated.
How are they going to prove that when the content is average? I guess the people with the most money will win, and today that is the AI companies.

I think we are re-living the early days of aviation, in technology form. Making people fly is one of the greatest achievements in transport, but it took a lot of crashes over a long period to reach the assurance we have now that when we get on a plane, it is unlikely to fall out of the sky.

NorthernDev

That aviation analogy is brilliant. We are absolutely in the era of planes falling out of the sky on a daily basis.
You are completely right about the shift to open models. The second these massive companies start aggressively claiming copyright over their users' work, developers will just pivot to running local LLMs. It is going to be the only way to actually own what you make and avoid the corporate land grab.

Are you already running local models yourself to bypass all of this legal mess?

david duymelinck

I'm switching between local models and models as a service, trying to figure out what works best.

At the moment I'm giving the Mistral models a go, and so far my experience is that they are gaining on their American and Chinese counterparts.
US models take up most of the news, but the models from the rest of the world aren't peanuts anymore.
Not long ago I read that Samsung released a paper about their model.

NorthernDev

Mistral is seriously impressive right now. It is so refreshing to see the US monopoly on the AI narrative finally cracking. We should use the AI that is available for help, but having real options outside the big three giants is a huge relief.

Totally missed that Samsung paper, going to look it up today. Are you running Mistral locally on your own rig or just hitting their API?

Narnaiezzsshaa Truong

AI didn’t erase the human touch.
It erased the illusion that polished writing = human thinking.

And honestly?
That illusion needed to go.

Because the real skill has never been typing.
It’s been:

  • clarity
  • logic
  • insight
  • understanding
  • ownership

AI can polish the words.
It can’t supply the ken behind them.

That’s still on us.

Federico Diotallevi

Polished writing truly represents human thinking to me. We're getting too used to just spitting out random words at AI bots, and the results are right in front of us. We've lost the ability to think while writing, and we're left at a loss for words when speaking.
If you think this isn't true, just listen to politicians today. There are no 'big words' anymore. If the US President needs to introduce a new Secretary to the people, he'll just say that 'he's a great guy'... and? Why is he there?
In my opinion, thinking about every single word while writing is a great exercise for thinking fast while speaking. If you've written down what you want to say beforehand, you'll sound more convincing, more charismatic, smarter. And you probably are smarter, since you racked your brain so many times to find the right words to express what you were thinking.

NorthernDev

Writing is literally how we figure out what we think. If you outsource that entirely to a bot, you lose the ability to actually form a coherent argument. We should use the AI that is available for help, but there is a massive difference between using it to bounce ideas and just being plain lazy.
Are you seeing this lack of critical thinking spill over into real life conversations yet?

Federico Diotallevi

Not in common conversations, I'd say, but I fear for the future tbh. Kids are growing up using these tools now. If I had had them back in middle/high school, they would have screwed up my entire education.

NorthernDev

That is the absolute core of it right there. Polished writing was always a smokescreen for empty thoughts.
Now that anyone can generate a perfectly formatted essay in five seconds, sounding smart has zero value. The only thing left to hide behind is actual insight. Either you have a real thought, or you don't. AI just forced us to stop pretending.
I honestly wish I had included that exact breakdown in the original post.

Syed Ahmer Shah

We’ve reached a weird point where being articulate is now "suspicious." If you have a clear structure and zero typos, people assume a machine did it. It’s an insult to human discipline.

The bit about owning the outcome is the real truth here. It doesn't matter if you used an LLM to brainstorm or fix a comma—if you hit publish, those ideas are yours to defend. If you can't explain the logic behind the "polished" text you just posted, that’s where the real fraud happens.

Efficiency isn't the enemy; laziness is. Great post.

NorthernDev

That is exactly it. We have somehow reached a point where we are actively punishing people for writing well. If you know how to structure a thought and spell correctly, you are suddenly a suspect.

Your line about efficiency not being the enemy, but laziness, sums up the whole debate perfectly. If someone asks you to explain your own post over a coffee and you have no idea what you just published, that is the actual fraud. Not what tool you used to fix your commas.
Have you ever been accused of using AI just because your writing was too clean?

Syed Ahmer Shah

Exactly. I actually got hit with this recently—my cousins saw some of my work and straight up said, 'There’s no way you wrote this, you must have copied someone else.' It’s a weird blow to the ego when your own family thinks high-quality work is so out of character that it must be a fraud. Like you said, we're punishing discipline now.

NorthernDev

That is absolutely brutal. Getting called a fraud by your own family just because you produced something high-quality has to sting. It really is the ultimate backhanded compliment.
We should use the AI that is available for help, absolutely, but it is incredibly depressing that actually putting in the discipline and doing the hard work now just makes you look suspicious to the people who know you best.
Did you even bother trying to convince them, or did you just let them think you hired a ghostwriter?

Christie Cosky

I like your point about how if you can't explain your own post, that's the real problem, because I think we're all seeing something analogous in the programming world: devs who can't explain their code in a code review anymore.

In reality, there are very few new ideas in the world. Everything has been said before. But IMHO it's still worthwhile to express ourselves and put our ideas out there, whether LLMs were used to help the writing process or not.

Marina Eremina

“If the tool is named Claude instead of Karen, it shouldn't suddenly be considered cheating” - that line really made me laugh 🙂 The article is spot on! Generating code boilerplate with AI tools might become a standard way to write software now, but generating text to express an idea clearly and grammatically correctly is sometimes considered a crime 🙂 Really liked the thought that quality matters, not the tool behind it. I'm happy someone finally said it so clearly!

NorthernDev

Glad that line landed. The double standard between code and text is completely wild right now. Developers will happily automate their entire stack without blinking, but the second someone uses an LLM to structure a paragraph, it is suddenly a massive moral failing. It makes zero sense. 😅

Are you seeing this hypocrisy a lot in your own daily work?

Marina Eremina

Actually, I see this a lot on LinkedIn. Especially how people are hunting for em dashes 😅

Maya Bayers

Really sharp take.

The idea that people think they can “feel” AI writing is kind of falling apart in real time—and you explain that contradiction perfectly. Calling polished human writing “AI,” while praising actual AI text as “authentic,” is exactly what’s happening everywhere.

Overall, it’s a solid reminder that the real responsibility is on the creator to understand and stand behind their work, not to prove how “human” the process was.

NorthernDev

Spot on. The whole "I can just tell" argument is completely falling apart right in front of us. We are basically gaslighting ourselves into thinking we have some magical sixth sense for detecting AI. At the end of the day, if you can stand behind the text you publish, the process shouldn't matter at all.

Have you noticed this paranoia getting worse in your own network lately?

Andreas Müller

"Make sure you actually understand it, and make sure you can stand behind whatever you put your name on." That sentence is very beautiful (no matter who wrote it). I work as a senior right now and use AI agent mode daily, but I review everything the AI does, every single line. Because as my colleague said: "You make the commit, it's your code." Just because we create with AI doesn't mean we don't create anymore. If you use it as a writing tool, well who makes the prompt? You do. If you decide to publish an AI-written article as is, who actually clicks on the button publishing the article? You do. Even if you give it full autonomy to publish in your name, who gave it that autonomy? You did.

The point is, you can't wiggle your way out of the responsibility you have for your work. If that work is largely done by AI or not doesn't matter. If you initiate the work in any way, it's your work.

Goes for coding, art, text, music, whatever. AI doesn't absolve you from responsibility.

NorthernDev

Exactly this. Your commit, your code. It is so convenient to blame a tool when things break, but accountability does not just vanish because you got some help with the heavy lifting. No matter how much the machine generated, you are the one hitting publish at the end of the day. It is your work and your responsibility. Honestly refreshing to read a take from someone who actually gets it.

Adnan Hasan

Just finished reading it… kind of unsettling how something can feel so “human” when it’s really just well-designed patterns behind the scenes.
Maybe it says more about us than the technology itself… how easily we lean into the illusion when we need that connection.
Definitely one of those pieces that sticks with you for a while.

NorthernDev

That is the exact thought that kept me up while writing this. It is genuinely unsettling. We are so hardwired to look for a real connection that we will happily project a soul onto a block of text just because it has the right rhythm.

It absolutely says more about our own psychology than the tech itself. We just want to feel understood, even if the thing understanding us is just predicting the next logical word in a sequence.

Have you ever found yourself getting emotionally invested in something you read online, only to realize later it might just be a well-prompted script?

Dorothy J Aubrey • Edited

One of my favorite authors recently released a new book and I found that I have become hyper aware of the use of em dashes -- so much so that I went back to his other books from 4+ years ago to see if he was using them then. He was. The book was great either way, except for me ruining the experience for myself a bit with my overly attuned em dash meter.

However, I will also note that I have a friend who is an exceptional writer -- he writes beautiful short blurbs, usually on Facebook about being a dad, being a husband, being a coach, the absurdities of daily life and more. His writing can make you laugh, can make you cry and can cut you to your very quick. Or at least it could up until about sixteen months ago when he was trying to find an editor for his first book of collected works and I suggested he use ChatGPT. He's since published fourteen books (FOURTEEN!?!) and for the most part, they are completely soulless. I did go back and look at his earlier works and no! He did not previously use em dashes! It's not just that though -- you can feel the lack of soul and there is something that is missing that used to be there even if I can't quite put my finger on it.

I use AI extensively for coding, for writing articles, emails, documentation, etc. I LOVE IT! But I did write this comment entirely on my own, mostly to make sure that I still could, and I think I'll make it a more regular practice from now on.

NorthernDev

That story about your friend actually hurts to read. Going from writing things that make people cry to pumping out fourteen soulless books in a year is just tragic. It perfectly captures the trap. We should use the AI that is available for help, absolutely, but the second you let it replace your actual voice, the magic dies.
And the em dash paranoia is so relatable! We are all driving ourselves crazy looking for signals that sometimes just mean a writer likes punctuation.

I am really glad you typed this out yourself. It is a solid reminder not to let that muscle atrophy.

wong2 kim

This hits home. I use AI daily to build apps, and the irony is that my most "human" writing gets flagged while AI-polished corporate copy passes every detector. The obsession with detecting AI says more about our insecurity than about quality.

NorthernDev

That is the funniest and most depressing irony of this whole mess. Corporate jargon has always been completely soulless, so naturally, the detectors think it is perfectly human. They are literally trained to reward the most boring, predictable text imaginable.

You nailed the insecurity part. We are just terrified of being tricked. Have you actually had to dumb down your own raw writing just to avoid triggering those false positives?

Benjamin Nguyen • Edited

That is so true! You make a good point.

NorthernDev

Haha, that would be awesome 😂

Benjamin Nguyen

Yeah! I think if we program a robot (AI) to do our chores around the house, we'll spend more time with other people. Hehehe :)

Jonah Blessy

I resonate with the conclusion. At the end of the day, we are accountable for whatever we put our name on, regardless of whether it is human work or AI.

NorthernDev

Exactly. If your name is on it, you own it. It blows my mind how many people think they can just blame the tool when they publish something stupid. The AI didn't hit the publish button, you did.

Have you seen a lot of people trying to use the "AI did it" excuse lately?

Harsh

Mostly agree, but 'own the outcome' is where it gets tricky. A lot of people are publishing AI output they genuinely don't understand. The tool isn't the problem; the accountability gap is.

NorthernDev

People are just copy-pasting code or essays they couldn't explain to save their lives. If you can't defend the text or the ideas when someone questions them, you have zero business publishing it. Owning the outcome has to start with actually understanding it.
Appreciate you calling that out.

jidonglab

The key insight here is the token probability distribution: models expose uncertainty in how likely each next token is, yet humans often misread confidence for authenticity. We open-sourced our approach at github.com/jidonglab/contextzip
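To unpack the jargon a bit: a language model's raw output is a vector of scores (logits), and a softmax turns them into a probability distribution over candidate next tokens. A toy sketch with made-up numbers (not taken from any real model, and not from contextzip):

```python
import math

# Toy logits for four candidate next tokens
# (made-up numbers, not from any real model).
logits = {"the": 3.2, "a": 2.9, "banana": 0.1, "quantum": -1.5}

# Softmax: subtract the max for numerical stability,
# exponentiate, then normalize so the values sum to 1.
m = max(logits.values())
exps = {tok: math.exp(v - m) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# The spread between top candidates is the model's "uncertainty":
# a near-tie between "the" and "a" means low confidence either way.
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.3f}")
```

A near-tie between the top candidates means the model is genuinely uncertain, even though the sampled text still reads fluently, which is exactly the confidence-versus-authenticity gap the comment describes.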

Warren Cain

At this point people are mostly just doing vibe-based accusations and pretending it is analysis. Half the time they call something AI because it is too clean, too organized, or too polished, which is a pretty ridiculous standard if you think about it.

The part I really agree with is that people are terrible at detecting this stuff consistently. They’ll call real writing fake, then turn around and praise machine writing when it happens to land emotionally. So the whole culture around “spotting AI” already feels broken to me.

Also yeah, I think the better standard is ownership, not purity. I do not really care whether someone used AI, an editor, a ghostwriter, or a friend helping clean up a draft. What matters is whether they understand what they are saying, whether it is true, and whether they are willing to stand behind it. That feels way more honest than this weird fake detective game people are playing online.