alt.hn

2/13/2026 at 12:12:59 PM

Ask HN: AI Depression

by pavello

2/13/2026 at 6:05:39 PM

I also struggle. I do not personally find LLMs useful for my job, but it does concern me that the hype might become so strong that I'm unable to find a job where I'm allowed to simply not use the tool if it isn't providing me value. And if it comes to that I'll use the stupid thing because I'd rather do that than go homeless, but I will certainly hate my job if it becomes just "supervise the LLMs".

In order to combat that worry, I'm trying to focus on gratitude that I have had a career where I got paid for doing fun things (programming), rather than worrying about what if my career stops being fun. Many people never get that chance, after all, and live their entire lives working menial jobs just to put food on the table. I'm also trying to make my career less important to my own mental happiness by focusing on other things that are good and will not go away even if my career stops being fun (for me, that means my marriage and my faith).

It's not easy to do, at all. And it also doesn't help with the worry that I might even lose my job entirely because the industry abandons sense and fires people in favor of LLMs. But it does help a little, and I'm hoping that with practice the mental discipline will get easier and I can let go of the anxiety some.

by bigstrat2003

2/13/2026 at 7:01:01 PM

I think the hype is already so strong, and coming from so high up, that it's not wise to say anything negative about it out loud. It's just like diversity / DEI / net zero etc. were five years ago: everyone has to comply and play along with it, at least performatively.

Even though I don't personally find AI terribly useful for my own actual work, I keep people happy by talking about it where possible. If someone has a suggestion involving something fairly repetitive, I tell them "that sounds like a great use case for an AI agent", even if it is, in fact, a great use case for a shell script.
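
For a sense of how low the bar usually is: the "fairly repetitive" suggestion tends to be something like tallying errors across a pile of service logs. A sketch of the shell-script version (the log path is made up):

    #!/bin/sh
    # Made-up example: count ERROR lines per service log --
    # the entire "agent use case" in a few lines of shell.
    for f in /var/log/app/*.log; do
        printf '%s: ' "$(basename "$f" .log)"
        grep -c 'ERROR' "$f"
    done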

If someone has an inane question, I tell them "Have you tried asking copilot about this?" - it's the new "Let me google that for you...".

If someone has a request to add a new feature that seems useless and counterproductive, I tell them "That's a great idea! How about you do that, using AI?", instead of getting into a debate about why it won't work in practice.

I'm finding that mentioning AI in these contexts keeps people happy enough, without extensive personal use, for now at least.

by tacostakohashi

2/13/2026 at 7:18:54 PM

I actually opened HN to ask something similar. Thank you for putting this out there. Sadly, people who haven't delivered anything complex genuinely believe this is the end of the programmer role. I'm 43 and went through depression about my place in the industry. It was scary.

Then I decided to build something complex using Claude, and within a week I realized that whoever claims "90% of code is written by LLMs" is not being totally honest; the parts left out of such posts tell a different story: programming is going to get harder, not easier.

The project started great but turned into a large ball of spaghetti. It became really hard to extend: every feature I wanted to add required Claude to rearrange large portions of the codebase. Debugging and reading logs are also very expensive tasks. If you don't have a mental model of the codebase, you have to rely on the LLM to read logs and figure things out for you.

Overall, my impression is that we need to use this as just another tool and get proficient at it, instead of thinking it will do everything.

Also, the recent Anthropic partnership with Accenture suggests it won't do everything [0]. If AI could do it all, why train humans?

So please don't leave the industry. I think it will get worse before it gets better. We need to stick around and plan for the duration of this hype period.

[0] https://www.anthropic.com/news/anthropic-accenture-partnersh...

by bakibab

2/13/2026 at 8:10:20 PM

I've been feeling it pretty strongly too. The way I see it, there are very strong incentives to make this work out. Whether they actually manage to automate a big enough portion of knowledge work is still an open question. A few years ago I would have easily said the chance was 0%. Now I'm not so sure; it feels like a non-zero probability to me.

Either way, planning for it to happen is better than being taken by surprise if it does.

And to end the week on a good note:

AI Fails at 96% of Jobs (New Study)

https://www.youtube.com/watch?v=z3kaLM8Oj4o

by shinryuu

2/13/2026 at 5:33:57 PM

If you find yourself spooked by LinkedIn "gurus", I recommend Reddit for some comic relief. https://www.reddit.com/r/LinkedInLunatics/top/ is full of goodies. Here is my personal favorite:

https://www.reddit.com/r/recruitinghell/comments/j1vm8j/gold...

You said it yourself, these are overwhelmingly people who've never built or maintained anything complex in their lives. If you're going to listen to what people on the Internet say, why not seek out people who can earn your respect?

by impendia

2/13/2026 at 1:15:05 PM

Hi, I’ve been working with embedded Linux for 18 years.

I’ve been actively trying to apply AI to our field, but the friction is real. We require determinism, whereas AI fundamentally operates on probability.

The issue is the Pareto Principle in overdrive: AI gets you to 90% instantly, but in our environment, anything less than 100% is often a failure. Bridging that final 10% reliability gap is the real challenge.

Still, I view total replacement as inevitable. We are currently in a transition period where our job is to rigorously experiment and figure out how to safely cross that gap.

Good luck!

by 0xecro1

2/13/2026 at 1:25:14 PM

And by not doing the 90% yourself you lack the understanding you need to be able to tackle the remaining 10%.

by jacquesm

2/13/2026 at 1:52:41 PM

Absolutely agree. I do vibe-code, but I still review every line of that 90% — I don't move forward until I understand it and trust the quality. Right now, that human verification step is non-negotiable.

That said, I have a hunch we're heading toward a world where we stop reading AI-generated code the same way we stopped reading assembly. Not today, not tomorrow, but the direction feels clear.

Until then — yes, we need to understand every bit of what the AI writes.

by 0xecro1

2/13/2026 at 2:46:11 PM

I disagree. Compilers were deterministic. Complicated, but deterministic. You could be sure that it was going to emit something sensible.

AI? Not so much. Not deterministic. Sure, the probability of something bizarre may go down. But with AI, as currently constituted, you will always need to review what it does.

by AnimalMuppet

2/13/2026 at 3:05:13 PM

I think the comparison is slightly off. The compiler was never the author — it was the verifier.

The real comparison is:

1. Human writes code (non-deterministic, buggy) → compiler catches errors
2. AI writes code (non-deterministic, buggy) → compiler catches errors

In both cases, the author is non-deterministic. We never trusted human-written code without review and compilation either (plus lots of tests). The question isn't whether AI output needs verification — of course it does. The question is whether AI + human review produces better results faster than a human alone.

by 0xecro1

2/13/2026 at 4:50:41 PM

The compiler isn't so much a verifier as a translator. Verification wasn't the initial focus, but over time it became more and more important.

by jacquesm

2/13/2026 at 3:32:54 PM

The compiler catches certain classes of errors. And AI can spit out unmaintainable code or code with incorrect logic or giant security holes a lot faster than humans can review it.

by apothegm

2/13/2026 at 4:44:20 PM

It's easy to get this way with enough scrolling; try to focus on the things around you in real life. If you aren't reading LinkedIn or HN, how much do you actually hear about AI in day-to-day life? If someone at work directly asks you to do something using AI, you might make some effort to do it. But otherwise, let the news and hype cycle play out. You don't need to anticipate or keep abreast of where people think things will be in ten years... they are almost certainly wrong. Think of LinkedIn and HN as entertainment at best. Work on personal coding projects without AI, build relationships with non-tech people, go outside.

by lsy

2/13/2026 at 4:30:46 PM

What I find really disturbing is the impossibility of keeping up with the pace of development: which agent system to use, which model, which models are right for which tasks?

I do not fear that some agents will pollute my repos with their PRs. On the contrary, I suppose we will end up at a point where, for each question, task, or problem, one will find many (AI-coded) solutions, making it impossible to choose a right, solid, reliable one. I recently thought about having a database of tools per task so that a comparison would be possible. But the maintenance costs of something like this are enormous once you include benchmarks, comparisons, etc. across the different quality criteria.

by dkrajzew

2/13/2026 at 3:42:50 PM

I feel the exact same way as you do (same age as well), and I know a lot of my teammates do. At this stage, I have no idea what our profession will be in a few years. Maybe the hype will pass and we'll be back to normal, or the profession will disappear (or become much less fun). Who knows... in the meantime, I'm trying to keep my job and save money while I can.

by yodsanklai

2/13/2026 at 2:51:05 PM

Hmm, well I am not philosophically opposed to AI.

But, I don't like hype or having things forced down my throat, and there's a lot of that going on.

Psychologically, the part that seems depressing is that everything just seems totally disposable now. It's hard to even see the point of learning the latest and greatest AI tools/models, because they'll be replaced in about 3 months, and it's hard to see the point in trying to build anything, with or without AI, given the deluge of AI slop it will be up against.

I like the idea of spending a bit of time to learn something, like how to use a shell, how to ride a bike, how to drive a car, how to program in C or C++, and then using the skill for years or decades, if not a lifetime. AI seems to have taken that away: now everything is brand new and disposable, and everyone is an amateur.

by tacostakohashi

2/13/2026 at 2:58:48 PM

In a way, this seems similar to the "web framework of the month" that everyone wrestled with for a while. There's a new tool! You're obsolete if you don't switch now!!!!!

Meanwhile, some of us were over here, building embedded systems with C and C++. The big switch was from Green Hills or VxWorks to embedded Linux. The time scale was more "OS of the decade". There's hype and fads, and there's stuff that lasts.

by AnimalMuppet

2/13/2026 at 3:06:27 PM

Yes, exactly, it's exactly like the peak JS-framework-of-the-month era of the early 2010s, or the coin of the month in the late 2010s... I guess that's just part of the fad dynamic: you get microfads within the macrofad...

I'm not opposed to new things, but I guess I want incremental improvement on the old thing, and more on the timescale of years than weeks.

by tacostakohashi

2/13/2026 at 12:34:52 PM

It’s becoming pretty annoying, and I am noticing that I read HN less.

I do think that, like all trendy hypes, it will go away after a while. And the people who are focused on the next thing now are going to be a step ahead once the AI hype gets old.

For startups specifically, I think the next big thing will be in-person social media. The AI slop will get old after a while, and someone will figure out how to make Meetup.com actually work.

by keiferski

2/13/2026 at 12:23:25 PM

I wouldn't blame you. But: hypes come and hypes go, and this one will go too. It will destroy the funding environment for a while when it does; the same thing happened in previous cycles.

In five years' time AI will be just another tool in the toolbox and nobody will remember the names of the hypers. I agree it is depressing: there are quite a few people banging this drum, and because of that it becomes harder to be heard. They, like AI, have the advantage of quantity. There is one character right here on HN who spews out one low-effort AI-generated garbage article after another, and it all gets upvoted as if it were profound and important. It isn't. All it shows is how incredibly bland all this stuff is.

Meanwhile, here I am, solving a real problem. I use AI as well but mostly to serve as a teacher and I check each and every factoid that isn't immediately obviously true. And the degree to which that turns up hallucinations is proof enough to me that our jobs are safe, for now.

A good niche is cleaning up after failed AI projects ;)

best of luck there!

Jacques

by jacquesm

2/13/2026 at 1:29:53 PM

TLDR: you don't have to leave the industry, just focus on yourself and not your feelings

> The people and corporations and all those LinkedIn gurus, podcasters

You can just mute and ignore them

> I'm now scared to publish open source

If you get many PRs, it's a good problem to have, far better than publishing something nobody reads.

> mediocre C compilers, Moltbook

It's all experiments. You could say the same thing about cleantech 15 years ago, when companies talked about solar panels and electric cars with swappable batteries all the time. You don't have to keep track of everything people are experimenting with.

by bkjlblh

2/13/2026 at 2:52:12 PM

> But the gap between the marketing and reality for many of us is hard to describe.

Trust your eyes. You can see what it actually does; therefore the marketing is lying to you.

But it sounds like your problem isn't knowing what to believe. Your problem is that you know the truth, and you're tired of having to wallow in the lies all day. I don't blame you; lies are bad for your mental health. Well, there's a solution: Turn off the internet. You can, you know. Or at least you can turn off the feed into your brain. Stop looking at posts about AI, even on HN. If you can't dodge them well enough, just turn off social media. Go outside, if the temperature is decent. If it isn't, go to a gym or an art museum or something. Just stop feeding this set of lies into your brain.

by AnimalMuppet

2/13/2026 at 12:38:19 PM

> Where are we going with this?

Recommended reading: [0]

What you are seeing is that anyone can build anything with just a computer and an AI agent, and the AI boosters are selling dreams, courses, and fantasies without the risks or downsides that come with them. Most of these vibe-coded projects just have very bad architecture, and experienced humans still have to review and clean it all up.

Meanwhile, "AGI" is being promised by those big labs, but their actions says otherwise as what it really means is an IPO. After that, we will see a crash come afterwards and the hype brigade and all the vibe coders will be raced to zero by local models and will move on after the grift has concluded.

You now need to know what to build and what should exist out of infinite possibilities, since you can assume that someone can build the same thing in 10 minutes with AI. Where 90% of startups used to fail, with AI it is now 98% of them.

We know how this all ends. Do not fall for the hype.

[0] https://blog.oak.ninja/shower-thoughts/2026/02/12/business-i...

by rvz