4/7/2026 at 3:46:12 PM
As much as people on Hacker News complain about subscription models for productivity and creativity suites, the open-armed embrace of subscription development tools (services, really) which seek to offload the very act itself makes me wonder how and why so many people are eager to dive right in. I get it. LLMs are cool technology.

Is this a symptom of the same phenomenon behind the deluge of disposable JavaScript frameworks of just ten years ago? Is it peer pressure, fear of missing out? At its root, I suspect so; of course, I would imagine it's rare for the C-suite to have ever mandated the usage of a specific language or framework, and LLMs represent an unprecedented lever of power, offering an even bigger shot at first-mover advantage from a business perspective. (Yes, I am aware of how "good enough" local models have become for many.)
I don't really have anything useful or actionable to say here regarding this dialling back of capability to deal with capacity issues. Are there any indications of shops or individual contributors with contingency plans on the table for dialling back LLM usage in kind to mitigate these unknowns? I know the calculus is such that potential (and frequently realised) gains heavily outweigh the risks of going all in, but, in the grander scheme of time and circumstance, long-term commitments are starting to look more apparently risky. I am purposefully trying to avoid "begging the question" here; if this were some other tool or service instead of LLMs, reactions to these events would have been far more pragmatic, with less reluctance to invest time in in-house solutions when dealing with flaky vendors.
by xantronix
4/7/2026 at 3:54:26 PM
HN is a big community that has always had a mix of people who value newness as a feature vs. those who prioritize simplicity and reliability. Unless you're recognizing the exact same names taking these contradictory opinions, it's probably different groups of people for the most part.

It seems like every LLM thread for the past couple years is full of posts saying that the latest hot AI tool/approach has made them unbelievably more productive, followed by others saying they found that same thing underwhelming.
by rurp
4/7/2026 at 4:05:42 PM
> I get it. LLMs are cool technology.

I don't think many of you have legitimately tried Claude Code, or maybe you're holding it wrong.
I'm getting 10x the work done. I'm operating at all layers of the stack with a speed and rapidity I've never had before.
And before anyone accuses me of being some "vibe coder", I've built five nines active-active money rails that move billions of dollars a day at 50kqps+, amongst lots of other hard hitting platform engineering work. Serious senior engineering for over a decade.
This isn't just a "cool technology". We've exited the punch card phase. And that is hard or impossible to come back from.
If you're not seeing these same successes, I legitimately think you're using it wrong.
I honestly don't like subscription services, hyperscaler concentration of power, or the fact I can't run Opus locally. But it doesn't matter - the tool exists in the shape it does, and I have to consume it in the way that it's presented. I hope for a different offering that is more democratic and open, but right now the market hasn't provided that.
It's as if you got access to fiber or broadband and were asked to go back to ISDN/dial up.
by echelon
4/7/2026 at 4:25:48 PM
Man, I really thought this was satire. It’s phenomenal that you can gain 10x benefits at all layers of the stack; you must have a very small development team or work alone.

I just don’t see how I could export 10x the work and have it properly validated by peers at this point in time. I may be able to generate code 10-20x faster, but there are nuances that only a human can reason about in my particular sector.
by nerptastic
4/7/2026 at 5:21:19 PM
Senior engineer with 25 years of experience here. I wish I spent enough time actually coding that 10x-ing my coding productivity would matter much to my job. Most of my day is spent wrangling requirements, looking after junior devs, stamping out confusion brush fires before they get out of control, and generally just trying to steer the app away from a trainwreck down the line.

When I do code, it's almost always something novel that I don't know how I'm going to implement until I code a few pieces and see how they fit together. If it's a fairly routine feature based on an existing pattern, I assign it to one of the other devs.
by suzzer99
4/7/2026 at 9:58:04 PM
This is basically the thing I keep coming back to with the agentic tools. It is the wrangling requirements, stamping out confusion, and steering away from a trainwreck down the line that are the actual challenging parts of the job, and we can't automate those yet. Once you do actually know the code change you want to make, though, it is pretty nice to change it 10x faster than before.
by shuntress
4/7/2026 at 4:34:14 PM
I noticed that too. At first. It vaguely reminded me of the famous Navy SEAL copypasta.
by hsuduebc2
4/7/2026 at 5:36:33 PM
What the fuck did you just fucking say about me, you little bitch?by seanw444
4/7/2026 at 10:36:40 PM
I see some confusion from your downvotes. Sean cited the start of the "Navy SEAL copypasta".

The Navy SEAL copypasta is an internet meme consisting of an absurdly over-the-top tough-guy threat, posted to mock someone who is acting somehow aggressive, insecure, or self-important online, usually after a minor argument.
Here's its full text:
>What the fuck did you just fucking say about me, you little bitch? I'll have you know I graduated top of my class in the Navy Seals, and I've been involved in numerous secret raids on Al-Quaeda, and I have over 300 confirmed kills. I am trained in gorilla warfare and I'm the top sniper in the entire US armed forces. You are nothing to me but just another target. I will wipe you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of spies across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You're fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that's just with my bare hands. Not only am I extensively trained in unarmed combat, but I have access to the entire arsenal of the United States Marine Corps and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn't, you didn't, and now you're paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You're fucking dead, kiddo.
by hsuduebc2
4/7/2026 at 5:35:59 PM
> I just don’t see how I could export 10x the work and have it properly validated by peers at this point in time.

In my experience, the people who 10X their output with Claude Code fit one of two categories:
1. They're not really taking the time to understand the code they're submitting. They might do a skim over the output and see that it looks reasonable and passes tests, but they aren't taking time to understand the code as if they were pair programming. Only when it breaks and the LLM can't patch it up quickly do they go in and fully understand the code.
2. They moved very slowly before Claude Code. I've had some coworkers who would take 2-3 days to get a simple PR out because, to be frank, their work days weren't full of a lot of work. Every time they'd run into a question they'd stop and then bumble around for a few hours until they could talk to the ticket creator about it. They'd get tired of working on a task by 2 PM and then save the rest of the work for tomorrow. They'd get an idea and decide to rewrite the PR the next day, and on and on with distractions. When they start using Claude Code, the LLM doesn't have the same holdups, so every place where they were getting stuck or tired before is replaced by an LLM powering through to some solution. Their cognitive load is reduced, so they're no longer freezing up during the day. They aren't really becoming 10X engineers like they think, but really just catching up to a normal pace.
by Aurornis
4/7/2026 at 5:53:26 PM
I don't know if we're all 10x'ing, but our entire org is shipping PRs using an in-house framework akin to Stripe's Minions [1], and many of those PRs are generated from Slack. We definitely have work to do on the latter part of the SDLC to have more confidence in these changes, but we can still rely on the existing observability layer to make sure things are working as expected.

Another commenter mentioned that Docker, git, etc. were all tools that greatly enhanced productivity and coding agents are just another tool that does that. I would agree, but argue that it's more impactful than all of those tools combined.
[1] https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-...
by rockostrich
4/7/2026 at 10:01:28 PM
Regarding point #2: while it is of course entirely possible that they are slackers, it is more likely that they lack the knowledge you are leveraging in order to declare that the PRs are "simple".
by shuntress
4/7/2026 at 8:48:32 PM
It's simpler, actually: the author is trying to make a business developing an AI product.
by deely3
4/9/2026 at 6:23:58 AM
Yes, and he's shilling in almost every thread. This is tiring.
by AlexeyBelov
4/7/2026 at 5:29:46 PM
It is satire! They have been doing this bit for a while and people keep falling for it lol
by nothinkjustai
4/7/2026 at 5:22:54 PM
I use Claude Code a lot, but I don't understand these "I'm doing 10X the work" comments.

I spend a lot of time reviewing any code that comes out of Claude Code. Even using Opus 4.6 with max effort, there is almost always something that needs to be changed, often dramatically.
I can see how people go down the path of thinking "Wow, this code compiles and passes my tests! Ship it!" and start handing trust over to Opus, but I've already seen what this turns into 6 months down the road: Projects get mired down in so much complexity and LLM spaghetti that the codebase becomes fragile. Everyone is sidetracked restructuring messy code from the past, then fighting bugs that appear in the change.
I can believe some of the more recent studies showing LLMs can accelerate work by circa 20% (1.2X) because that's on the same order of magnitude that I and others are seeing with careful use.
When someone comes out and claims 10X more output, I simply cannot believe they're doing careful engineering work instead of just shipping the output after a cursory glance.
by Aurornis
4/7/2026 at 6:33:14 PM
I find that it's relative to the amount of planning time you spend... I feel like I've gotten around 5x the output using Claude Code w/ Opus over what I would get done myself... That said, I'm probably spending about 3x as much time planning as I would when just straight coding for/by myself, and that's generally the difference.

I can use the agent to scaffold a lot of test/demo frameworks around the pieces I'm working on pretty cleanly and have the agent fill in. I still spend a lot of time validating the tests and the code being completed, though.
The errors I tend to get from the agent are roughly similar to what I might see from a developer/team that works remotely... you still need to verify. The difference is the turnaround seems to be minutes instead of days. You're also able to observe rather than simply review... When I see a bad path, I can usually abort/cancel, revert back to the last commit, and try again with more planning.
by tracker1
4/7/2026 at 9:53:22 PM
That's part of why I don't get AI for directly writing code at all. If I am going to be reviewing anything that comes out of it (and I will), then I might as well just write it myself. It's easier and faster, although it does also make it easier to fall victim to blind spots.
by Pay08
4/7/2026 at 11:24:42 PM
I think there’s a subset of engineers who were never all that good (we have no idea how many there are) who benefit most from LLMs.
We should also keep in mind there’s always been an insane shortage of high-quality devs. So I’m not surprised by what we’re seeing.
But this notion that an elite dev is seeing a 10x productivity gain is absolute nonsense. LLMs hold experts back in most contexts.
by eijkene
4/7/2026 at 4:29:48 PM
You must be using it wrong, because I'm getting 100x the work done and am currently at 1.5 million MRR with this SaaS I vibe coded over the weekend.

After I solved entrepreneurship, I decided to retire, and I now spend my days reading HN, posting on topics about AI.
by dandellion
4/7/2026 at 4:40:00 PM
You're still manually posting? All of my HN posting, trolling, shitposting and spamming is taken care of by a fleet of bots I vibecoded in the last 5 minutes.
by darth_aardvark
4/7/2026 at 5:28:16 PM
You jest, but I know people who've done this.

"I gotta be present." Me: reenacting the Malcolm Reynolds "too many responses" meme.
by slowmovintarget
4/7/2026 at 5:01:10 PM
Mind if I use this as a copypasta in the future? This checks off every point people bring up on LinkedIn and elsewhere.

In all seriousness though, writing code, or even sitting down and properly architecting things, has never been a bottleneck for me. It has either been artificial deadlines preventing me from writing proper unit tests, or the requirement for code review from people on my team who don't even work on the same codebase as I do on a daily basis. I have often stated, and stand by, the assertion that I develop at the speed of my own understanding, and I think that is a good virtue to carry forth, one that will stand the test of time and bring about the best organisational outcomes. It's just a matter of finding the right place that values this approach.
Edit for context: My team is an ops team that needed a couple developers; I was picked to implement some internal tooling. The deadlines I was given for the initial development are tied directly to my performance evaluation. My boss has only ever been a manager for almost two years. He has only ever had development headcount for less than a year. He has never been on a development team himself. The man does not take breaks and micromanages at every opportunity he gets. He is paranoid for his job, thinking he is going to be imminently replaced by our (cheaper) EU counterparts. His management style and verbal admonitions reflect this; he frequently projects these insecurities onto others, using unnecessarily accusatory speech. I am not the only developer on my team who has had such interactions with him. I have screenshots of conversations with him that I felt necessary to present to a therapist. This degree of time pressure is entirely unprecedented in my 20 year career. Yes, this is a dysfunctional environment.
by xantronix
4/7/2026 at 5:18:25 PM
> artificial deadlines preventing me from writing proper unit tests, or the requirement for code review from people on my team who don't even work on the same codebase as I do on a daily basis

I have never experienced this, and it sounds remarkably dysfunctional to me.
by mikebenfield
4/7/2026 at 5:55:39 PM
Believe me, it is very dysfunctional. As I've mentioned to your first replier, my boss has only had developers for less than a year. This is an operations team I was assigned to in order to provide them some much-needed tooling. The pressure my boss has perceived from above has led to my own significant burnout. The guy does not take days off and has always been logged into Slack at the odd hours I would need to pull up some HR form or another. I am currently off work for several months dealing with the fallout from all that.

I've tried everything I can to cope and am not sure I will be willing to return to that team once I am past my medical leave.
by xantronix
4/7/2026 at 5:40:17 PM
[flagged]
by lijok
4/7/2026 at 5:50:26 PM
Beg pardon? I've been doing this for 20 years. My boss has been a boss for two years and has only had developer headcount for less than a year. This degree of pressure is unprecedented in my career.by xantronix
4/7/2026 at 7:21:24 PM
[flagged]
by lijok
4/7/2026 at 7:37:28 PM
Please explain, I'd like to have a productive dialogue about this. I assume you are referring to my boss?by xantronix
4/7/2026 at 5:04:56 PM
It’d be cool to see your process in depth. You should record some of your sessions :)

I mostly believe you. I have seen hints of what you are talking about.
But oftentimes I feel like I’m on the right track when I’m actually just spinning my wheels, and the AI is just happily going along with it.
Or I’m getting too deep on something and I’m caught up in the loop, becoming ungrounded from the reality of the code and the specific problem.
If I notice that and am not too tired, I can reel it back in and re-ground things. Take a step back and make sure we are on a reasonable path.
But I’m realizing it can be surprisingly difficult to catch that loop early sometimes. At least for me.
I’ve also done some pretty awesome shit with it that either would have never happened or taken far longer without AI — easily 5x-10x in many cases. It’s all quite fascinating.
Much to learn. This idea is forming for me that developing good “AI discipline” is incredibly important.
P.s. sometimes I also get this weird feeling of “AI exhaustion”. Where the thought of sending another prompt feels quite painful. The last week I’ve felt that a lot.
P.p.s. And then of course this doesn’t even touch on maintaining code quality over time. The “after” part when the LLM implements something. There are lots of good patterns and approaches for handling this, but it’s a distinct phase of the process with lots of complexities and nuances. And it’s oh-so-tempting to skip or postpone. More so if the AI output is larger — exactly when you need it most.
by dwaltrip
4/7/2026 at 4:22:41 PM
I mean, at this point can we just conclude that there is a group of engineers who claim to have incredible success with it and a group that claims it is unreliable and cannot be trusted to do complex tasks?

I struggle to believe that a ton of seemingly intelligent software engineers are too dumb to figure out how to use Claude code to get reliable results; it seems much more likely to me that it can do well at isolated tasks or new projects but fails when pointed at large, complex code bases because it just... is a token predictor lol.
But yeah spinning up a green fields project in an extensively solved area (ledgers) is going to be something an AI shines at.
It isn't like we don't use this stuff also; I ask Cursor to do things 20x a day, and it does something I don't like 50% of the time. It struggles even with things like pasting an error message. How do I reconcile my actual daily experience with the hype messages I see online?
by ericmcer
4/7/2026 at 4:50:00 PM
Right, I keep seeing people talking past each other in this same way. I don't doubt folks when they say they coded up some greenfield project 10x faster with Claude; it's clearly great at many of those tasks! But then so many of them claim that their experience should translate to every developer in every scenario, to the point of saying they must be using it wrong if they aren't having the same experience.

Many software devs work in teams on large projects where LLMs have a more nuanced value. I myself mostly work on a large project inside a large organization. Spitting out lines of code is practically never a bottleneck for me. Running a suite of agents to generate a ton of code for my coworkers to review doesn't really solve a problem that I have. I still use Claude in other ways and find it useful, but I'm certainly not 10x more productive with it.
by rurp
4/7/2026 at 5:26:56 PM
> But yeah spinning up a green fields project in an extensively solved area (ledgers) is going to be something an AI shines at.

I couldn't disagree with this more. It's impressive at building demos, but asking it to build the foundation for a long-term project has been disastrous in my experience.
When you have an established project and you're asking it to color between the lines it can do that well (most of the time), but when you give it a blank canvas and a lot of autonomy it will likely end up generating crap code at a staggering pace. It becomes a constant fight against entropy where every mess you don't clean up immediately gets picked up as "the way things should be done" the next time.
Before someone asks, this is my experience with both Claude Code (Sonnet/Opus 4.6) and Codex (GPT 5.4).
by dns_snek
4/7/2026 at 4:26:30 PM
I suspect many people here have tried it, but they expected it to one-shot any prompt, and when it didn't, it confirmed what they wanted to be true and they responded with "hah, see?" and then washed their hands of it.

So it's not that they're too stupid. There are various motivations for this: clinging on to familiarity, resistance to what feels like yet another tool, anti-AI koolaid, earnestly underwhelmed but don't understand how much better it can be, reacting to what they perceive to be incessant cheerleading, etc.
It's kind of like anti-Javascript posts on HN 10+ years ago. These people weren't too stupid to understand how you could steelman Node.js, they just weren't curious enough to ask, and maybe it turned out they hadn't even used Javascript since "DHTML" was a term except to do $(".box").toggle().
I wish there were more curiosity on HN.
by hombre_fatal
4/7/2026 at 4:44:21 PM
So what do I do differently, then?

Hypothetically, you have a simple slice-out-of-bounds error because a function is getting an empty string, so it does something like `""[5]`.
Opus will add a bunch of length & nil checks to "fix" this, but the actual issue is the string should never be empty. The nil checks are just papering over a deeper issue, like you probably need a schema level check for minimum string length.
At that point do you just tell it something like "no, delete all that, the string should never be empty" and let it figure that out, or do you basically need to pseudocode "add a check for empty strings to this file on line 145", or do you just YOLO and know the issue is gone now so it is no longer your problem?
My bigger point is: how does an LLM know that this seemingly small problem is indicative of some larger failure? Let's say this string is a `user.username`, which means users can set their name to empty, which means an entire migration is probably necessary. All the AI is going to do is smoosh the error messages and kick the can.
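A sketch of the schema-level fix being described here, in Rust purely for illustration (the thread doesn't name a language, and the `Username` newtype is hypothetical): instead of papering nil/length checks over each crash site, validation at the construction boundary makes the empty string unrepresentable downstream.

```rust
// Illustrative sketch: "parse, don't validate" for the user.username case.
// The only way to obtain a Username is through new(), so code that
// indexes into it can never see an empty string.
#[derive(Debug, Clone, PartialEq)]
pub struct Username(String);

impl Username {
    pub fn new(s: &str) -> Result<Self, &'static str> {
        if s.is_empty() {
            Err("username must not be empty")
        } else {
            Ok(Username(s.to_string()))
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    // The empty string is rejected once, at the boundary...
    assert!(Username::new("").is_err());
    // ...so downstream indexing needs no defensive length checks.
    let u = Username::new("alice").unwrap();
    assert_eq!(&u.as_str()[0..1], "a");
}
```

Existing rows with empty names would of course still need the migration the parent mentions; the newtype only stops new ones from entering.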
by ericmcer
4/7/2026 at 5:53:41 PM
1. I'm working in Rust, so it's a very safe and low-defect language. I suspect that has a tremendous amount to do with my successes. "nulls" (Option<T>) and "errors" (Result<T,E>) must be handled, and the AST encodes a tremendous amount about the state, flow, and how to deal with things. I do not feel as comfortable with Claude Code's TypeScript and React outputs - they do work, but it can be much more imprecise. And I only trust it with greenfield Python; editing existing Python code has been sloppy. The Rust experience is downright magical.

2. I architecturally describe every change I want made. I don't leave it up to the LLM to guess. My prompts might be overkill, but they result in 70-80ish% correctness in one shot. (I haven't measured this, and I'm actually curious.) I'll paste in file paths, method names, and struct definitions and ask Claude for concrete changes. I'll expand "plumb foo field through the query and API layers" into as much detail as necessary. My prompts can be several paragraphs in length.
3. I don't attempt an entire change set or PR with a single prompt. I work iteratively as I would naturally work, just at a higher level and with greater and broader scope. You get a sense of what granularity and scope Claude can be effective at after a while.
You can't one shot stuff. You have to work iteratively. A single PR might be multiple round trips of incremental change. It's like being a "film director" or "pair programmer" writing code. I have exacting specifications and directions.
The power is in how fast these changes can be made and how closely they map to your expectations. And also in how little it drains your energy and focus.
This also gives me a chance to code review at every change, which means by the time I review the final PR, I've read the change set multiple times.
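A tiny illustration of the Option/Result point in item 1 (my own example, not the commenter's code): the compiler refuses to let generated code use a fallible value until the error path is written out.

```rust
// A fallible operation returns Result rather than a bare value.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // `let p: u16 = parse_port("8080");` would not compile; the Err
    // arm must be handled (or explicitly propagated with `?`), so
    // sloppy LLM output that skips error handling is rejected outright.
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```

This is the mechanism behind the claim that the type system catches a lot of agent sloppiness before a human ever reviews it.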
by echelon
4/7/2026 at 6:44:15 PM
I hope you're not 100% serious.

Otherwise you should switch to Haskell, since it makes logic errors and bugs mathematically impossible.
by Chu4eeno
4/7/2026 at 5:10:25 PM
I have encountered the exact same kind of frustration, and no amount of prompting seems to prevent it from "randomly" happening.

`the error is on line #145, fix it with XYZ and add a check that no string should ever be blank`
It's the randomness that is frustrating, and the fact that the fix would be quicker to input manually that drives me crazy. I fear that all the "rules" I add to CLAUDE.md are wasting my available tokens, and it won't have enough room to process my request.
by UI_at_80x24
4/7/2026 at 5:21:46 PM
Yup, this is why I firmly believe true productivity, as in it actually making you faster, is limited by the speed of review.
I think Claude makes me faster, but the struggle is always centered around retaining my own context and reviewing code fully: reviewing code fully to make sure it’s correct and the way I want it, and retaining my own context to speed up reviews and not get lost.
I firmly believe people who are seeing massive gains are simply ignoring x% of the lines of code. There’s an argument to be made for that being acceptable, but it’s a risk-analysis problem currently. Not one I subscribe to.
by unshavedyak
4/7/2026 at 5:21:29 PM
Use planning+execution rather than one-shotting; it'll let you push back on stuff like this. I recommend brainstorming everything with https://github.com/obra/superpowers, at least to start with.

Then work on making sure the LLM has all the info it needs. In this example, it sounds like perhaps your hypothetical data model would need to be better typed and/or documented.
But yeah as of today it won't pick up on smells as you do, at least not without extra skills/prompting. You'll find that comforting or annoying depending on where you stand...
by julian37
4/7/2026 at 6:20:18 PM
Always start an implementation in Claude Code plan mode. It's much more comprehensive than going straight to impl. I never read their prompt for plan mode before, but it deep-dives the code, peripheral files, callsites, documentation, existing tests, etc.

You get a better solution but also a plan file that you can review. And, also important, have another agent review it. I've found that Codex is really good at reviewing plans.
I have an AGENTS.md prompt that explains that plan file review involves ranking the top findings by severity, explaining the impact, and recommending a fix to each one. And finally recommend a simpler directional pivot if one exists for the plan.
So, start the plan in Claude Code, type "Review this plan: <path>" in Codex (or another Claude Code agent), and cycle the findings back into Claude Code to refine the plan. When the plan is updated, write "Plan updated" to the reviewer agent.
You should get much better results with this approach, capable of arch-level changes rather than narrow, topical solutions.
If that's still not working sufficiently for you, maybe you could use more support, like a type system and more goals in AGENTS.md?
by hombre_fatal
4/7/2026 at 7:08:22 PM
IMO, plan mode is pretty useless. For bug fixes and small improvements, I already know where to edit (and can do it quickly with vim-fu).

For new features, I spend a bit of time thinking, and I can usually break it down into smaller tasks that are easy to code and verify. No need to wrangle with plan mode and a big markdown file.
I can usually get things one-shotted by that point if I bother with the agent.
by skydhash
4/8/2026 at 4:14:21 PM
My manager and I have been experimenting with it for some stuff, and our most recent attempt at using plan mode was a refactor to change a data structure and make some conversion code unnecessary, then delete it. The plan looked fine, but after it ran, the data structure change was incomplete, most of the conversion code was still there, and it introduced several bugs by changing lines it shouldn't have touched at all. It also removed several "why"-style comments and arbitrarily changed variable names to be less clear in code it otherwise didn't change.

This was the costliest one we had access to, chosen as an experiment; it took $20 over almost a half hour to run.
by Izkata
4/8/2026 at 8:14:34 PM
Did you do the plan review cycles like I suggested? It's a critical point.

Plan mode gives you a plan file, then you refine that, and the impl derives from it.
Also, do you know it cost $20 because you're using the Claude API? I'd definitely use a subscription for interactive/development use.
by hombre_fatal
4/8/2026 at 11:26:55 PM
We reviewed the plan manually, asked it a few questions to clarify parts, and manually tweaked other parts.

I didn't catch what it was; some web dashboard that showed the cost per prompt. We could see it going up as it ran. We were just using the plan our company provided.
by Izkata
4/7/2026 at 5:05:18 PM
Not the person you're replying to, but yes, sometimes I do tell the agent to remove the cruft. Then I back up a few messages in the context and reword my request. Instead of just saying "fix this crash", or whatever, I say "this is crashing because the string is empty, however it shouldn't be empty, figure out why it's empty". And I might have it add some tests to ensure that the code in question is not returning/passing along empty strings.
by dpkirchner
4/7/2026 at 4:28:14 PM
“I struggle to believe that a ton of seemingly intelligent software engineers are too dumb to figure out how to use Claude code to get reliable results”

“Seemingly” is doing the heavy lifting here. If you read enough comment threads on HN, it will become obvious why they aren’t getting results.
by rattlesnakedave
4/7/2026 at 5:03:44 PM
> I struggle to believe that a ton of seemingly intelligent software engineers are too dumb to figure out how to use Claude code to get reliable results.

They're not dumb, but I'm not surprised they're struggling.
A developer's mindset has to change when adding AI into the mix, and many developers either can’t or won’t do that. Developers whose commits look something like "Fixed some bugs" probably aren’t going to take the time to write a decent prompt either.
Whenever there's a technology shift, there are always people who can't or won't adapt. And let's be honest, there are folks whose agenda (consciously or not) is to keep the status quo and "prove" that AI is a bad thing.
No wonder we're seeing wildly different stories about the effectiveness of coding agents.
by alwillis
4/7/2026 at 4:38:27 PM
Here's my 100-file custom scaffolding AI prompt that I've been working on for the last four months; it can reliably one-shot most math olympiad problems and even a Rust to-do list.
by dandellion
4/7/2026 at 7:01:18 PM
Case in point
by rattlesnakedave
4/8/2026 at 1:01:44 AM
Let's come back to these comments in ten years, shall we? Should be pretty entertaining.
by echelon
4/7/2026 at 5:50:19 PM
I see two basic cases for the people who are claiming it is useless at this point.

One is that they tried AI-based coding a year or two ago, came to the (IMHO completely correct at that time) conclusion that it was nearly useless, and have not tried it since then to see that the situation has changed. To which the solution is: try it again. It changed a lot.
The other are those who have incorporated into their personal identity that they hate AI and will never use it. I have seen people do things like fire AI at a task they have good reason to believe it will fail at, and, when it does, project that out to all tasks without letting themselves consciously realize that picking a bad task on purpose stacks the deck.
To those people my solution is to encourage them to hold on to their skepticism. I try to hold on to it as well despite the incredible cognitive temptation not to. It is very useful. But at the same time... yeah, there was a step change in the past year or so. It has gotten a lot more useful...
... but a lot of that utility is in ways that don't obviate skilled senior coding skills. It likes to write scripting code without strong types. Since the last time I wrote that, I have in fact used it in a situation where there were enough strong types that it spontaneously originated some, but it still tends to write scripting code out of that context no matter what language it is working in. It is good at very straight-line solutions to code but I rarely see it suggest using databases, or event sourcing, or a message bus, or any of a lot of other things... it has a lot of Not Invented Here syndrome where it instead bashes out some minimal solution that passes the unit tests with flying colors but can't be deployed at scale. No matter how much documentation a project has it often ends up duplicating code just because the context window is only so large and it doesn't necessarily know where the duplicated code might be. There's all sorts of ways it still needs help to produce good output.
I also wonder how many people are failing to prompt it enough. Some of my prompts are basically "take this and do that and write a function to log the error", but a lot of my prompts are a screen or two of relevant context of the project, what it is we are trying to do, why the obvious solution doesn't work, here's some other code to look at, here's the relevant bugs and some Wiki documentation on the planning of the project, we should use {event sourcing/immutable trees/stored procedures/whatever}, interact with me for questions before starting anything. This is not a complete explanation of what they are doing anymore, but there's still a lot of ways in which what an LLM can really do is style transfer... it is just taking "take this and do that and write a function to log the error" and style-transforming that into source code. If you want it to do something interesting it really helps to give it enough information in the first place for the "style transfer" to get a hold of and do something with. Don't feel silly "explaining it to a computer", you're giving the function enough data to operate on.
by jerf
4/7/2026 at 8:43:36 PM
I can see huge utility with AI as a guide and helper.

But not having one leg in the code myself is not something I am comfortable with. It starts feeling like management and not development. I feel the abdication very strongly, and it makes me unable and unwilling to put a hard stamp on quality. I have seen too much hallucination and too many half-missed requirements to put that much trust in AI.
It's the same with code reviews of hard tickets. You can scroll past and just approve, but do you really understand what your colleague has built? Are you really in the driver's seat? It feels to me like YOLOing with major consequences.
I don't buy, at all, that the people claiming 20x output have any idea what they are coding. They are just pressing the YOLO button, and no one (not the engineer, not the AI, and not management) is in the driver's seat. It is a very scary time.
by sutib
4/8/2026 at 11:59:02 PM
I reread your comment and I think you might be sincere. To address this point:

> If you're not seeing these same successes, I legitimately think you're using it wrong.
I'm not sure how you could say that, considering I'm not using it at all. I don't want to, and I don't plan to. If that becomes an issue, I'm exiting this industry because I simply don't fucking care any longer. I am fine living the rest of my life and dying happy and sore being an automotive technician.
by xantronix
4/7/2026 at 4:17:01 PM
I'm still reviewing all the code that's created, asking for modifications, and basically using LLMs as a 2000 wpm typist, and seeing similar productivity gains. Especially in new frameworks! Everything is test-driven development, super clean and super fast.

The challenge now is how to plan architectures and codebases to get really big and really scale, without AI slop creating hidden tech debt.
Foundations of the code must be very solid, and the architecture from the start has to be right. But even redoing the architecture becomes so much faster now...
by epistasis
4/7/2026 at 4:50:11 PM
I'm getting 1,000x improvement building notepad applications with 6 9s. No one is faster.

Need some help selling these notepad apps, do you have a prompt for that?
by ipaddr
4/7/2026 at 7:06:13 PM
I don't want to sound like I'm trying to one-up you, but I've basically vibe coded the entire internet.

I'm surprised nobody thought of it before me, but basically the LLMs are trained on the internet and I just had it spit back out everything.
It's running in parallel so I can validate it, which of course I'm using LLMs to do.
Once it's ready I will put it on the market, but get this, my internet will be cheaper than the current internet. I'll probably just make it one cheaper, like if the current internet costs, for example, 7, I'll make my internet cost 6.
by RaftPeople
4/7/2026 at 4:13:33 PM
> and I have to consume it in the way that it's presented

I'm just curious, why do you "have to"? Don't get me wrong, I'm making the same choice myself too, realizing a bunch of global drawbacks because of my local/personal preference, but I won't claim I have to, it's a choice I'm making because I'm lazy.
by embedding-shape
4/7/2026 at 4:30:27 PM
What are the reasonable options besides a Claude Code subscription (or an equivalent from Codex or Copilot)?

I could pay API prices for the same models, but aside from paying much more for the same result, that doesn't seem helpful.
I could pay a 4-5 figure sum for hardware to run a far inferior open model
I could pay a six figure sum for hardware to run an open model that's only a couple months behind in capability (or a 4-5 figure sum to run the same model at a snail's pace)
I could pay API costs to semi-trustworthy inference provider to run one of those open models
None of those seem like great alternatives. If I want cutting-edge coding performance then a subscription is the most reasonable option
Note that this applies mostly to coding. For many other tasks local models or paid inference on open models is very reasonable. But for coding that last bit of performance matters
by wongarsu
4/7/2026 at 5:13:22 PM
I use my OAI subscription with Claude Code. I get the benefit of the Claude Code interface with the intelligence of OAI models.

by prabal97
4/7/2026 at 4:20:03 PM
My job title is "provide value".

I'm given a tool that lets me 10x "provide value".
My personal preferences and tastes literally do not matter.
by echelon
4/7/2026 at 4:24:03 PM
As a professional you have a choice in how you produce whatever it is you produce. Sure, you can go for the simplest, most expensive, and "easiest" way of doing things, or you can do other things, depending on your perspective and requirements. None of this is set in stone; some people make choices based on personal preferences, and those matter as much to them as your choices matter to you.

by embedding-shape
4/7/2026 at 6:58:35 PM
> If you're not seeing these same successes, I legitimately think you're using it wrong.

What is “using it right”? You wrote claims, but explain nothing about your process. Anything not reproducible is either luck or a lie.
by skydhash
4/7/2026 at 4:13:52 PM
> fact I can't run Opus locally

Yet
by blurbleblurble
4/7/2026 at 4:28:53 PM
> And before anyone accuses me of being some "vibe coder", I've built five nines active-active money rails that move billions of dollars a day at 50kqps+, amongst lots of other hard hitting platform engineering work. Serious senior engineering for over a decade

You sound like a pro wrestler. I'd like to know what "hard-hitting" engineering work is. Hydraulic hammers?
by britzkopf
4/7/2026 at 4:48:46 PM
I mean, five nines is legitimately difficult to accomplish for a lot of problem spaces.

It's also, like... difficult to honestly and accurately measure. And to account for whether or not you're getting lucky based on your underlying dependencies (servers, etc.) not crashing as much as advertised, or if it's actually five nines. Or whether you've run it for a month, gotten <30s of measured downtime, and declared victory, vs. run it for three years with copious software updates.
I always assume most people claiming five nines are just not measuring it correctly, or have not exercised the full set of things that will go wrong over a long enough period of time (dc failures, network partitions, config errors, bad network switches that drop only UDP traffic on certain ports, erroneous ACL changes, bad software updates, etc etc)
Maybe they did it all correct though, in which case, yea, seems hard hitting to me.
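For context on why these claims are hard to verify: the downtime budget implied by each extra nine shrinks by a factor of ten, so at five nines you are allowed only about five minutes of downtime per year. A quick back-of-the-envelope calculation:

```python
# Downtime budgets implied by "N nines" of availability.
# At 99.999% (five nines), the yearly budget is roughly 5.26 minutes.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960


def downtime_budget_minutes(nines: int) -> float:
    """Allowed downtime per year for an availability of N nines."""
    unavailability = 10 ** (-nines)  # e.g. 5 nines -> 1e-5
    return unavailability * MINUTES_PER_YEAR


for n in range(2, 6):
    print(f"{n} nines: {downtime_budget_minutes(n):8.2f} min/year")
```

A single bad switch or botched ACL change can blow the whole year's five-nines budget, which is why long measurement windows matter.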
by dmoy
4/7/2026 at 8:46:58 PM
5 nines is at best a temporary achievement, given enough time.

by sutib
4/7/2026 at 5:16:35 PM
I read this as satire. I still think it is.

by surgical_fire
4/7/2026 at 6:39:58 PM
> the open arms embrace of subscription development tools (services, really) which seek to offload the very act itself makes me wonder how and why so many people are eager to dive right in

Here's a reason not in your list.
Short version: A kind of peer pressure, but from above. In some circles I'm told a developer must have AI skills on their resume now, and those probably need to be with well known subscription services, or they substantially reduce their employment prospects.
Multiple people I know who are employers have recently, without prompting, told me they no longer hire developers who don't use AI in their workflow.
One of them told me all the employers they know think "seniors" fall into two camps, those who are embracing AI and therefore nimble and adaptive, and those who are avoiding it and therefore too backward-looking, stuck-in-their-ways to be a good hire for the future. So if they don't see signs of AI usage on a senior dev's resume now, that's an automatic discard. For devs I know laid off from an R&D company where AI was not permitted for development (for IP/confidentiality reasons), that's unfair as they were certainly not backward-looking people, but the market is not fair.
Another "business leader" employer I met recently told me his devs are divided into those who are embracing AI and those who aren't, said he finds software feature development "so slow!", and said if it wasn't for employment law he'd fire all his devs who aren't choosing to use AI. I assume he was joking, but it was interesting to hear it said out loud without prompting.
I've been to several business leadership type meetups in recent months, and it seems to be simply assumed that everyone is using AI for almost everything worth talking about. I don't think they really are, so it's interesting to watch that narrative playing out.
by jlokier
4/7/2026 at 4:11:07 PM
Apart from local AI, a serious choice is an aggregated API such as new-api [0]. An API provider that aggregates thousands of accounts has much better stability than a single account. It's also cheaper than the official API because of how the subscription model works; see e.g. the analysis [1].

by woctordho
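For what it's worth, gateways like the one described above typically expose an OpenAI-compatible `/v1/chat/completions` endpoint, so switching mostly means changing a base URL. A minimal sketch of the request shape (the gateway URL and model name are hypothetical placeholders, not real values):

```python
import json

BASE_URL = "https://aggregator.example.com/v1"  # placeholder gateway


def chat_request(model: str, prompt: str) -> dict:
    """Build a standard chat-completions payload; this shape is what
    OpenAI-compatible gateways accept regardless of backing provider."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = chat_request("claude-sonnet", "Summarize this diff.")
print(json.dumps(payload))
# A client would POST this to f"{BASE_URL}/chat/completions" with an
# Authorization: Bearer <key> header issued by the aggregator.
```

The provider-agnostic payload is exactly why aggregators can pool many upstream accounts behind one endpoint.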
4/7/2026 at 4:15:31 PM
> An API provider aggregated thousands of accounts has much better stability than a single account

Isn't this almost certainly against ToS, at least if you're using "plans" (as opposed to paying per-token)?
by gruez
4/7/2026 at 4:17:37 PM
You don't even need to be a customer served by Anthropic or OpenAI, so the Terms of Service are irrelevant. That's how I live in China and use almost-free Claude and GPT, which they don't sell here.

by woctordho
4/7/2026 at 4:25:53 PM
Wait, is this just something like openrouter, that routes your requests to different API providers, where you're paying per-token rates? Or is this taking advantage of fixed price plans, by offering an API interface for them, even though they're only supposed to be used with the official tools?by gruez
4/7/2026 at 4:40:12 PM
It's taking advantage of fixed price plans or even free plans.

by woctordho
4/7/2026 at 6:54:00 PM
Considering it has things like a Turnstile "handler", I'm assuming it attempts to abuse the free chat interface.

by Chu4eeno
4/7/2026 at 4:25:10 PM
That seems like Anthropic's problem.

by throwaway27448
4/7/2026 at 4:26:20 PM
It's quickly going to be your problem when they figure out you're breaching the ToS and ban your account.

by gruez
4/7/2026 at 7:45:43 PM
The whole point of these services is that it's not your account. It's very much Anthropic's problem, and honestly I don't care that they're getting ripped off.

by hananova
4/7/2026 at 7:40:05 PM
So... I make another account? Problem solved.

by throwaway27448
4/7/2026 at 4:00:00 PM
I really enjoy coding. I've built a number of projects, personal and professional, with Python, Rust, Java, and even some Scala in the mix. However, I've been addicted to Claude Code recently, especially with the superpowers skill. It feels like I can manifest code with my mind. When developing with Claude, I am presented with design dilemmas, architectural alternatives, clarification questions, things that really make me think about the problem. I then choose a solution, propose alternatives, discuss, and the code manifests. I came to realize that I enjoy the problem solving, not the actual act of writing the code. It's like I have almost cloned myself, and my clones are working on the projects and coming back to me for instructions. It feels amazing.

by michael_j_x
4/7/2026 at 4:12:42 PM
"Addicted". "Superpowers". "Manifest with my mind". "It feels amazing".

Why does it sound like you're on drugs? I know that sounds extremely rude, but I can't think of any other reasonable comparison for that language.
It's hard to take these kinds of endorsements seriously when they're written so hyperbolically, in terms of the same cliches, and focused entirely on how it makes you feel rather than what it does.
by throw4847285
4/7/2026 at 4:30:14 PM
Reading a bunch of posts related to Claude Code, where some folks voice genuine upset about rate limits and model intelligence while others seem very upset they can't get their fix because they've reached the five-hour limits, it's genuinely concerning how addictive LLMs can be for some folks.

by cbg0
4/7/2026 at 4:35:34 PM
I think the social aspect is underreported. I think this applies even for people using Claude Code and not just those treating an LLM as a therapist. In other words, I wonder how many of these people can't call their doctor to make an appointment or call a restaurant to order a pizza. And I say this as someone who struggles to do those things.

People claim that DoorDash and other similar apps are about efficiency, but I suspect a large portion is also a desire to remove human interaction. LLMs are the same. Or, in actuality, to create a simulacrum of human interaction that is satisfying enough.
by throw4847285
4/7/2026 at 4:41:08 PM
It's reflecting the value we get from it, relative to the cost of continuing if we switch to API pricing. It is genuinely upsetting to hit the limits when you face a substantial drop in productivity.

Imagine being an Uber driver and suddenly having to switch to a rickshaw for several hours.
by vidarh
4/7/2026 at 4:34:29 PM
"Superpowers" is the exact name of the specific Claude Code skill. The rest of your concerns are just me expressing my excitement; until recently I was very skeptical of the whole vibe-coding movement, but I have since done a complete 180.

by michael_j_x
4/7/2026 at 4:26:07 PM
The drug is the LLM coding. I kind of get it: when I was a kid and first got a computer, I felt the same way after I learned assembly language. The world is your oyster and you can do what felt like anything. It was why I spent almost every waking hour working on my computer. That wore off eventually, but I've spent some time on my backlog of projects with Claude and it feels a bit like the old days again.

by djmips
4/7/2026 at 4:16:10 PM
> Why does it sound like you're on drugs, specifically cocaine?

This has basically been what all of Silicon Valley sounds like to me for a few years now.
They are known for abusing many of the psycho-stimulants out there. The stupid “manifesto” Marc Andreessen put out a while back sounded more like Adderall-produced drivel than a coherent political manifesto.
by guzfip
4/7/2026 at 4:24:21 PM
If I were to go off into the woods, take a lot of drugs, and write my own crank manifesto, the central conceit would be that ADHD is the key to understanding the entirety of Silicon Valley. A bunch of people with stimulus-driven brains creating technologies that feed themselves and the rest of the populace more and more stimulation, setting a new baseline and requiring new technologies for higher levels of stimulation, in an endless loop until we all stimulate ourselves to death. Delayed gratification is the enemy.

This is similar to how we have already found hacks in our evolutionary programming to directly deliver high amounts of flavor without nutrition, and we've been working on ever more complex means of delivering social stimulation without the need for other humans (one of the key appeals of AI for many people, as well).
Of course these are all the ravings of a crank and should be ignored.
by throw4847285
4/7/2026 at 4:27:09 PM
No, you're right. But a million monkeys on cocaine may eventually provide value to shareholders.

by throwaway27448
4/7/2026 at 4:13:24 PM
That's like saying you enjoy composing music, but not playing music. Or creating stories, but not writing. Yes, they're different activities, but they're linked together. The former is creativity; the latter is a medium of transmission.

Code is notation, just like music sheets, or food recipes. If your interaction with anyone else is with the end result only (the software), then the code does not matter. But for collaboration, it does. When it's badly written, that just increases everyone's burden.
It's like forcing everyone to learn a symphony from the record instead of the sheets. And often a badly recorded version.
by skydhash
4/7/2026 at 4:46:54 PM
> That’s like saying enjoying composing music, but not enjoying playing music

Do you think that is impossible? There are plenty of people who enjoy composing music on things like trackers, with no intent of ever playing said music on an instrument.
I love coding, but I also like making things, and the two are in conflict: When I write code for the sake of writing code, I am meticulous and look for perfection. When I make things, I want to move as fast as possible, because it is the end-product that matters.
There is also a hidden presumption in what you've written: 1) that the code will be badly written. Sometimes it is, but that is the case for people too, and often it is better than what I would produce (say, when needing to produce something in a language I'm not familiar enough with); 2) that the collaboration will be with people manually working on the code. That is increasingly often not true.
by vidarh
4/7/2026 at 6:52:56 PM
> When I write code for the sake of writing code,

I struggle to understand that comparison. Code is notation; you can't write code for the sake of writing code. You have a problem and you instruct the computer how to solve it. And for the sake of your collaborators and your future self, you take care of how you write that. There's no real distinction IMO.
> There is also a hidden presumption in what you've written that 1) the code will be badly written
The computer does not really care about what programming language you're using or the names of your variables and other identifiers. People do. You can have correct code (decompiled assembly or minified JavaScript) and no one will want to collaborate on that.
Code is often the most precise explanation of some process. By being formal, it’s a truthful representation of the process. Specs and documentations can describe truth, but they do not embody it.
You can always collaborate with markdown files. But eventually someone will have to look at the code and understand what it does, because that’s the truth that matters. Anything else is prayers and hope. And if you’ve never cared about maintainability and quality of the code, it will probably be an arduous process.
by skydhash
4/8/2026 at 5:34:11 AM
> Code is notation, you can’t write code for the sake of writing code.

Of course you can.
> You have a problem and you instruct the computer how to do it.
And sometimes that problem is not the point. Just like sometimes I write for the joy of writing, not because I particularly care about a reader or the meaning of the output.
> The computers does not really care about what programming language you’re using and the name of your variables and other indentifiers. People do. You can have correct code (decompiled assembly or minified JavaScript) and no one will wants to collaborate on that.
This has no relation whatsoever to the sentence you quoted.
by vidarh
4/8/2026 at 1:44:26 PM
> This has no relation whatsoever to the sentence you quoted.

Maybe I wasn't clear. What I wanted to convey is that the use of programming languages, paradigms, patterns, and other software engineering principles is related to the human side of programming.
You can solve a problem correctly, but with the resulting code being hard to parse. Or you can write readable code but with bugs. And almost everyone prefers the latter.
So "badly written" means incomprehensible code, mostly due to the size of it in the case of agents. It's all right if no one cares about the code. But if you expect someone to review it, a changeset that even the author doesn't understand is slop.
by skydhash
4/8/2026 at 3:04:52 PM
So again, this presumes that the result must be incomprehensible. That is not at all my experience. It may become incomprehensible if you let it, just as is the case with human developers. It won't be if you enforce reviews, and your harness demands cleanups and sets clear standards.

by vidarh
4/7/2026 at 4:18:51 PM
Using your analogy, I enjoy composing music and enjoy playing music. I don't enjoy going through the motions of writing the notes on a piece of paper with a pen. I have to do it because people can't read my mind, but if they could, I would avoid it. Claude Code is like that. The code that gets written feels like the code that I would have written.

by michael_j_x
4/7/2026 at 4:05:01 PM
I feel this sentiment. It's more like pair programming with someone both smarter and dumber than you. If you're reviewing the code it is putting down, you're likely to spot what it's getting wrong and discuss it.

What I don't understand are the people who let it go overnight, or with whole "agent teams" working on software. I have no idea how they trust any of it.
by withinboredom
4/7/2026 at 4:05:16 PM
Yep, I want to make stuff. Writing the code by hand was just a means to an end.

by snarfy
4/8/2026 at 1:04:50 PM
I think some workflows are just that much faster with AI. And if not, I can spare the time for a prompt to get work done in parallel to the stuff I work on.

There is a cost though: the context switches between topics aren't free. But if I need to visualise something, I let an LLM create a page. If I have two tables of data that need to be joined/mapped, I let an LLM do the first shot; often that is enough.
I cannot even hope to reach that speed. It isn't a magic tool, but it really accelerates some tasks.
That speed allows for in-house solutions to become viable again, software that really adapts to specific business processes instead of some wonky ERP package that never really fit what you were trying to do.
I have our DB schemas checked into a Gitea repository, which our AIs can access to quickly ingest schema definitions. If data safety is an issue, use a local model. It is extremely beneficial if you can quickly establish context and let your AI deal with real problems. And it is quite good at that.
by raxxorraxor
4/7/2026 at 3:59:11 PM
> if instead of LLMs, this were some other tool or service, reactions to these events would have been far more pragmatic, with less of a reticence to invest time on in-house solutions when dealing with flaky vendors

As an example, a long-term goal at the employer I work for is exactly this: run LLMs locally. There's a big infrastructure backlog though, so it's waiting on those things, and hopefully we'll see good local models by then that can do what Claude Sonnet or GPT-5.3-Codex can do today.
by supriyo-biswas
4/7/2026 at 6:02:46 PM
Well, I wanted to understand this cool new tech everybody is talking about, so I bought a Max plan, experimented with various setups recommended by various experts, vibe-coded a few apps, and then threw it all away.
by benterix
4/7/2026 at 4:43:55 PM
I would love nothing more than ditching Claude for a local solution tomorrow. But it doesn't exist today, so it is what it is; you gotta keep up with the Joneses.

Maybe in 5 years we'll have an open-weights model in the "good enough" category that I can run on an RTX 9000 for 15k dollars or whatever.
by stefan_
4/7/2026 at 5:17:56 PM
I don't want to get too much into the details but I don't work in or for the Valley and I don't think I'll ever be able to afford that sort of expenditure on computing. A down payment on a car, or a vital medical procedure? Sure. I'm probably not alone here.by xantronix
4/7/2026 at 4:48:10 PM
People will always go along with a removal of friction, even against their own benefit. It's a natural bias; we have a preference for not spending energy.

It's why we pay stupid amounts for takeout when it's a button away, it's why we accept the issues that come with online dating rather than breaking the ice outside, it's why there have been decades of scams that claim to get you abs without effort...
LLMs are the ultimate friction removal. They can remove the gaps and mechanical work that regular programming can, but more importantly, they can think for you.
I'm convinced this human pattern is as dangerous as addiction. But it's so much harder to fight against, because who's going to be in favor of doing things with more effort rather than less? The whole point of capitalism is supposed to be that it rewards efficiency.
by torben-friis
4/7/2026 at 5:04:20 PM
> stupid amounts for takeout

Aw hell. You found my vice and my own cognitive dissonance here. If I want to truly stand by my convictions, I should probably cook more and log off. Waiting for signs that the tides are turning and that people are beginning to value a slower, more methodical approach again isn't doing anything in the current moment to stave off the genuine feelings of dread that have honestly led to some suicidal ideation.
(this is serious and not sarcasm, by the way)
by xantronix
4/7/2026 at 6:35:59 PM
Think that you're likely average!

By which I mean, it's likely you're not the only one feeling that dread. We're due for a counter-movement, and it's a matter of time before we see it flower.
by torben-friis
4/7/2026 at 5:30:48 PM
Stand strong. Plenty of us are working towards healthy offline communities.

by abnercoimbre
4/7/2026 at 6:21:57 PM
Thanks friend. I really appreciate that. This is genuinely hard right now.by xantronix
4/7/2026 at 3:59:50 PM
I think most people understand the need for subscriptions here. It is an ongoing massive compute cost, and that's what you're paying for. Your local system is not capable of running the massive amount of compute required for this. If it were, then we would see more people up in arms about it.

by dyauspitr
4/7/2026 at 4:06:06 PM
We could run it locally, but the problems that matter simply don't change.

We're paying for servers that sit idle at night; you can't find enough sysadmins for the current problems; the open-source models aren't as strong as the closed-source ones; providing context (as in googling) means you hook everything up to the internet anyway; where do you find the power, the cooling systems, and the space; and what do you do with the GPUs after 3 years?
Suddenly that $500/month/user seems like a steal.
by stephbook
4/7/2026 at 6:19:01 PM
Is this pace necessary? I feel like this is causing people to consider code to be disposable, and I think that both are a massive mistake.by xantronix
4/7/2026 at 4:35:47 PM
The great part is that you can always build your own self-hosted tools. There is nothing that can't be done at home; it's just a calculation of how much you're willing to spend.

Lately though, the RAM crisis is continuing and making things like this less feasible. But you can still use a lot of smaller models for coding and testing tasks.
Planning tasks I'd use a cloud-hosted model for, for now, because gemma4 isn't there yet and because GPU prices are still quite insane.
The cool and fun part is that with ollama and vllm you can just build your own agentic environment IDE, give it the tools you like, and make the workflow however you like. And it isn't even that hard to do, it just needs a lot of tweaking and prompt fiddling.
And on top of that: use Kiwix to self-host Wikipedia, Stack Overflow, and devdocs. Give the LLM a tool to use the search and read the pages, and your productivity skyrockets pretty quickly. No need for internet anymore, and a cheap Intel NUC is good enough for self-hosting a lot of containers already.
Source: I am building my own offline agentic environment for Golang [1] which is pretty experimental but sometimes it's also working.
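To illustrate the Kiwix idea above, a search tool for an agent can be as simple as building a URL against a local kiwix-serve instance. A sketch in Python (the host, the ZIM book name, and the `/search` parameters are assumptions based on kiwix-serve's web UI; verify them against your kiwix-serve version):

```python
from urllib.parse import urlencode

KIWIX_HOST = "http://nuc.local:8080"  # hypothetical self-hosted NUC


def kiwix_search_url(book: str, query: str, limit: int = 5) -> str:
    """Build a search URL for a self-hosted kiwix-serve instance.
    Parameter names are assumptions; check your kiwix-serve version."""
    params = urlencode(
        {"books.name": book, "pattern": query, "pageLength": limit}
    )
    return f"{KIWIX_HOST}/search?{params}"


url = kiwix_search_url("wikipedia_en_all", "binary search tree")
print(url)
# An agent tool would fetch this URL, strip the HTML, and hand the top
# result snippets back to the model as context, fully offline.
```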
by cookiengineer
4/7/2026 at 5:15:07 PM
I'm definitely all in on self-hosting, though I rent my compute and pay for bandwidth with Linode and storage with rsync.net.

The LLM bit though, personally, is just not for me.
by xantronix
4/7/2026 at 9:19:37 PM
[flagged]

by aaztehcy
4/7/2026 at 4:19:33 PM
> I get it. LLMs are cool technology.

It would be cool to run SOTA models on my own hardware but I can't. Hence, the subscription.
by DeathArrow
4/8/2026 at 12:40:08 PM
Think of the stupidest product you can think of, and you likely only know about it because people buy/bought it en masse. AI is no different from any other product; plenty will pay/adopt for exactly the reasons you said. There are powerful motivations for people to feel "ahead" of others (or more informed, or more "cool", or more knowledgeable, or more experienced, or whatever their ego requires), even if their situation is exactly the same.

That said, I'm not sure I follow your statement about less resistance to the development of internal tools, when the opposite seems to be the case; companies (or more specifically, developers) are perhaps too quick to think they can just vibe-code a replacement for any vendor in a weekend these days.
by therealpygon
4/7/2026 at 4:09:48 PM
Contingency plan? Just code without it, like before. AI could disappear today and I would be very disappointed, but it's not like I forgot how to code without it. If anything, I think it's made me a better programmer by taking friction away from the execution phase and giving me more mental space to think in the abstract at times, and that benefit has certainly carried over to my work, where we still don't have Copilot approved yet.

by jimmaswell
4/7/2026 at 5:11:01 PM
[dead]by xantronix
4/7/2026 at 3:55:37 PM
[dead]by onlyrealcuzzo
4/7/2026 at 4:02:42 PM
[dead]by garganzol