2/14/2026 at 4:45:10 PM
Kickstarter is full of projects like this where every possible shortcut is taken to get to market. I’ve had good success with a few Kickstarter projects, but I’ve been very selective about which ones I support. More often than not I can identify when a team is in over their heads or thinks they’re just going to figure out the details later, after the money arrives.
For a period of time it was popular for the industrial designers I knew to try to launch their own Kickstarters. Their belief was that engineering was a commodity they could hire out to the lowest bidder after they got the money; the product design and marketing (their specialty) was the real value. All of their projects either failed or cost them more money than they brought in, because engineering was harder than they thought.
I think we’re in for another round of this now that LLMs give the impression that the software and firmware parts are basically free. All of those project ideas people had previously that were shelved because software is hard are getting another look from people who think they’re just going to prompt Claude until the product looks like it works.
by Aurornis
2/14/2026 at 5:00:29 PM
At this point, I trust LLMs to come up with something more secure than the cheapest engineering firm for hire.
by lr4444lr
2/14/2026 at 6:34:22 PM
"Anyone else out there vibe circuit-building?"
by nozzlegear
2/14/2026 at 5:14:25 PM
The cheapest engineering firms you hire are also using LLMs. The operator is still a factor.
by Aurornis
2/14/2026 at 5:43:42 PM
Yeah, but they’ll add another layer of complexity over doing it yourself.
by jama211
2/14/2026 at 6:01:48 PM
The people doing these Kickstarters are outsourcing the work because they can’t do it themselves. If they use an LLM, they don’t know what to look for or even ask for, which is how they get these problems where the production backend uses shared credentials and has no access control.
The LLM got it to a “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
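To make the failure concrete, here is a minimal, purely illustrative sketch (all names and data are hypothetical) of the difference between the shared-credential backend described above and one with per-user access control:

```python
# Hypothetical records store; "owner" is what access control checks against.
DB = {
    "r1": {"owner": "alice", "data": "alice-secret"},
    "r2": {"owner": "bob", "data": "bob-secret"},
}

SHARED_KEY = "hunter2"  # anti-pattern: one secret baked into every device


def fetch_record_insecure(request):
    # Authenticates the *fleet*, not the user: anyone holding the shared
    # key (i.e. anyone who opens one device) can read any record_id.
    if request["api_key"] != SHARED_KEY:
        raise PermissionError("bad key")
    return DB[request["record_id"]]


def fetch_record_with_access_control(request, sessions):
    # Per-user session token plus an ownership check: a caller can only
    # read records it actually owns.
    user = sessions.get(request["token"])
    if user is None:
        raise PermissionError("invalid token")
    record = DB[request["record_id"]]
    if record["owner"] != user:
        raise PermissionError("not your record")
    return record
```

The insecure version is exactly what "looks like it works" in a demo: every request succeeds, and nothing reveals that any customer can pull any other customer's data.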
by Aurornis
2/14/2026 at 6:33:48 PM
You're still not following. The parents are saying they'd rather vibe code themselves than trust an unproven engineering firm that does(n't) vibe code.
by caminante
2/14/2026 at 9:26:11 PM
> they'd rather vibe code themselves than trust an unproven engineering firm
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools. Despite their simple appearance, and the protestations of their biggest fans and haters alike, the dominant factor in an LLM tool's effectiveness on a problem is still whether or not you're holding it wrong.
by TeMPOraL
2/14/2026 at 7:45:59 PM
LLMs definitely write more robust code than most. They don't take shortcuts or resort to ugly hacks. They have no problem writing tedious guards against edge cases that humans brush off. They also keep comments up to date and obsess over tests.
by Kiro
2/14/2026 at 9:34:55 PM
I had 5.3-Codex take two tries to satisfy a linter on TypeScript type definitions. It gave up, removed the code it had written that directly accessed the correct property, and replaced it with a new function that did a BFS through every single field in the API response object, applying a "looksLikeHttpsUrl" regex and hoping the first valid URL with https:// would be the correct key to use.
On the contrary: the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
by BoorishBears
2/14/2026 at 7:54:48 PM
Interesting and completely wrong statement. What gave you this impression?
by devmor
2/14/2026 at 8:16:00 PM
The discourse around LLMs has created this notion that humans are not lazy and write perfect code. They get compared to an ideal programmer instead of real devs.
by Kiro
2/14/2026 at 9:46:30 PM
This. The hacks, shortcuts and bugs I saw in our product code after I got hired were things every LLM would tell you not to do.
by joe_mamba
2/14/2026 at 9:11:12 PM
Amen. On top of that, especially now, with good prompting you can get closer to that ideal than you think.
by gxs
2/14/2026 at 8:43:56 PM
LLMs at best asymptotically approach a human doing the same task. They are trained on the best and the worst. Nothing they output deserves faith other than what can be proven beyond a shadow of a doubt with your own eyes and tooling. I'll say the same thing to anyone vibe coding that I'd say to the programmatically illiterate: trust this only insofar as you can prove it works, and you can stay ahead of the machine. Dabble if you want, but to use something safely enough to rely on, you need to be 10% smarter than it is.
by salawat
2/14/2026 at 8:20:32 PM
I know, right? I kept waiting for a sarcasm tag at the end.
by dylanowen
2/14/2026 at 8:23:55 PM
Right and wrong don't exist when evaluating subjective quantifiers.
by majorchord
2/14/2026 at 5:13:17 PM
And won't the cheapest engineering firm use LLMs as well, wherever possible?
by lukan
2/14/2026 at 7:34:18 PM
The cheapest engineering firm will turn out to be headed up by an openclaw instance.
by fc417fc802
2/14/2026 at 5:18:02 PM
Fun fact: LLMs come in "cheapest and useless" and "expensive but actually does what's being asked" varieties, too. So, will they? Probably. Can you trust the kind of LLM that you would use to do a better job than the cheapest firm? Absolutely.
by TheRealPomax
2/14/2026 at 5:02:51 PM
this.
by minimalthinker