3/22/2026 at 11:45:24 PM
I enjoyed reading these perspectives; they are reasoned and insightful. I'm undecided about my stance on gen AI in code. We can't look only at the first-order, immediate effects; we also have to consider the social, architectural, power, and responsibility aspects.
In other areas, such as prose, literature, and email, I am firm in my rejection of gen AI. I read to connect with other humans; the price of admission is spending the time.
For code, I am not as certain. These days I don't usually see it as artwork or human expression; it is a technical artifact in which craftsmanship can be visible.
Will gen AI be the equivalent of a compiler and in 20 years everyone depends on their proprietary compiler/IDE company?
Can it even advance beyond patterns/approaches that we have built until then?
I have many more questions than answers, and both embracing and rejecting it feel foolish.
by ysleepy
3/22/2026 at 11:53:19 PM
I'm worried about a few big companies owning the means of production for software and tightening the screws.
by tracerbulletx
3/23/2026 at 1:18:31 AM
Given how fast the open-source models have been able to catch up to their closed-source counterparts, I think this will be a non-issue, at least on the model/software side. The hardware situation is a bit grimmer, especially with the recent RAM prices. Time will tell: if in 2–3 years we can get to a point where a 512GB–1TB VRAM / unified-memory rig with good fp8 support costs a few thousand dollars, not tens of thousands, we'll probably be good.
by TheCoreh
3/23/2026 at 6:10:45 PM
A few thousand dollars, plus the energy to run the system, is unaffordable for most of the world's developers. Not that it would be the first way in which the Global South is kept from closing the gap.
by lejalv
3/23/2026 at 12:52:20 AM
This has already happened, or is happening quite fast, with the cloud, where setting up your own data center, or even a few servers, can be treated as a crime against humanity if it doesn't use the whole Kubernetes/DevOps/Observability stack.
by geodel
3/22/2026 at 11:58:47 PM
This is my immediate concern as well. Sam said in an interview that he sees "intelligence" as a utility that companies like OpenAI would own and rent out.
by kvirani
3/23/2026 at 1:43:28 AM
The problem is that the cat is already out of the bag on the technology. Anyone can go over to Huggingface, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that, nor can he stop other organizations from releasing full open-weight/open-training-data models under permissive licenses, which let individuals modify those models as they see fit. Sam wishes he had control over that, but he doesn't, nor will he ever.
by Ucalegon
3/23/2026 at 2:16:20 AM
I'm thinking mainly of whether they manage to get some kind of regulation that makes open source impractical for commercial use, or hardware gets too expensive for small hobbyists and bootstrapped startups, or the large data-center models wildly outclass open-source models. I love using open-source models, but I can't do with them what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close, I don't know for sure. And how long will Chinese companies keep giving out their open-source models? Lots of unknowns.
by tracerbulletx
3/23/2026 at 2:49:45 AM
I know someone who just spent 10 days of GPU time on an RTX 3060 to build a DSLM [0] that outperforms existing, VC-backed (including by Sam himself) frontier-model wrappers. It runs on sub-$500 consumer hardware and provides 100% accurate work product, which those frontier-model wrappers cannot do. The fact that a two-man team in a backwater flyover town can do this speaks to how badly out of the bag the tech is. The money isn't going to be in building the biggest models possible with all of the data; it's going to be in building models that solve specific problems and can run affordably within enterprise environments, built on proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.

[0] https://www.gartner.com/en/articles/domain-specific-language...
by Ucalegon
3/23/2026 at 12:06:32 AM
Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it, since the end product ("intelligence") can be swapped out with little concern over who is providing it.
by arcanemachiner
3/23/2026 at 12:15:17 AM
> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it

I believe this is the natural end-state for LLM-based AI, but the danger of these companies even briefly being worth trillions of dollars is that they are likely to start caring about (and throwing lobbying money at) AI-related intellectual-property concerns that they never extended to anyone else while building their models. I don't think it is far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.
All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.
https://www.latimes.com/business/story/2026-02-13/openai-acc...
https://cloud.google.com/blog/topics/threat-intelligence/dis...
https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...
by georgemcbay
3/23/2026 at 12:25:51 AM
Which is a wildly hypocritical tack for them to take, considering how all their models were created, but I certainly wouldn't be surprised if they did.
by rescripting
3/23/2026 at 8:46:59 AM
In other words, it is an existential question for them. And given that some of the people running these companies have no moral convictions, expect a complete shitshow. Regulation. National-security classifications. Endless lawfare. Outright bribery. Anything and everything to retain their valuations.
by archagon
3/23/2026 at 7:23:02 PM
> For code, I am not as certain, nowadays I don't regularly see it as an artwork or human expression, it is a technical artifact where craftsmanship can be visible.

Humans are vital for non-craftsmanship reasons. Human curiosity and the ability to grok the big picture were vital in detecting the XZ backdoor attempt. If there is a wholesale AI takeover, I don't think such attacks would be detected five years from now.
AI will make future attacks much easier for several reasons: changes can arrive ostensibly from multiple personas that are actually controlled by the same entity; maintainers who are open to AI-assisted contributions will accept drive-by contributions, will likely have less time to review each one in depth, and will have narrower context than the attacker on each PR.
AI-generated code fucks with trust and reputation: I trust the code I produce [1], with or without AI, but I trust AI-generated code from others far less than their hand-written code. I'm not sure what the repercussions are yet.
1. I am biased and likely over-optimistic about the security and number of bugs.
by overfeed
3/23/2026 at 9:09:25 PM
> I'm undecided about my stance for gen AI in code.

Just make sure that you isolate whatever is generated so that, if you ever decide copyright means something to you after all, you don't end up with a worthless codebase.
by jacquesm