4/14/2026 at 4:11:47 PM
The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.
If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue, but we'll also grow a clearer understanding of what current AI can't do.
If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which I would remind people in its original, and generally better, formulation is simply an observation that there comes a point where you can't predict past it at all. ("Rapture of the Nerds" is a very particular possible instance of the unpredictable future, it is not the concept of the "singularity" itself.) Who knows what will happen.
by jerf
4/14/2026 at 5:14:28 PM
I model this as "stacked sigmoid curves". I have no reason to believe that any specific technological implementation will be exponential in impact vs sigmoidal.
However if we throw enough money and smart people at the problems and get enough value from the early sigmoid curves, the effective impact of a large number of stacked sigmoids could theoretically average to a linear impact, but if the sigmoids stay of a similar magnitude (on average) and appear at a higher velocity over time, you end up with an exponential made up of sigmoids*
* To be fair, it has been so long since I have done math that this may be completely incorrect mathematically - I'm not sure how to model it. However I think in practice more and more sigmoids coming faster and faster with a similar median amplitude is gonna feel very fast to humans very soon - whether or not it's a true exponential.
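(A minimal sketch of that footnote's intuition - a toy model I'm constructing here, not a rigorous result: give every technology a unit-magnitude sigmoid, but let the k-th one switch on around 3·ln(k+1), so arrivals accelerate. Then roughly e^(t/3) curves are active by time t, and the stack tracks an exponential.)

```python
import math

def sigmoid(t, onset, steepness=2.0):
    # One technology's impact: a unit-magnitude logistic curve around `onset`.
    return 1.0 / (1.0 + math.exp(-steepness * (t - onset)))

def stacked_impact(t, num_curves=100_000):
    # Curve k switches on around 3*ln(k+1): similar magnitude every time,
    # but arrivals speed up, so ~e^(t/3) curves are active by time t.
    total = 0.0
    for k in range(num_curves):
        onset = 3.0 * math.log(k + 1)
        if onset > t + 5:  # later curves contribute ~0, so stop early
            break
        total += sigmoid(t, onset)
    return total

for t in range(0, 25, 4):
    print(f"t={t:2d}  stacked={stacked_impact(t):9.1f}  e^(t/3)={math.exp(t / 3):9.1f}")
```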
I'm honestly having a very hard time thinking through the likely implications of what's currently happening over the next 2-10 years. Anyone who has the answers, please do share. I'm assuming from Cynefin that it's a perturbed complex adaptive system, so I can just OODA or experiment, sense and respond to what happens - not what I think might happen.
by peterbell_nyc
4/14/2026 at 5:17:53 PM
Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We easily have enough advancement to change the economy dramatically as is. The adoption isn't there yet.
by fny
4/14/2026 at 6:07:53 PM
Even after I explained the exact usage I was invoking, the attractive nuisance of all the science fiction that has gotten attached to the term still prevented you and Quarrelsome from reading my post as written.
I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.
All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.
by jerf
4/14/2026 at 7:11:14 PM
>the originator of the term ... rather good science fiction
I guess you are thinking of Vernor Vinge, but the term first appeared with John von Neumann in the 1950s:
>...on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue
by tim333
4/14/2026 at 6:25:07 PM
> The adoption isn't there yet.
It's worth noting that after ~50 years [edit: to preempt nitpicking, yes I know we've been using computers productively quite a bit longer than that, but that's roughly the time when the computerized office started to really gain traction across the whole economy in developed countries], we've only extracted a tiny proportion of the hypothetical value of computers, period, as far as benefits to the economy and potential for automation.
I actually think a lot of the real value of LLMs is "just" going to be making it feasible for the median worker to access a little (only a little!) more of that existing unrealized benefit.
My expectation is that we'll also harness only a tiny proportion of the hypothetical value of LLMs. We're just not good enough at organizing work to approach the level of benefit folks think of when they speculate about how transformational these things will be. A big deal? Yes. As big a deal as some suppose? Probably not.
[edit: in positive ways, I mean. I think we're going to see huge boosts in productivity for anti-social enterprises. I'd not want to bet on whether the development of LLMs is going to be net-positive or net-harmful to humanity, not due to the "singularity" or "alignment" or whatever, but because of the sorts of things they're most useful for]
by lamasery
4/14/2026 at 5:42:39 PM
Moreover the singularity makes this crass assumption that a single player takes all. It seems to ignore a future of many, many AI players, or many, many human + AI players instead.
Furthermore, regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans, who as a race are cognitively extremely diverse and adaptive.
by Quarrelsome
4/14/2026 at 5:58:58 PM
> regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans,
AI isn't one thing though. Really it's kind of a natural evolution of 'higher order life'. I think that something like an 'organization' (corps, governments, etc), once large enough, is at least as alive as a tardigrade. And for the people who are its cells, it is as comprehensible as the tardigrade is to any of its individual cells. So why wouldn't organizations over all of human history eventually 'evolve' a better information processing system than humans making mouth sounds at each other? (writing was really the first step on this). Really if you look at the last 12,000 years of human society as actually being the first 12,000 years of the evolutionary history of 'organizations', it kinda makes a lot of sense. And so much of it was exploring the environment, trying replication strategies, etc. And we have a lot of different organizations now, like an evolutionary explosion, where life finds various niches to exploit.
/schizoposting
by kaibee
4/14/2026 at 7:42:03 PM
> AI isn't one thing though.
What's the single in "singularity" doing then?
My issue is that I feel like some people treat intelligence as an integer value and make the crass assumption that "perfect intelligence" beats all other intelligences; I just think that's quite a thick way to think about it. A fool can beat an expert over the course of towards infinite hands because they happen to do something unexpected. Everything is a trade off and there's no such thing as perfect; every player has to take risk.
by Quarrelsome
4/14/2026 at 5:52:35 PM
That's kind of optimistic. For example, a misaligned super AI might engineer a virus that wipes out most of the 7 billion humans. That would put a damper on the adaptability of the human race...
by ikrenji
4/14/2026 at 7:43:17 PM
And then it might overfit to the lack of danger in that aftermath, leading to those fragmented humans doing something to overthrow it. For all we know this AI might get bored and decide to make a cure, or turn itself off, or anything really.
by Quarrelsome
4/14/2026 at 5:52:56 PM
The singularity does no such thing.
by fzzzy
4/14/2026 at 7:42:22 PM
Well, that's certainly cleared it all up.
by Quarrelsome
4/14/2026 at 6:14:30 PM
We've had enough advancement to change the economy for many decades, but the powers that be have insisted that, despite the lack of need, we continue to toil doing completely unnecessary work, because that's what's required to extend their fiefdoms.
Not that the singularity has any relevance here, either - except maybe that the robots take over, and the billionaires have missed the boat? I don't know.
by gilfaethwy
4/14/2026 at 7:06:13 PM
>Why is everyone so damn obsessed with the singularity?
I don't think most are - it tends to be regarded as rather cranky stuff, and a lot of people who use the term are a bit cranky.
Even so, AI maybe overtaking human intelligence is an interesting thing in human history.
by tim333
4/14/2026 at 7:09:02 PM
An interesting thing in AI history. For human history, it’s epochal.
by afthonos
4/14/2026 at 10:03:05 PM
> Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity.
And at the same time, we don't take advantage of the intelligence we already have.
by CamperBob2
4/14/2026 at 7:45:08 PM
>Why is everyone so damn obsessed with the singularity?
Because they are captives (to a system of incentives that is already "superintelligent" in comparison to any individual) who are hoping for salvation (something to make them free against their will; since it is their will which is captured).
Singularity, then, is the point at which the system itself "finally becomes able to imagine what it is like to be a person", and decides to stop torturing people. IMO, this is unlikely to work out like that.
by balamatom
4/14/2026 at 7:23:40 PM
Because it's happening no matter how much you'd rather ignore it or scoff at it.
by guelo
4/14/2026 at 7:09:14 PM
I've said it before, but it would be a mistake to just focus on the models, and ignore everything else that is changing in the ecosystem -- tools, harnesses, agents, skills, availability of compute, etc. -- things are changing very quickly overall.
The thing that is changing most rapidly, however, is the understanding of how to harness this insanely powerful, versatile, and unpredictable new technology.
Like, those who experimented deeply with LLMs could tell that even if all model development completely froze in 2024, humanity had decades' worth of unrealized applications and optimizations to explore, even with AI recursively accelerating that process of exploration. As a trivial example, way back in 2023 anyone who got broken code from ChatGPT, fed it the error message, and got back working code, knew agents were going to wreck things up very quickly. It wasn't clear that this would look like MD files, Claude Code, skills, GasTown, and YOLO vibe-coding, but those were "mere implementation details."
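(To make that loop concrete, here's roughly what it reduces to - a hypothetical sketch, with `llm()` standing in for whatever chat-completion API you use, not any particular product's interface:)

```python
import subprocess
import tempfile

def llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; assumed, not a real API.
    raise NotImplementedError("wire up your model provider here")

def code_fix_loop(task: str, max_attempts: int = 5) -> str:
    # Generate code, run it, and feed any error straight back to the model.
    prompt = f"Write a Python script that {task}. Reply with code only."
    for _ in range(max_attempts):
        code = llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run(["python", f.name], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # it ran cleanly; call it done
        # The 2023 trick: the error message becomes the next prompt.
        prompt = f"This code:\n{code}\nfailed with:\n{result.stderr}\nFix it. Code only."
    raise RuntimeError("no working code after max_attempts")
```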
I'm half-convinced that an ulterior goal of these AI companies in giving away so many cheap tokens (other than the lack of a better business model) is to encourage experimentation and overcome this "capability overhang."
Given all this, it's very hard to judge where we are on the curve, because there isn't just one curve, there are actually multiple inter-playing curves.
by keeda
4/14/2026 at 8:21:55 PM
Anyone who believes in materialism should recognize that there is still a lot of room to improve.by joquarky
4/14/2026 at 5:26:56 PM
Neither! A logistic curve is just an exponential with a carrying capacity - it is still an exponential! There is no reason to believe that AI capability, which grows logarithmically with the handwaved resources used on it (roughly, this is compute and training data), grows, has grown, or is growing exponentially!
I know this sounds like "the moderate position" to people, but you are accepting that something logarithmic is somehow in fact exponential (these are inverse functions of one another) based on no evidence or argument.
Here is Sam Altman, the one man in the world with the most incentive to overstate AI capability, accepting the extremely well-known logarithmic growth: https://blog.samaltman.com/three-observations
What we see in reality is a basically-linear growth pattern due to pushing exponentially more resources into this logarithm.
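(A toy illustration of that composition - my numbers, not the post's: if capability is log10 of resources and resources are scaled 10x per year, capability gains a constant step per year, i.e. linear.)

```python
import math

def capability(resources: float) -> float:
    # Toy assumption: capability grows logarithmically in resources.
    return math.log10(resources)

# Exponentially growing inputs: 10x more compute + data every year.
# log10(1e3 * 10**t) = 3 + t, so the output climbs by a fixed step: linear.
for year in range(6):
    resources = 1e3 * 10 ** year
    print(f"year {year}: resources={resources:.0e}  capability={capability(resources):.1f}")
```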
by juped
4/14/2026 at 4:38:14 PM
Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."
I think we're at a similar point with LLMs. The technical stuff is largely "done" - LLMs have closer to 10% than 10x headroom in how much they will technologically improve, we'll find ways to make them more efficient and burn fewer GPU cycles, and the cost will come down as more entrants mature.
But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state, mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.
by nostrademons
4/14/2026 at 4:52:08 PM
> Somewhere around 2005-2007, when people were wondering if the Internet was done
Literally who wondered that? Drives me nuts when people start off an argument with an obvious strawman. I remember the time period of 2005-2007 very well, and I don't remember a single person, at least in tech, thinking the Internet was done. I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that it was pretty obvious there was tons to do. E.g. we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example - social media was still in its nascent stages then, so it was obvious there would be a ton of work around that as well.
by hn_throwaway_99
4/14/2026 at 5:07:06 PM
If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.
There is, of course, the Paul Krugman quote from 1998 that by 2005 the Internet would be no more important than a fax machine. [1]
Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]
I remember, being at Google in ~2011, we used to laugh at the Wall Street analysts because they would focus on CPC numbers to forecast a valuation, which is important only if the number of clicks remains constant. We knew, of course, that total Internet usage was still growing quite rapidly and that queries had increased by roughly 4x over the 2009-2013 timeframe.
And a lot of people will say "If you're so smart, why aren't you rich?", and I'll point out that many people who assumed the Internet had lots of room to grow in 2005-2007 did end up very rich. Google stock has increased roughly 20x since 2007 (and 40x from its 2009 lows). Meta is now worth $1.6T, a 100x increase over the $15B valuation that everyone thought was insane in 2007. Amazon is also up about 100x. It would not be possible to take the other side of the trade and make these kinds of profits if the majority of people did not think the Internet was largely over.
[1] https://www.snopes.com/fact-check/paul-krugman-internets-eff...
by nostrademons
4/14/2026 at 6:38:03 PM
> If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.
Didn't we only pass 50% of households having a home PC in like... '00 or '01 or something? And I mean just in the US, which was way ahead of the curve.
> Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]
I actually think that's correct... if the smartphone hadn't taken off right after that. The "consumer" Internet and computing, the attention economy, etc., functionally is the smartphone. A desktop computer and even a laptop aren't in use when driving, at the store, at the park, every moment on vacation, etc. It'd still only be nerds lugging computers everywhere if nobody'd managed to make a smartphone capable enough and pleasant enough to use to expand the market beyond the set of folks who might have had a beeper in earlier years (the part of the market Blackberry was addressing). A gigantic proportion of the "GDP of the Internet", if you will, exists because smartphones exist.
by lamasery
4/14/2026 at 5:00:47 PM
> I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that it was pretty obvious there was tons to do
Almost definitely professional ragebaiters in Wired or Time or whatever, yeah.
by magicalist
4/14/2026 at 5:21:40 PM
I was also in tech at that time; in fact I worked for Google during that period, and people definitely thought that the Internet had reached its peak. So many criticisms back then were not just about peak Internet but about all these companies blowing money on unproven business models: they were unsustainable, unprofitable, it was all just hype.
And then of course the great recession hit, tech companies took a massive blow, Microsoft, Google, Intel, Apple and other tech giants lost 50% of their stock value in a matter of months. You don't lose 50% of your value because people think you have a promising future.
It wouldn't be until the explosive rise of smartphones and close-to-zero interest rates that sentiment turned around and tech companies ballooned in value in what would end up being the longest bull run in U.S. history.
by Maxatar
4/14/2026 at 5:08:31 PM
I agree with the gist of your points, but not much with these two:
>followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off.
These will be rare boutique affairs. Based on how mass production and cheap shipping played out, most people value price over quality. The economy will rearrange itself around those savings, making boutique products and services expensive.
>mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit.
We have this today. And that's not a "same as it ever was" dismissal. Today, there are a lot of terminally online people posting the equivalent of propaganda (and actual propaganda). Social media pushes hot takes in audiences' faces, a portion of them reshare it, and it spreads exponentially. The only limitation to propaganda today is how much time the audience spends staring at the "correct" content provider.
by vharuck
4/14/2026 at 4:47:09 PM
You are very strong on the "slop" bias. Why?
In managing a large-to-enterprise-sized code base, I experience the opposite. I can guarantee a much more homogeneous quality of the code base.
It is the opposite of slop I am seeing. And that at a lower cost.
Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.
by tossandthrow
4/14/2026 at 4:51:11 PM
Which company do you work at so we can avoid your migrated endpoints?
by chaps
4/14/2026 at 5:31:12 PM
All big tech companies are mandating that employees use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.
by bsmith
4/14/2026 at 4:52:46 PM
Wtf. You don't even know what the migration was about?
by tossandthrow
4/14/2026 at 4:56:09 PM
I mean, I'm always down for learning something new. But I hope what I learn includes the name of the company I'd like to avoid.
by chaps
4/14/2026 at 5:02:31 PM
Your tone is in conflict with the statement that you are curious.
by tossandthrow
4/14/2026 at 5:09:00 PM
It's because you're deflecting. :)
by chaps
4/14/2026 at 5:18:22 PM
Deflecting from what? Telling the company name so you can avoid it due to your incredibly curious nature?
by tossandthrow
4/14/2026 at 5:37:22 PM
Sigh.
Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, thousands of alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it with just a tad more diligence. Or hell, maybe even consider lying about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to eventual schadenfreude.
My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes that these systems can break in.
So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?
by chaps
4/14/2026 at 7:06:17 PM
I definitely did not say anything about ambiguous endpoints.
The migration was relatively straightforward and could likely have been implemented as automatic code transforms.
What I did say was that it was complex.
by tossandthrow
4/14/2026 at 7:22:03 PM
Yikes. Have a good one.
by chaps
4/14/2026 at 4:55:01 PM
One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?
by apsurd
4/14/2026 at 4:59:41 PM
What is the collateral damage? In ensuring that a bunch of endpoints use the same structure using LLMs?
by tossandthrow
4/14/2026 at 5:08:59 PM
Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.
This is about "slop bias". I'd wager that empowering everyone, especially power-positions, to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?
I'm stuck on the power-position thing because I'm living it. I'm pro-AI but there are AI-transformation waves coming in and mandating top-down. From their green-field position it's undeniable crush-mode killin' it. Maintenance of all kinds is separate and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line-engineers and sales and customer service reps that will bear the reality.
by apsurd
4/14/2026 at 5:16:40 PM
> Maybe AI will address everything at every level.
I think this is the idea you need to entertain / ponder more on.
I largely agree with you, what I don't agree with is the weighting about the individual elements.
My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.
We get to move faster, also because we can shorten deprecation tails and generally keep code bases more fit more easily.
In particular, we have dropped the external backoffice tool, so we have a single mono repo.
AI does tasks all the way from the infrastructure (setting policies on resources) to the frontends.
Equally, if resources are not addressed in our codebase, we know with 100% certainty that they are not in use, and they can be cleaned up.
Unused code audits are being done on a weekly schedule. Like our sec audits, robustness audits, etc.
by tossandthrow
4/14/2026 at 5:31:59 PM
Yeah, the more I debate the AI-lovers, the more I can empathize with the possibility that it may very well turn out that everything is an Agent. Encodable.
I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.
It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!
by apsurd
4/14/2026 at 7:00:37 PM
I definitely don't identify as an AI lover. For me, year 0 of AI was February 6th, 2026, and the release of Opus 4.6.
Until that day we had roughly zero AI code in the code base (additions or subtractions). So in all reasonable terms I am a late adopter.
For code bases, AI does not concern me. We have for quite some time worked with systems that are too complex for single people to comprehend, so this is a natural extension of abstraction.
On the other hand, I am super concerned about AI and society: the impact on human well-being from "easy" AI relations over difficult human connection, and the continued human alienation and relational violation (I think the "woke" discourse will go on steroids).
I think society is going to be much less tolerant. And that frightens me.
by tossandthrow
4/14/2026 at 6:58:01 PM
>> Works flawlessly, debt principal down.
I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-scary spectrum. I can only assume you're talking "compiles-runs-ship it".
The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.
by skeeter2020
4/14/2026 at 7:59:31 PM
You are right, and it happens that the output looks decent.
Code idioms, or patterns if you will, are largely our solution.
We have small pattern/[pattern].md files throughout the code base where we explain how certain things should be done.
In this case, the migration was a normalization to the specific pattern specified in the pattern file for the endpoints.
Semantics were not changed and the transform was straightforward. Just not a task I would have been able to justify spending time on from a business perspective.
Now, the more patterns you have, and the more your code base adheres to these patterns, the easier you can verify the code (as you recognize the patterns) and the easier you can call out faulty code.
It is easier to hear an abnormality in music than in atmospheric noise. It is the same with code.
by tossandthrow
4/14/2026 at 5:18:34 PM
Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.
by peterbell_nyc
4/14/2026 at 5:39:08 PM
I take your point, but then it makes me think: is there no more value in diversity?
[Philosophy disclaimer] So in a code base diversity is probably a bad idea, ok that makes sense. But in an agentic world, if everything is run through the Perfect Harness then humans are intentionally just triggers? Not even that - like, what are humans even needed for? Everything can be orchestrated. I'm not against this world; this is an ideal outcome for many and it's not my place to say whether it's inevitable.
What I'm conflicted on is whether it even "works" in terms of outcomes. Like, have we lost the plot? Why have any humans at all? 1-person billion-dollar company incoming. Software aside, is the premise even valid? 1 person's inputs multiplied by N thousand agents -> ??? -> profit
by apsurd
4/14/2026 at 8:06:07 PM
These are the right questions to ask.
by tossandthrow
4/14/2026 at 4:57:32 PM
> Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.
This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.
If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?
by hliyan
4/14/2026 at 5:02:04 PM
It is a part of the smoke testing process right now.
But we run 90% test coverage, e2e tests, etc. None of which had been altered, and all are passing.
Migrations are generally not that high risk if you have a code base in alright shape.
by tossandthrow
4/14/2026 at 5:54:45 PM
Ironically, the post saying it is not slop sounds exactly like AI slop.
by bluecheese452
4/14/2026 at 7:08:53 PM
Too many spelling errors for that to be slop...
by tossandthrow
4/14/2026 at 4:37:55 PM
> The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.
Even using the models we have today, we have revolutionized VFX, video production, and graphics design.
Similarly, many senior software engineers are reporting 2-10x productivity increases.
These tools are some of the most useful tools of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces that let experts leverage and harness the speed-up, and shape and control the outcomes, we're going to be in a very good spot.
These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we have already.
We don't even need new models.
by echelon
4/14/2026 at 5:30:08 PM
> Similarly, many senior software engineers are reporting 2-10x productivity increases.
But are they making 2-10x compensation compared to before these tools? If not, these tools are not really useful to you, they are useful to your employer. The most shocking thing I find about LLM-assisted development is how gleefully we are just handing all this value over to our employers, simultaneously believing that they are great because we're producing more. Totally bonkers!
by ryandrake
4/14/2026 at 5:34:51 PM
> handing all this value over to our employers, simultaneously believing that they are great because we're producing more.
You could turn the tables and say that you can now launch your own business with far fewer resources.
Who needs financial capital if you can do it all with solo / small team labor capital?
Gossip Goblin ditched his studio and now a16z is trying to throw him money, which he's turned down. He's turning everyone down.
https://www.youtube.com/watch?v=-Rzl7nUdEs4
Dude is legit talented and doesn't need studio capital anymore.
This is the end of the Hollywood nepotism pyramid, where limited production capital was available to only a handful of directors.
We're kind of at the start of a revolution here. I'd be way more worried if I were Disney or Paramount.
Couldn't you take a sabbatical and end it with a brand new SaaS you own and control? That's entirely within reach now.
The people this is going to hurt are the ICs that don't have a go-getting type personality where they take full-stack ownership: marketing, branding, design, customer relationships, etc. If you can do those things, you're going to be a rock star with total autonomy.
You ought to see what the indie game devs are doing with AI (when they aren't getting yelled at on Steam by the haters). It's legitimately incredible. Game designers are taking on full-stack ownership over the entire experience, and they're making some incredible stuff.
by echelon
4/14/2026 at 5:55:50 PM
> If you can do those things, you're going to be a rock star with total autonomy.
What percentage of developers can do these things? 1%? 0.1%? 0.01%? A very small percentage of developers have the desire to take on the full stack, the temperament of good entrepreneurs, the product judgment of good Product Managers, and the ability of good Project Managers to juggle dependencies and timeframes. What about the rest of them? The remaining 99+% of us are just handing value over to our employers and getting a 5% raise in return--if we're lucky.
So, the fact that a small percentage of rockstar developers can capture the full value of AI-assisted development reinforces the point that a small number of people/businesses are capturing that value. The vast majority of workers are not capturing any value.
by ryandrake
4/14/2026 at 6:22:41 PM
So... a tiny fraction of people get to capture the value again, and at even greater environmental (and thus societal) cost than before? Wow, what a world.
by gilfaethwy
4/14/2026 at 4:23:19 PM
"given 10-ish years at least to adapt, we probably can"Social media would like a word...
by forgetfreeman
4/14/2026 at 4:48:43 PM
We can adapt by shutting down social media. We don't really need that. It's been pretty bad since before the AI wave took off.
by 8n4vidtmkvmk
4/14/2026 at 5:32:10 PM
We needed a better phone book; we ended up in a world where most of our fellow citizens live in a fucking casino.
by fellowniusmonk
4/14/2026 at 4:35:58 PM
We aren’t anywhere near AGI. They’ve consumed the entirety of human knowledge and poisoned the well, and it still can’t help but tell you to walk to the car wash.
A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.
by MagicMoonlight
4/14/2026 at 5:18:04 PM
Sentience isn't intelligence.
by pixl97
4/14/2026 at 4:28:33 PM
We are at the bottom. It's just the start.
In AI terms, we are in the pre-Pentium 4 era.
by faangguyindia
4/14/2026 at 4:30:45 PM
And you have evidence as basis for this very confident statement... where?
by fnimick
4/14/2026 at 4:34:42 PM
Intuition. It comes from the spiritual awakening and being aware of your consciousness. Only time will prove what turns out to be right.
by faangguyindia
4/14/2026 at 4:37:59 PM
You worship the AI?
by sophacles
4/14/2026 at 4:41:22 PM
I see AI as having great utility, and we'll figure out ways to better it. If I had any power, I would run nuclear power plants to power AI datacenters and find other near-infinite sources of energy to create deeper and deeper AIs. This level of AI tech is in its infancy; that is evidently clear. People are assuming it will stall soon and won't go beyond a certain point. I don't believe this at all; I believe it will go much, much farther than this.
by faangguyindia
4/14/2026 at 6:14:35 PM
An LLM is never, ever going to find "other near infinite sources of energy". All it can do is predict the next word in an effort to make the user stop prompting it. That's all it does. It does not have the ability to find solutions to the world's problems.
by leptons
4/14/2026 at 5:21:06 PM
Weird comparison - the P4 was a major flop out of the gate (Rambus, anyone?) and, at least by any good metric, took three revisions (P4C - hyper-threading) to come out where it should have: ahead of its predecessor. The Pentium 3 before it, which you are perhaps referring to, was the peak of its era. So... it's going downhill, right? Or what are you even saying?
by hypercube33
4/14/2026 at 5:28:30 PM
I’m seeing these extremely short but supremely confident hot takes with nothing to back them up on HN more and more these days. It’s like X is leaking.
by ofjcihen