3/28/2026 at 9:15:24 PM
I've always said this, but AI will win a Fields Medal before being able to manage a McDonald's. Math seems difficult to us because it's like using a hammer (the brain) to twist in a screw (math).
LLMs are discovering a lot of new math because they are great at low-depth, high-breadth situations.
I predict that in the future people will ditch LLMs in favor of AlphaGo-style RL done on Lean syntax trees. These should be able to think on much larger timescales.
Any professional mathematician will tell you that their arsenal is ~10 tricks. If we can codify those tricks as latent vectors, it's GG.
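A minimal sketch of what "AlphaGo-style RL on Lean syntax trees" could look like. This is a toy: the proof state is just an integer goal (a real system would carry Lean terms), and the "tactics" are made-up transformations; the UCB-driven Monte Carlo tree search over tactic applications is the actual AlphaGo-style ingredient.

```python
import math, random

# Toy stand-in for a Lean proof state: the "goal" is an integer, and a
# proof is complete when it reaches 0. Real states would be Lean syntax
# trees; actions would be Lean tactic applications. All names invented.
TACTICS = {
    "halve": lambda g: g // 2,
    "dec":   lambda g: g - 1,
    "noop":  lambda g: g,
}

class Node:
    def __init__(self, goal):
        self.goal = goal
        self.children = {}   # tactic name -> Node
        self.visits = 0
        self.value = 0.0

def ucb(parent, child, c=1.4):
    # Upper confidence bound: unvisited children are explored first.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def rollout(goal, depth=20):
    # Random playout: reward 1 if the goal closes within the budget.
    for _ in range(depth):
        if goal == 0:
            return 1.0
        goal = TACTICS[random.choice(list(TACTICS))](goal)
    return 0.0

def search(root_goal, iters=500):
    root = Node(root_goal)
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend by UCB until an unexpanded or closed node.
        while node.children and node.goal != 0:
            name = max(node.children,
                       key=lambda t: ucb(node, node.children[t]))
            node = node.children[name]
            path.append(node)
        # Expansion: try every tactic once.
        if node.goal != 0 and not node.children:
            for name, fn in TACTICS.items():
                node.children[name] = Node(fn(node.goal))
        # Simulation and backpropagation.
        reward = rollout(node.goal)
        for n in path:
            n.visits += 1
            n.value += reward
    # Return the most-visited first tactic, AlphaGo-style.
    return max(root.children, key=lambda t: root.children[t].visits)

print(search(37))
```

The "codified tricks" of the comment would replace the uniform rollout policy with a learned prior over tactics, which is exactly where AlphaGo plugs in its policy network.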
by vatsachak
3/28/2026 at 9:23:24 PM
Tricks are nothing but patterns in the logical formulae we reduce. Ergo, these are latent vectors in our brain. We use analogies, like geometry, in order to use Algebraic Geometry to solve problems in Number Theory.
An AI trained on Lean syntax trees might develop its own weird versions of intuition that might actually properly contain ours.
If this sounds far-fetched, look at chess. I wonder if anyone has dug into Stockfish using mechanistic interpretability.
by vatsachak
3/28/2026 at 10:38:22 PM
Some DeepMind researchers used mechanistic interpretability techniques to find concepts in AlphaZero and teach them to human chess Grandmasters: https://www.pnas.org/doi/10.1073/pnas.2406675122
by myffical
3/28/2026 at 10:58:02 PM
This argument, that LLMs can develop crazy new strategies using RLVR on math problems (like what happened with chess), turns out to be false without a serious paradigm shift. Essentially, the search space is far too large, and the model will need help to explore better, probably with human feedback.
by hodgehog11
3/28/2026 at 11:44:50 PM
The search space for the game of Go was also thought to be too large for computers to manage.
by narrator
3/29/2026 at 7:18:51 AM
It still is [1].

[1] https://www.vice.com/en/article/a-human-amateur-beat-a-top-g...
by thesz
3/29/2026 at 11:19:32 AM
The blind-spot-exploiting strategy you link to was found by an adversarial ML model...
by stalfie
3/29/2026 at 1:19:27 AM
Yes, and making a horse-drawn cart drive itself was thought to be impossible, so why don't we have faster-than-light travel yet...
by sealeck
3/29/2026 at 6:43:48 AM
Yes, but "the search space is too large" is something that has been said about innumerable AI problems that were then solved. So it's not unreasonable that one doubts the merit of the statement when it's said for the umpteenth time.
by Finbel
3/29/2026 at 7:27:19 AM
I should have been more specific, then. The problem isn't that the search space is too large to explore. The problem is that the search space is so large that the training procedure actively prefers to restrict the search space to maximise short-term rewards, regardless of hyperparameter selection. There is a tradeoff here that could be ignored in the case of chess, but not for general math problems.

This is far from unsolvable. It just means that the "apply RL like AlphaGo" attitude is laughably naive. We need at least one more trick.
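A toy illustration of that tradeoff (my own construction, not from the thread): a two-armed bandit trained with a plain REINFORCE update, where one "strategy" pays small rewards often and the other pays large rewards rarely but has higher expected value. The reward-greedy update tends to concentrate probability mass and shrink policy entropy, which is the search-space restriction being described.

```python
import math, random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Arm 0: small reward often (easy lemma-grinding, EV 0.3).
# Arm 1: large reward rarely (deep novel line of attack, EV 0.5).
def reward(arm):
    if arm == 0:
        return 1.0 if random.random() < 0.3 else 0.0
    return 10.0 if random.random() < 0.05 else 0.0

random.seed(0)
logits = [0.0, 0.0]
lr = 0.1
for step in range(2000):
    p = softmax(logits)
    arm = 0 if random.random() < p[0] else 1
    r = reward(arm)
    # REINFORCE: push probability toward whatever paid off just now.
    for a in (0, 1):
        grad = (1.0 if a == arm else 0.0) - p[a]
        logits[a] += lr * r * grad

p = softmax(logits)
print(f"final policy {p}, entropy {entropy(p):.3f}")
```

The frequently rewarded arm keeps getting reinforced before the rare-payoff arm is sampled enough to be valued correctly, so the policy's entropy drops and the rare strategy may never be explored, regardless of the learning rate chosen.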
by hodgehog11
3/30/2026 at 12:12:54 AM
The other trick could be bootstrapping through mathlib. As you said, brute-forcing the search space as the starting procedure would take way too long for the AI to build intuition.
But if we could give it a million or so lemmas of human math, that would be a great starting point.
by vatsachak
3/29/2026 at 8:20:25 AM
I agree that LLMs are a bad fit for mathematical reasoning, but it's very hard for me to buy that humans are a better fit than a computational approach. Search will always beat our intuition.
by throwaway27448
3/29/2026 at 11:47:14 AM
Yes and no. I think we have vastly underestimated the extent of the search space for math problems. I also think we underestimate the degree to which our worldview influences the directions in which we attempt proofs. Problems are derived from constructions that we can relate to, often physically. Consequently, the technique in the solution often involves a construction that is similarly physical in its form. I think measure theory is a prime example of this, and it effectively unlocked solutions to a lot of long-standing statistical problems.
by hodgehog11
3/29/2026 at 6:04:57 PM
That linked article says it's about RLVR but then goes on to conflate other RL with it, and it doesn't address much of the core thinking in the paper it was partially responding to, which had been published a month earlier [0] and which laid out findings and theory reasonably well, including work that runs counter to the main criticism in the article you cited, i.e., performance at or above base models only being observed with low-K examples.

That said, reachability and novel strategies are somewhat overlapping areas of consideration, and I don't see many ways in which RL in general, as mainly practiced, improves upon models' reachability. And even when it isn't clipping weights, it's just too much of a black-box approach.
But none of this takes away from the question of raw model capability on novel strategies, only how it relates to RL.
by ineedasername
3/28/2026 at 9:33:02 PM
Stockfish's power comes mostly from search, and the ML techniques it uses are mainly about better search, i.e. pruning branches more efficiently.
by slopinthebag
3/28/2026 at 9:37:16 PM
The weights must still have some understanding of the chess board. Though there is always the chance that it makes no sense to us.
by vatsachak
3/28/2026 at 9:53:22 PM
Why must it involve understanding? I feel like you're operating under the assumption that functionalism is the "correct" philosophical framework without considering alternative views.
by emp17344
3/29/2026 at 10:29:56 AM
There is no understanding; the weights are selected based on better fit. Our cells have no understanding of optics just because they have the eyes coded in their DNA.
by PowerElectronix
3/28/2026 at 9:41:18 PM
Even that is probably too much. It has no understanding of what "chess" is, or what a chess board is, or even what a game is. And yet it crushes every human with ease. It's pretty nuts haha.
by slopinthebag
3/28/2026 at 9:51:35 PM
Actually, the neural net itself is fairly imprecise. Search is required for it to achieve good play. Here's an example of me beating Stockfish 18 at depth 1: https://lichess.org/XmITiqmi
by anematode
3/28/2026 at 9:54:20 PM
Chess is just a simple mathematical construct, so that's not surprising.
by Sopel
3/28/2026 at 9:44:14 PM
Does Stockfish have weights or use a neural net? I know older versions did not.
by hollerith
3/28/2026 at 9:50:35 PM
Yes.
by Sopel
3/28/2026 at 9:51:06 PM
The ML techniques it uses are only about evaluation, but you were close.
by Sopel
3/28/2026 at 11:00:43 PM
As a professional mathematician, I would say that a good proof requires a very good representation of the problem, and then pulling out the tricks. The latter part is easy to get working using LLMs; they can do it already. It's the former part that still needs humans, and I'm perfectly fine with that.
by hodgehog11
3/30/2026 at 12:03:27 AM
I guess I'm using Rota's vocabulary, where he implicitly uses the word "trick" to mean representation.
by vatsachak
3/29/2026 at 12:00:12 AM
But are you OK with the trendline of AI improvement? The speed of improvement indicates humans will only get further and further removed from the loop.

I see posts like yours all the time, comforting themselves that humans still matter, and every time, people like you are describing a human owning an ever-shrinking section of the problem space.
by threethirtytwo
3/29/2026 at 7:38:52 AM
I used to be worried, but not so much anymore.

It used to be the case that the labs were prioritising replacing human creativity, e.g. generative art, video, writing. However, they are coming to realise that just isn't a profitable approach. The most profitable goal is actually the most human-oriented one: the AI becomes an extraordinarily powerful tool that may be able to one-shot particular tasks. But the design of the task itself is still very human, and there is no incentive to replace that part. Researchers talk a bit less about AGI now because it's a pointless goal. Alignment is more lucrative.
Basically, executives want to replace workers, not themselves.
by hodgehog11
3/29/2026 at 12:41:46 PM
On the contrary, the depth and breadth we're now able to handle agentically in software is growing very rapidly, to the point where in the last 3 months the industry has undergone a big transformation and our job functions are fundamentally starting to change. As a software engineer, I feel increasingly like AGI will be a real thing within the next few years, and it's going to affect everyone.
by latentsea
3/29/2026 at 7:55:23 PM
I don't write code anymore. I don't use IDEs anymore. The agent writes code. My job is to manage AI now.

The paradigm shift has already happened to me, and there will be more shifts to come.
by threethirtytwo
3/30/2026 at 12:20:12 AM
> to the point where in the last 3 months the industry has undergone a big transformation

Oh... this again.
by k33d
3/29/2026 at 12:22:39 AM
Humans needing to ask new questions out of curiosity push the boundaries further, find new directions, ways, or motivations to explore, maybe invent new spaces to explore. LLMs are just tools that people use. When people are no longer needed, AI serves no purpose at all.
by tartoran
3/29/2026 at 12:33:44 AM
Who said LLMs can't push boundaries either? People can use other people as tools. An LLM being a tool does not preclude it from replacing people.
Ultimately it’s a volume problem. You need at least one person to initialize the LLM. But after that, in theory, a future LLM can replace all people with the exception of the person who initializes the LLM.
by threethirtytwo
3/29/2026 at 3:44:55 AM
The initialization problem is solved; maybe the next Nobel Prize will be given to a Mac mini.
by tossandthrow
3/28/2026 at 9:52:47 PM
> I've always said this but AI will win a Fields Medal before being able to manage a McDonald's

I love this and have a corollary saying: the last job to be automated will be QA.
This wave of technology has triggered more discussion about the types of knowledge work that exist than any other, and I think we will be sharper for it.
by madrox
3/28/2026 at 10:04:25 PM
The ownership class will be sharper. They will know how to exploit capital and turn it into more capital with vastly increased efficiency. Everybody else will be hosed.by bitwize
3/29/2026 at 1:05:43 AM
I'm not sure if people will be more hosed than before. Historically, what makes people with capital able to turn things into more capital is their ability to buy someone's time and labor. Knowledge labor is becoming cheaper, easier, and more accessible. That changes the calculus for what is valuable, but not the mechanisms.
by madrox
3/29/2026 at 11:34:30 AM
> Historically, what makes people with capital able to turn things into more capital is its ability to buy someone's time and labor.

You forgot to include resources:
What makes people with capital able to turn things into more capital is their ability to buy labor and resources. If people with more capital can generate capital faster than people with less capital, then (unless they are constrained, for example, by law or conscience) the people with the most capital will eventually own effectively all scarce resources, such as land. And that's likely to be a problem for everyone else.
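The compounding claim is easy to check with toy numbers. The starting amounts and return rates below are made up for illustration; the assumption being tested is only that the larger pool earns a higher rate.

```python
# Two agents compound capital at different rates; the higher-return
# agent's share of total wealth tends toward 1. All figures invented.
a, b = 100.0, 1000.0   # small vs. large starting capital
ra, rb = 0.05, 0.08    # assumed: more capital buys better returns

for year in range(100):
    a *= 1 + ra
    b *= 1 + rb

share = b / (a + b)
print(f"after 100 years the high-return agent holds {share:.1%}")
```

With these rates the high-return agent ends up holding over 99% of the combined wealth, even though it started with only 10x as much; the gap is driven by the rate difference, not the head start.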
by tmoertel
3/29/2026 at 5:50:33 PM
Fair, though I don't see how AI is really changing the equation here.
by madrox
3/29/2026 at 7:22:56 PM
AI doesn't change the equation; it makes the equation more brutal for people who don't have capital.

If you don't have capital, the only way to get it is by trading resources or labor for it. Most poor people don't have resources, but they do have the ability to do labor that's valued. But AI is a substitute for labor. And as AI gets better, the value of many kinds of labor will go towards zero.
If it was hard for poor people to escape poverty in the past, it's going to be even harder with AI. Unless we change something about the structure of society to ensure that the benefits of AI are shared with poor people.
by tmoertel
3/29/2026 at 10:14:08 PM
OK, I'm following you. You're saying that because labor gets cheaper, it will be harder to make a living providing labor. Not disagreeing, but I wonder how much weight to give this argument. History shows a precedent of productivity revolutions changing the workforce, but not eliminating it, and lifting the quality of life of the population overall (though it does also create problems). A mixed bag, with the arc bending towards betterment for all. You could argue that this moment is unprecedented in history, but unless the human spirit changes, for better or worse, we will adapt as we always have, rich and poor alike.

If the value of many kinds of labor goes towards zero, those benefits also go to the poor. ChatGPT has a free tier. The method of escaping poverty will still be the same. Grow yourself. Provide value to your community.
by madrox
3/30/2026 at 3:27:01 PM
Entire classes of workers have been put in the poorhouse on a near-permanent basis due to technological changes, many times during the past two centuries of industrial civilization. Without systemic structural changes to support the workforce, this will happen/is already happening with AI.
by bitwize
3/29/2026 at 3:42:56 AM
There is a fundamental problem with this thinking: you are making an assumption about scale. There is the apocryphal quote, "I think there is a world market for maybe five computers."

You have to believe that LLM scaling (down) is impossible or will never happen. I assure you that this is not the case.
by zer00eyz
3/29/2026 at 12:56:23 AM
But what if we succeed in gamifying the latent knowledge in LLMs to upload it to our human brains, by some kind of speed/reaction game?
by DoctorOetker
3/29/2026 at 1:17:13 PM
> Any professional mathematician will tell you that their arsenal is ~ 10 tricks. If we can codify those tricks as latent vectors it's GG

And if we can train the systems to discover new tricks, whoa Nelly.
by pfdietz
3/28/2026 at 10:36:32 PM
Are they actually producing new math? In the most recent ACM issue there was an article about testing AI against a math benchmark that was privately built by mathematicians, and what they found is that even though AI can solve some problems, it has never truly come up with something novel and new in mathematics; it is just good at drawing connections between existing research and putting a spin on it.
by ryanar
3/29/2026 at 2:06:26 AM
I'm not accusing you in particular, but I feel like there's a lot of circular reasoning around this point. Something like: AI can't discover "new math" -> AI discovers something -> since it was discovered by AI it must not be "new math" -> AI can't discover "new math"For example, there was a recent post here about GPT-5.4 (and later some other models) solving a FrontierMath open problem: https://news.ycombinator.com/item?id=47497757
That would definitely be considered "new math" if a human did it, but since it was AI people aren't so sure.
by in-silico
3/29/2026 at 4:14:14 AM
There is a kind of rubric I use on stuff like this. If LLMs are discovering new math, why have I only read one or two articles where it's happening? Wouldn't it be happening with regularity?

The most obvious example of this thinking is: if LLMs are replacing developers, why is OpenAI still hiring?
by parineum
3/29/2026 at 4:44:38 AM
I can only say that at family meetings, I hear people talk about contracting with a shop that used to have 4 web designers, but now it's 1 guy, delivering 4x faster than before.

So devs are being replaced.
by specvsimpl
3/29/2026 at 11:59:48 AM
Why aren't they delivering 4x more work? Does the world no longer need software?
by ori_b
3/29/2026 at 7:53:02 AM
Nah, AI is not replacing people! /s

And other stories people tell themselves to sleep better at night.
by Bombthecat
3/28/2026 at 10:41:01 PM
It's finding constructions and counterexamples. That's different from finding new proof techniques, but it's still extremely useful, and still gives way to novel findings.
by hodgehog11
3/29/2026 at 9:49:59 AM
> I predict that in the future people will ditch LLMs in favor of AlphaGo style RL done on Lean syntax trees. These should be able to think on much larger timescales.

This is certainly my hope.
In my spare time, I'm slowly, very slowly, inching towards a prototype of something that could work like that.
by Yoric
3/28/2026 at 9:34:32 PM
> AI will win a fields medal before being able to manage a McDonald's

Of course, because it takes multi-modal intelligence to manage a McDonald's, i.e., it requires human intelligence.

> I predict that in the future people will ditch LLMs in favor of AlphaGo style RL

Same for coding as well. LLMs might be the interface we use with other forms of AI, though.
by slopinthebag
3/28/2026 at 9:40:32 PM
Something like building Linux is more akin to managing a McDonald's than it is to a 10-page technical proof on algebraic groups.

Programming is more multimodal than math.
Something like performance engineering might be a free lunch, though.
by vatsachak
3/28/2026 at 10:52:28 PM
> Programming is more multimodal than math

I have no idea how you come to this conclusion, when the evidence on the ground for those training models suggests it is precisely the opposite.
We are much further along the path of writing code than writing new maths, since the latter often requires some degree of representational fluency of the world we live in to be relevant. For example, proving something about braid groups can require representation by grid diagrams, and we know from ARC-AGI that LLMs don't do great with this.
Programming does not have this issue to the same extent; arguably, it involves the subset of maths that is exclusively problem solving using standard representations. The issues with programming are primarily on the difficulty with handling large volumes of text reliably.
by hodgehog11
3/29/2026 at 11:57:33 PM
Grid diagrams can be specified (hopefully) through algebraic equations.

The way that most math is currently done is that someone provides an extremely specified problem, and then one has to answer that extremely specified problem.
The way that programming is currently done is through constructing abstractions and trying to create a specification of the problem.
Of course, I'm not saying we're close to creating a silicon Grothendieck (I think that Bourbaki actually reads like a codebase), but I am saying that we're much closer to constructing algorithms that can solve specified problems than to specifying underspecified ones.
Think about the difference in specificity of:
"Prove Fermat's Last Theorem" vs. "Build a web browser"
by vatsachak
3/29/2026 at 3:10:19 AM
I guess the comment you are replying to really meant to say "software engineering," not "programming."
by zeroonetwothree
3/29/2026 at 1:09:04 AM
Nah, LLMs are solving unique problems in maths, whereas with writing code they're basically just overfitting to the vast amounts of training data. Every single piece of code AI writes is essentially just a distillation of the vast amounts of code it's seen in its training; it's not producing anything unique, and its utility quickly decays as soon as you even move towards the edge of the distribution of its training data. Even doing stuff as simple as building native desktop UIs causes it massive issues.
by slopinthebag
3/28/2026 at 9:48:00 PM
Yeah, it's hard to compare management and programming, but they're both multimodal in very different ways. There are going to be entire domains in which AI dominates, much like Stockfish, but Stockfish isn't managing franchises, and there is no reason to expect that anytime soon.

I feel like something people miss when they talk about intelligence is that humans have incredible breadth. This is really what differentiates us from artificial forms of intelligence as well as other animals. Plus we have agency, the ability to learn, the ability to think critically, from first principles, etc.
by slopinthebag
3/28/2026 at 9:50:32 PM
Exactly. It's what the execs are missing.

Also, animals thrive in underspecified environments, while AIs like very specific environments. Math is the most specified field there is lol
by vatsachak
3/29/2026 at 12:39:58 AM
Oooh yeah, that's really good framing. Humans have been building machines that outperform humans for hundreds of years at this point, but all on problems which are extremely well specified. It's not surprising LLMs are also great in these well-specified domains.

One difference between intelligence and artificial intelligence is that humans can thrive with extremely limited training data, whereas AI requires a massive amount of it. I think if anybody is worried about being replaced by AI, they should look at maximising their economic utility in areas which are not well specified.
by slopinthebag
3/30/2026 at 12:08:23 AM
Exactly. I would not want to have a pure math career or a performance engineering career in 10 years.
by vatsachak
3/29/2026 at 12:54:52 PM
So specified... that it can actually prove it can't be completely specified by any single specification.
by gottheUIblues
3/30/2026 at 12:07:17 AM
All mathematical statements we care about fall outside the purview of incompleteness.
by vatsachak
3/28/2026 at 10:12:40 PM
But LLMs have proven themselves better at programming than most professional programmers.

Don't argue. If you think Hackernews is a representative sample of the field, then you haven't been in the field long enough.
What LLMs have actually done is put the dream of software engineering within reach. Creativity is inimical to software engineering; the goal has long been to provide a universal set of reusable components which can then be adapted and integrated into any system. The hard part was always providing libraries of such components and then integrating them. LLMs have largely solved these problems. Their training data contains vast amounts of solved programming problems, and they are able to adapt these in vector space to whatever the situation calls for.
We are already there. Software engineering as it was long envisioned is now possible. And if you're not doing it with LLMs, you're going to be left behind. Multimodal human-level thinking need only be undertaken at the highest levels: deciding what to build and maybe choosing the components to build it. LLMs will take care of the rest.
by bitwize
3/28/2026 at 10:32:05 PM
A bit optimistic, I'd say. It's put some software engineering within reach of some people who couldn't do it prior. Where 'some' might be a lot, but still far from all.

I was thinking the other day of how things would go if some of my less tech-savvy clients tried to vibe code the things I implement for them, and frankly I could only imagine hilarity ensuing. They wouldn't be able to steer it correctly at all and would inevitably get stuck.
Someone needs to experiment with that actually: putting the full set of agentic coding tools in the hands of grandma and recording the outcome.
by abcde666777
3/28/2026 at 11:45:06 PM
It's still going to take a knowledgeable person to steer an LLM. The point is that code written entirely by humans is finished as a concept in professional work: if you're writing it yourself, you're not working efficiently or employing industry best practice.
by bitwize
3/29/2026 at 2:53:55 AM
I think it's dramatic to say it's the end of hand-written code. That's like saying it's the end of bespoke suits. There are scenarios where carefully hand-written and reviewed code is still going to have merit, for example the software for safety-critical systems such as space shuttles and stations, or core logic within self-driving vehicles.

Basically, when every single line needs to be reviewed extremely closely, the time taken to write the code is not a bottleneck at all, and if using AI you would actually gain a bottleneck in the time spent removing the excess and superfluous code it produces.
And my intuition is that the line between those two kinds of programming (let's call them careful and careless programming, to coin an amusing terminology) may not shrink as far back as some think, and I think it definitely won't shrink to zero.
by abcde666777
3/29/2026 at 5:03:39 AM
You are aware of software verification? The AI can prove (mathematically) that its code implements the spec.
by specvsimpl
3/29/2026 at 9:39:16 AM
That just takes you back to the debate about the code being the spec.
by abcde666777
3/29/2026 at 12:29:01 PM
The code lets you shoot yourself in the foot in a lot more ways than a spec does, though. Few people would make specs that include buffer overflows or SQL injection.
by 986aignan
3/29/2026 at 3:32:59 PM
"And don't have any security vulnerabilities" isn't a spec, though. As soon as you get specific, you're right back in it.
by magicalist
3/29/2026 at 12:39:44 AM
That is akin to saying that if you aren't using an IDE, you are not working efficiently or employing industry best practice, which is insane when you consider that people using Vi often run rings around people using IDEs.

AI usage is a useless metric; look at results. Thus far, results and AI usage are uncorrelated.
by slopinthebag
3/29/2026 at 6:20:21 PM
I keep hearing anecdata that suggest significant to huge productivity increases; "a task that would have taken me weeks now takes hours" is common. There is currently not a whole lot of research that supports that, however:

1) there hasn't been a whole lot of research into AI productivity, period;
2) many of the studies that have been done (the 2025 METR study, for example) are both methodologically flawed and old, not taking into account the latest frontier models;
3) corporate transitions to AI-first/AI-native organizations are nowhere near complete, making companywide productivity gains difficult to assess.
However, it isn't hard to find stories on Hackernews from devs about how much time generative AI has saved them in their work. If the time savings are real and you refuse to take advantage of them, you are stealing from your employer and need to get with the program.
As for IDEs: if you're working in C# and not using Visual Studio, or Java and not using JetBrains, then no, you are not working as efficiently as you could be.
by bitwize
3/29/2026 at 12:38:07 AM
Actually, I will argue. Complex systems are akin to a graph, the attributes of the system being the nodes and the relationships between those attributes being the edges. The type of mechanistic thinking you're espousing is akin to a directed acyclic graph or a tree, and converting an undirected cyclic graph into a tree requires you to disregard edges, and probably nodes as well. This is called reductionism, and scientific reductionism is a cancer for understanding complex phenomena like sociology or economics and, I posit, software as well.

People and corporations have been trying for at least the last five decades to reduce software development to a mechanistic process, in which a system is understandable solely via its components and subcomponents, which can then be understood and assembled by unskilled labourers. This has failed every time, because by reducing a graph to a DAG or tree, you literally lose information. It's what makes software reuse so difficult, because no one component exists in isolation within a system.
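The information-loss point can be made concrete: take a small cyclic dependency graph (module names invented for illustration) and extract a spanning tree from it. Every edge the tree drops is a relationship the tree view simply cannot represent.

```python
from collections import deque

# A cyclic dependency graph. Edges dropped by the spanning tree are
# exactly the relationships a reductionist decomposition discards.
graph = {
    "ui":    {"auth", "cache"},
    "auth":  {"db", "ui"},     # auth and ui depend on each other
    "cache": {"db"},
    "db":    {"cache"},        # db and cache form a cycle too
}

def spanning_tree(graph, root):
    # BFS spanning tree: each node is reached by exactly one edge.
    seen, tree, q = {root}, [], deque([root])
    while q:
        u = q.popleft()
        for v in sorted(graph.get(u, ())):
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                q.append(v)
    return tree

edges = {(u, v) for u, nbrs in graph.items() for v in nbrs}
tree = spanning_tree(graph, "ui")
print("tree edges:", tree)
print("edges lost in the tree view:", len(edges) - len(tree))
```

Here 6 directed edges collapse to 3 tree edges: the back-references and cycles vanish, and no tree over the same nodes can recover them.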
The promise of AI is not that it can build atomic components which can be assembled like my toaster, but rather that it can build complex systems not by ignoring the edges, but by managing them. It has not shown this ability yet at scale, and it's not conclusive that current architectures ever will. Saying that LLMs are better than most professional programmers is also trivially false; you do yourself no favours making such outlandish claims.
To tie back into your point about creativity, it's that creativity which allows humans to manage the complexity of systems: their various feedback loops, interactions, and emergent behaviour. It's also what makes this profession broadly worthwhile to its practitioners. Your goal of reducing it to a mechanistic process is no different from any corporation wishing to replace software engineers with unskilled assembly-line workers, and it also completely misses the point of why software is difficult to build and why we haven't done that already. Because it's not possible, fundamentally. Of course it's possible AI replaces software developers, but it won't be because of a mechanistic process; rather, it will be because it becomes better at understanding how to navigate these complex phenomena.
This might be beside the point, but I also wish AI boosters such as yourself would disclose any conflicts of interest when it comes to discussing AI. Not in a statement, but legally bound; otherwise it's worthless. Because you are one of the biggest AI boosters on this platform, and it's hard to imagine the motivation for spending so much time hardlining a specific narrative just for the love of the game, so to speak.
by slopinthebag
3/30/2026 at 6:19:13 PM
> Saying that LLM's are better than most professional programmers is also trivially false, you do yourself no favours making such outlandish claims.

You grossly underestimate how awful one can be and still call themselves a "professional" in the field. Software engineering has effectively no standard certification of competence, which is part of why it's not actually an engineering field at all. So I stand by my statement that LLMs are better at writing code than most people working professionally as programmers. Again, Hackernews is not a representative sample, let alone the kind of programmer we admire and view as authoritative here on Hackernews. Most programmers require considerable oversight, as well as detailed standards to follow, in order to produce work without gumming up a code base. If you want to know why so much enterprise stuff is so bloated with heavy frameworks and a twisty maze of best practices like OO, SOLID, GoF patterns, etc., it's for this reason. The LLMs have access to a vast (if compressed/summarized) repository of knowledge about programming problems and commonly employed solutions in a variety of languages, and the ability to draw upon it instantly. Most humans, including myself, do not.
Anyway, as Tim Bryce observed in 2005, based on his father Milt's work in the 70s, most of the creativity and human input in software development happens in the business/systems analysis phase, not programming, at least if you're employing a structured, rigorous, proven methodology. Milt Bryce turned systems design from an art into a proven, repeatable science, and with that came a view of programming that's largely mechanistic. "There are very few true artists in programming; most programmers are just house painters."
> This might be besides the point, but I also wish AI boosters such as yourself would disclose any conflict of interests when it comes to discussing AI.
I'm not boosting squat. I'm telling it like it is, and talking about decisions in our field that have already been made. It is no longer up for debate that AI use is an integral part of software engineering now, and writing code "the old way", in an editor with maybe autocomplete, refactoring tools, etc., will soon go the way of punchcards. The business class that actually runs things has already decided this. If you're getting suspicious and demanding conflict-of-interest disclosures from someone who spells this out, your understanding is out of date.
by bitwize
3/28/2026 at 11:36:16 PM
As of now, no models have solved a Millennium Prize Problem [1].
by kelseyfrog
3/29/2026 at 5:31:39 AM
Most Fields Medal winners haven't either, except one.
by raincole
3/29/2026 at 5:15:20 AM
This is the real litmus test, isn't it? There will be a deafening silence from critics when AI decides P vs. NP.
by utopcell
3/28/2026 at 11:34:21 PM
It will still be heavily reliant on expert human input and interactions. Knuth is an expert and knows how to guide.
by 3abiton
3/28/2026 at 9:21:25 PM
I think this is mostly about existing legislation, not about technology.

In any other context than when your paycheck depends on it, you would probably not be following orders from a random manager. If your paycheck depended on following the instructions of an AI robot, the world might start to look pretty scary real soon.
by smokel
3/29/2026 at 3:04:01 AM
> If your paycheck depended on following the instructions of an AI robot, the world might start to look pretty scary real soon.

That's already the case, minus AI, for gig workers. Their only agency is to accept or decline a ride/delivery; the rest is following instructions.
by jfim
3/28/2026 at 10:47:22 PM
AI actually has to follow all the rules, even the bad rules, like when an autonomous car drives super carefully.

Imagine if McDonald's management enforced the dog-related rules. No more filthy muppets! If a dog harassed customers, the AI would call the cops and sue for a restraining order! If a dog defecated in the middle of the restaurant, everything would get disinfected, not just smeared with towels!
Nutters would crucify AI management!
by throw3747488
3/28/2026 at 9:26:50 PM
There's a lot to being a manager:

- Coherent customer interaction
- Common sense judgements
- Scheduling
- Quality control

All of which are baked into humans, but not so much into LLMs.
Even if it were legal to have an LLM as a GM, I think it would fare poorly.
by vatsachak
3/28/2026 at 9:34:54 PM
I've never seen you say that.
by NamlchakKhandro
3/28/2026 at 9:38:06 PM
You will have to take my word that I started saying this in Dec 2024 lol
by vatsachak