alt.hn

4/13/2026 at 11:26:04 PM

The AI revolution in math has arrived

https://www.quantamagazine.org/the-ai-revolution-in-math-has-arrived-20260413/

by sonabinu

4/14/2026 at 4:24:37 AM

Last week I got together with my math alumni friend. We cracked some beers, chatted with voice-mode ChatGPT, toyed around with the Collatz Conjecture, and sent some prompts to a coding agent to build visualizations and simulations. It was a lot of fun directing these agents while we bounced ideas off each other and let the models explore them.

With the right problem and the right agentic loop, it's clear to me that improvements will speed up.

by bgirard

4/14/2026 at 4:45:24 AM

Just an FYI: I think voice mode uses weaker models relative to the SOTA.

by drakenot

4/14/2026 at 4:04:52 PM

The bigger problem for me is that the realtime voice modes lack tool use, so they can't look anything up or do anything. Model strength definitely also matters, but even dumb models can be helpful when they can look things up and try things out. And smart models that don't do those things kinda suck.

by pxc

4/14/2026 at 12:13:34 PM

You can get around this with a local STT model feeding text input, but the UX is probably clunkier.

by SOLAR_FIELDS

4/14/2026 at 5:41:24 AM

Definitely, seems like GPT-3

by scrollop

4/14/2026 at 3:12:34 AM

> As they did so, they also learned how to improve the prompts they gave AlphaEvolve. One key takeaway: The model seemed to benefit from encouragement. It worked better “when we were prompting with some positive reinforcement to the LLM,” Gómez-Serrano said. “Like saying ‘You can do this’ — this seemed to help. This is interesting. We don’t know why.”

Four of the most logical people in the world are acknowledging this. It is mind-blowing, and we don't know why.

by dogscatstrees

4/14/2026 at 3:38:25 AM

I know why.

Several people had problems with Sonnet burning through all their credits grinding on a problem it can't solve. Opus fixes this — it has a confidence threshold below which it exits the task instead of grinding.

"I spent ~$100 last week testing both against multiplication. Sonnet at 37-digit × 37-digit (~10³⁷) never quits — 15+ minutes, 211KB of output, still actively decomposing numbers when I stopped it. Opus will genuinely attempt up to ~50 digits (112K tokens on a real try), starts doubting around 55 digits, and by 80-digit × 80-digit surrenders in 330 tokens / 9 seconds with an empty answer." -- Opus, helping me with the data

The "I don't think this is worth attempting" heuristic is the difference. Sonnet doesn't have it, or has it set much higher. In order to get Opus and some other models to work on harder problems that it assumes it is not worth attempting, it requires an increase of confidence level.

I'll finish writing this up this week; right now I'm making flashy data-visualization animations to make the point.

by dataviz1000

4/14/2026 at 6:58:56 AM

So we have a bunch of imposter syndrome techies who would 5x with just a hint of encouragement, and now we’re trying to 2x them with LLMs, but in order to get there those same techies will have to gas up and inspire the LLM with the leadership and vision they themselves are wanting for?

The Universe seems against free lunches… if AGI is possible, finding a manager good enough to get the AGI to update its timesheet will not be (in practice).

by bonesss

4/14/2026 at 3:41:51 AM

It makes sense to me.

Originally LLMs would get stuck in infinite loops generating tokens forever. This is bad, so we trained them to strongly prefer to stop once they reached the end of their answer.

However, training models to stop also gave them "laziness", because they might prefer a shorter answer over a meandering answer that actually answered the user's question.

Mathematics is unusual because it has an external source of truth (the proof assistant), and also because it requires long meandering thinking that explores many dead ends. This is in tension with what models have been trained to do. So giving them some encouragement keeps them in the right state to actually attempt to solve the problem.

by zarzavat

4/14/2026 at 6:12:32 AM

It was just yesterday that this top post [] was decrying the "peril of laziness lost", that LLMs inherently lack the virtue of laziness.

So which one are they?

[] https://news.ycombinator.com/item?id=47743628

by dogscatstrees

4/14/2026 at 6:43:48 AM

I think laziness is not a minimum of effort. Laziness can actually be more effort towards a simpler or more practical solution, because those solutions are more pleasant in some way, and therefore more attractive to pursue.

by LoganDark

4/14/2026 at 7:15:50 AM

Reminds me of Larry Wall's three virtues of a programmer: laziness, impatience and hubris.

by zarzavat

4/14/2026 at 3:24:42 AM

Do we know why it works for humans?

Models are trained on human outputs. It's not super surprising to me that inputs following encouraging patterns produce better outputs; much of the training material reflects that.

by brookst

4/14/2026 at 3:33:27 AM

> Do we know why it works for humans?

Try to figure it out. You can do it.

by latentsea

4/14/2026 at 3:29:07 AM

If I had to wager a lazy, armchair guess, I'd say it forces the model to think harder/longer.

The answer is probably more straightforward than we think, e.g. "the user thinks I can do this, so I'd better make sure I didn't miss anything."

by gxs

4/14/2026 at 3:34:59 AM

This seems pretty obvious, no?

It's pattern matching on training material. There is almost certainly an overlap between positivity and success in the training material, so positive prompts weight the pattern matching toward positivity and therefore toward more successful material.

by CivBase

4/14/2026 at 3:59:31 AM

The training or system prompts have shoved the probabilities toward a space that tends to select “halt” sooner. You need to drag the probability weights around until they are less likely to reach “halt” so soon.

Nice language often sorta does this for whatever model(s) they looked at, and is also something people are likely to try. Probably lots and lots of nonsense token combos would work even better, but who’s gonna try sticking “gerontocratic green giant giraffes” on the end of their prompts to see if it helps?

Positive or negative language likely also prevents pulling the probabilities away from the correct topic, being so generic a thing. The above suggestion might only be ultra-effective if the topic is catalytic converters, for some reason, and push the thing into generating tokens about giraffes otherwise. How would you ever discover the dozens or thousands of more-effective but only-sometimes nonsense token combos? You’d need automation and a lot of brute force, or some better way to analyze the LLM’s database.
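To make the brute-force point concrete, here's a toy sketch of what that kind of suffix search could look like. The scorer is a stub standing in for a real evaluation (e.g. solve rate on a problem suite), and every name here is hypothetical:

```python
import random

# Hypothetical candidate words for prompt suffixes.
CANDIDATE_WORDS = ["gerontocratic", "green", "giant", "giraffes",
                   "you", "can", "do", "this", "carefully", "please"]

def score(prompt: str) -> float:
    # Stub standing in for a real evaluation (e.g. solve rate on a
    # benchmark); here it just rewards longer prompts so the search
    # loop below is runnable end to end.
    return float(len(prompt))

def best_suffix(base: str, trials: int = 200, seed: int = 0) -> str:
    """Brute-force search over random 3-word suffixes."""
    rng = random.Random(seed)
    best, best_score = "", score(base)
    for _ in range(trials):
        suffix = " ".join(rng.sample(CANDIDATE_WORDS, 3))
        s = score(f"{base} {suffix}")
        if s > best_score:
            best, best_score = suffix, s
    return best

print(best_suffix("Factor this."))
```

With a real benchmark as the scorer, each trial costs an LLM evaluation run, which is exactly why nobody does this by hand.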

by lamasery

4/14/2026 at 4:11:07 AM

Mathematics seems like the ideal candidate for AIs to achieve absurd results. It's a purely abstract grammar with true auto-verifiability. Even SWE requires interacting with real physical things. In math there's no external feedback required; you're bounded solely by the rate and quality of token generation.

by sm0ss117

4/14/2026 at 4:33:07 AM

This misses the mark on at least two counts: 1. Proofs without human understanding have less value for mathematicians. 2. At least for now, interestingness depends on human judgment; it is subjective and not as verifiable.

by drivebyhooting

4/14/2026 at 4:48:01 AM

Every new mathematician that comes along doesn’t know everything that has come before him. He needs to go learn all the math that his predecessors did. I don’t see how an LLM coming up with these proofs changes that.

by dyauspitr

4/14/2026 at 5:17:47 AM

Because the problem space is basically infinite. If a person is working on a problem, it's probably interesting to at least one person. Randomly walking through the problem space might turn up interesting results, but I don't know how well that signal will fare against other humans.

by streb-lo

4/14/2026 at 5:38:27 AM

"Grammar" makes it sound like you're talking about LLMs specifically. Well, isn't Sudoku just math? LLMs sucked at Sudoku last I checked. When told not to code a solver, the model's very first deduction was wrong.

by meroes

4/14/2026 at 3:30:05 PM

Generally when people talk about using LLMs to do mathematics research they’re not talking about the LLM alone, but the LLM + a harness for it to write and execute theorem provers such as Lean or Coq to validate their results.
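For anyone who hasn't seen it, the validation step is very concrete: the Lean kernel either accepts a proof term or rejects the file, so a hallucinated proof simply fails to check. A tiny illustrative example (Lean 4):

```lean
-- Accepted: the proof term type-checks by computation.
example : 2 + 2 = 4 := rfl

-- Accepted: `omega` discharges linear arithmetic over Nat.
theorem double_eq_add (n : Nat) : 2 * n = n + n := by omega

-- Rejected: `example : 2 + 2 = 5 := rfl` fails to compile,
-- and that error is exactly the signal the harness feeds back to the LLM.
```

The harness loop is then just: generate a candidate proof, run the checker, and return the error (or success) to the model.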

by evenhash

4/14/2026 at 2:19:18 AM

I wonder when AI will be able to discern the passage of time

by claysmithr

4/14/2026 at 3:29:55 AM

Can't you just give it the time in each prompt? Would that work?

I've seen this mentioned a few times, though, so I think maybe it's more complicated than that?
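Injecting the time is at least easy to try. A minimal sketch in Python (the wrapper is my own invention, not any actual API):

```python
from datetime import datetime, timezone

def with_timestamp(user_message: str) -> str:
    """Prefix each prompt with the current UTC time so the model can
    at least observe wall-clock deltas between turns."""
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"[current time: {now}]\n{user_message}"

print(with_timestamp("How long since my last message?"))
```

Whether the model actually reasons about the deltas between those timestamps is the open question the rest of this thread is circling.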

by Buttons840

4/14/2026 at 2:38:07 AM

It already handles time within prompt blocks. It knows time is linear and what just happened, what happened before that, and what happened before that.

by 1970-01-01

4/14/2026 at 2:47:32 AM

When I tried to use it as an AI CEO and life coach, it was never able to discern time passing, what I'd already done, or what still needed to be done. It just said the same stuff over and over, stuff I'd already done. It's also kind of stuck in the era it was trained in. If it felt time passing like a human, maybe it would be conscious?

Nevertheless, not having a sense of time makes it really bad at planning anything. I used Gemini Pro.

by claysmithr

4/14/2026 at 2:33:12 AM

Altman has estimated one year until ChatGPT is capable of measuring time passed.

https://tech.yahoo.com/ai/chatgpt/articles/chatgpt-fails-mis...

by maplethorpe

4/14/2026 at 2:50:13 AM

Sounds like Musk setting deadlines for Mars landings.

by ambicapter

4/14/2026 at 5:41:32 AM

It's so hard to predict you know, these planets keep moving...

by keyle

4/14/2026 at 2:57:57 AM

Can’t tell if you are being sarcastic but Altman’s whole job is to make bullshit near future predictions about rapid development of AI in the public.

by VladVladikoff

4/14/2026 at 3:04:20 AM

Thank you for stating the obvious; for some reason we need to keep repeating this. ^^;

by random__duck

4/14/2026 at 4:52:51 AM

There's no need to "estimate" it. "Time" is not something built into training and sampling from a generative distribution. He might as well have told you your Naive Bayes email filters will measure elapsed time.

by viccis

4/14/2026 at 7:00:26 AM

All these overly optimistic articles about AI solving maths problems are very annoying. Can we agree that maths is not about solving problems, but about understanding them by developing a language and the conditions for new insights? It is misleading because GPTs do provide easy access to new information, but they do not deepen understanding.

I think AI-assisted research will likely have a very negative net impact on mathematics in the long run by lowering the average level of understanding within the community.

Also, research directions are influenced by what people can solve, and this will slowly shift research toward purely algebraic/symbolic manipulations that mathematicians no longer fully keep track of.

by doubledamio

4/14/2026 at 3:26:42 AM

Interesting development. It feels like AI is getting much better at symbolic reasoning, not just pattern recognition.

by norejisace

4/14/2026 at 6:30:42 AM

Boring mathematical reality here. This is nice and all, but as a (part-time) corporate mathematician, I'd like an AI that organises conference trips, picks the best accommodation and food, and gaslights the execs into approving it. Then fixes the perpetually broken coffee machine. Everything else for me starts on paper and is mostly undergrad-level problems, which I need to do by hand to keep my brain going for when I might actually need it one day. And with the geopolitical instability out there at the moment, I'm not that willing to put all my eggs into that basket.

by 440bx

4/14/2026 at 5:20:01 AM

What is the telos for AI chewing around the edges of pure math problems? Does AI care about math?

by viccis

4/14/2026 at 2:56:51 AM

There are several high value prizes for mathematical research. Let me know when an "AI" has earned one of them. Otherwise:

> When Ryu asked ChatGPT, “it kept giving me incorrect proofs,” [...] he would check its answers, keep the correct parts, and feed them back into the model

So you had a conversational calculator being operated by an actual domain expert.

> With ChatGPT, I felt like I was covering a lot of ground very rapidly

There's no way to convert that feeling into a measurement of any actual value, and we happen to know that domain experts are surprisingly easy to fool outside of their own domains.

by themafia

4/14/2026 at 3:31:05 AM

Wow that was your takeaway?

> “2025 was the year when AI really started being useful for many different tasks,” said Terence Tao

I think I'll go out on a limb and agree with Terence Tao; I think the dude is well known in the math community, or something.

by gxs

4/14/2026 at 3:34:32 AM

If anything, his simping for AI models makes me more suspicious of him than I ever was, because my own eyes show me their limits.

by noobermin

4/14/2026 at 4:11:15 AM

Any chance your eyes are wrong? Or is it only people who disagree with you who are?

by jryle70

4/14/2026 at 4:19:08 AM

> go out on a limb and agree with Terence Tao

Is AI his specialty?

> I think the dude is well known in the math community, or something

I believe this is called "appeal to authority." Which is why, instead of disagreeing with him, I suggested a more cogent endpoint that could be used to establish the facts the article's title suggests.

by themafia

4/14/2026 at 3:48:36 AM

I think he means useful for mathematicians getting paid to shill for AI models

by p1dda

4/14/2026 at 6:24:33 AM

https://www.quantamagazine.org/about/ says "launched by the Simons Foundation in 2012"

and https://www.simonsfoundation.org/about/ has "Since its founding in 1994 by Jim and Marilyn Simons"

https://en.wikipedia.org/wiki/Jim_Simons explains how Jim Simons got rich.

The book 'The Man Who Solved the Market' - https://www.gregoryzuckerman.com/the-books/the-man-who-solve... is a nice read.

HN discussion on a review of the book - https://news.ycombinator.com/item?id=29392041

by homarp

4/14/2026 at 5:54:41 AM

More neo-luddite nonsense.

by Wissenschafter

4/14/2026 at 4:27:54 AM

We can define a Dyson Sphere in math.

We cannot build one.

AI outputting axiomatically valid syntax isn't going to be all that useful. It's possible to generate all axiomatically correct math with a for loop, until the machine OOMs.

Physics is not math and math is not physics.

by yabutlivnWoods

4/14/2026 at 4:44:10 AM

You just failed the Turing test.

by djsjajah

4/14/2026 at 5:38:20 AM

Maybe he passed the Turing test with 88.2%, which is 1.8% higher than the competition.

by keyle

4/14/2026 at 5:20:54 AM

The Turing test just failed you. I'll go one better: physics isn't reality, it's a model of reality built with math.

by goatlover

4/14/2026 at 6:09:05 PM

And I'll go one better: you haven't said anything here at all, you've just left a representation of what you understand yourself to be saying.

by dugidugout