alt.hn

2/22/2026 at 7:54:14 PM

The Tears of Donald Knuth (2015)

https://cacm.acm.org/opinion/the-tears-of-donald-knuth/

by todsacerdoti

2/22/2026 at 11:43:44 PM

Imagine Knuth's heartbreak when he sees how LLMs have perverted the practical application of the art of computer programming. ("The LLM understands so I don't have to.") It's sad it happened during his lifetime. Has he commented on the topic? Anyone have a link?

by atomic128

2/23/2026 at 12:08:36 AM

https://cs.stanford.edu/~knuth/chatGPT20.txt is a conversation between Knuth and Wolfram about GPT-4.

> I find it fascinating that novelists galore have written for decades about scenarios that might occur after a "singularity" in which superintelligent machines exist. But as far as I know, not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say), and in which millions of real people would be able to interact with them freely at essentially no cost.

> I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same.

by simonw

2/23/2026 at 1:16:50 AM

> not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say), and in which millions of real people would be able to interact with them freely at essentially no cost.

Aren't Asimov's Multivac stories basically this? Humans build a powerful computer with a conversational interface that helps them do all kinds of science, and before they know it they become Multivac's pets.

by ghssds

2/23/2026 at 12:40:00 AM

I don't know why, but it makes me smile that he did this experiment by having a grad student type the questions into ChatGPT and copy the results.

by sethev

2/23/2026 at 12:58:31 AM

That's related. Thank you for posting it.

But what does Knuth think of "vibe coding" or "agentic coding"?

What does he think of "The Dawn of the Dark Ages of Computer Programming"?

by atomic128

2/23/2026 at 1:07:56 AM

I don't think Knuth needs to stoop that low. He actually knows what he's doing.

by jacquesm

2/23/2026 at 1:50:55 AM

That link is great!

Knuth has a beautiful way of writing systematically (as can be expected of the inventor of "Literate Programming").

by rramadass

2/22/2026 at 11:50:44 PM

While I can't speak for Knuth, I have been reflecting on the fact that developing with a modern LLM seems to be an evolution of the concept of Literate Programming that Knuth has long been a proponent of.

What is the rationale behind the assertion that Knuth would be so fundamentally opposed to the use of LLMs in development?

by johngunderman

2/22/2026 at 11:59:46 PM

I don't see the connection.

In literate programming you meticulously write code (as usual) but present it to a human reader as an essay: as a web of code chunks connected together in a well-defined manner with plenty of informal comments describing your thinking process and the "story" of the program. You write your program but also structure it for other humans to read and to understand.

LLM software development tends to abandon human understanding. It tends to abandon tight abstractions that manage complexity.

by atomic128

2/23/2026 at 2:22:26 AM

Have you ever tried literate programming? In literate programming you do not write the code and then present it to a human reader. You describe your goal, assess various ideas, and justify the chosen plan (oftentimes changing your mind in the process), and only afterwards, once the plan is clear, do you start to write any code.

Hence the similarity with using an LLM. Working with LLMs is quicker, though, not only because you do not write the code but also because you don't care much about the style of the prose. On the other hand, the code has to be reviewed, debugged, and polished. So, YMMV.

by rixed

2/23/2026 at 4:33:55 AM

> In literate programming you do not write the code and then present it to a human reader. You describe your goal, assess various ideas, and justify the chosen plan (oftentimes changing your mind in the process), and only afterwards, once the plan is clear, do you start to write any code.

This is not literate programming. The main idea behind literate programming is to explain to a human what you want a computer to do. Code and literate explanations are developed side by side. You certainly don't change your mind in the process (lol).

> Working with LLMs is quicker though

Yes, because you invest time neither in understanding the problem nor in conveying your understanding to other humans, which is the whole point of literate programming.

But don't take my word, just read the original.[1]

[1] https://www.cs.tufts.edu/~nr/cs257/archive/literate-programm...

by phba

2/23/2026 at 1:08:34 AM

It couldn't be further from literate programming. If anything, we should call it illiterate programming.

by jacquesm

2/23/2026 at 1:59:20 AM

The irony is that if we had been writing literate programs instead of "normal" programs from 1984 to 2026, then LLMs might actually have been much better at programming in 2026 than they turned out to be. Literate programs entwine the program code with prose explanations of that code, while also cross-referencing all dependent code of each chunk. In some sense they make fancy IDEs and editors and LSPs unnecessary, because it is all there in the PDF. They also separate the code from the presentation of the code, meaning that you don't really have to worry about the small layout details of your code. They even have aspects of version control (Knuth advocates keeping old code inside the literate program, explaining why you thought it would work and why it does not, and what you replaced it with).

LLMs do not bring us closer to literate programming any more than version-control systems or IDEs or code comments do. All of these support technologies exist because the software industry simply couldn't muster the discipline to learn how to program in the literate style. And it is hard to want to follow this discipline when 95% of the code that you write is going to be thrown away, or is otherwise built on a shaky foundation.

Another "problem" with literate programming is that it does not scale with the number of contributors. It really is designed for a lone programmer who is setting out to solve an interesting yet difficult problem, and who then needs to explain that solution to colleagues rather than sell it in the marketplace.

And even if literate programming _did_ scale with the number of contributors, very few contributors are good at both programming _and_ writing (even the plain academic writing of computer scientists). In fact, Bentley told Knuth (in the 80s) that "2% of people are good at programming, and 2% of people are good at writing -- literate programming requires a person to be good at both" (so only about 0.04% of the adult population would be capable of doing it).

By the way, Knuth said in a book (Coders at Work, I believe): "If I can program it, then I can understand it." The literate paradigm is about understanding. If you do not program it, and if _you_ do not explain the _choices_ that _you_ made during the programming, then you are not understanding it -- you are just making a computer do _something_ that may or may not be the thing you want (which is fine; most people use computers this way, but that makes you a user and not a programmer). When LLMs write large amounts of code for you, you are not programming. And when LLMs explain code for you, you are not programming. You are struggling not to drown in a constantly churning code-base that is being modified a dozen times per day by a bunch of people, some of whom you do not know, many of whom are checked out and just trying to get through their day, and all of whom know that it does not matter because they will hop jobs in one or two or three years, and all their bad decisions become someone else's problem.

Just because LLMs can translate one string of tokens into a different string of tokens while you are programming does not make them "literate". When I read a Knuthian literate program, I see not a description of what the code does, but a description of what it is supposed to do (and why that is interesting), and how a person reasoned his or her way to a solution, blind alleys and all. The writer of the literate program anticipates my next question before I even have it, anticipates what might be confusing, and phrases it in a few ways.

As the creator of the Axiom math software said: the goal of literate programming is to be able to hire an engineer, give him a 500-page book that contains the entire literate program, send him on a 2-week vacation to Hawaii, and have him come back with the whole program in his head. If anything, LLMs are making this _less_ of a possibility.

In an industry dominated by deadline-obsessed pseudo-programmers creating for a demo-obsessed audience of pseudo-customers, we cannot possibly create software in a high-quality literate style (no, not even with LLMs, even if they got 10x better _and_ 10x cheaper).

Lamport (of Paxos, the Byzantine Generals, the bakery algorithm, and TLA+) made LaTeX and TLA+ with the intent that they be used together, in the same way that CWEB literate programs are. All of these tools (CWEB, TeX, LaTeX, TLA+) are meant to encourage clear and precise thinking at the level of _code_ and the level of _intent_. This is what makes literate programs (and TLA+ specs) conceptually crisp and easily communicable. Just look at the TLA+ spec for OpenRTOS. Their real-time OS is a fraction of the size it would have been if they had implemented it in the industry-standard way, and it has the nice property of being correct.

Literate programming, by design, is for creating something that _lasts_, something that has value when executed on the machine and in the mind. LLMs (which are slowly being co-opted by the Agile consulting crowd) are currently for the exact opposite: creating something that is going to be worthless after the demo.

by nz

2/23/2026 at 12:40:48 PM

I'm only discovering literate programming today, but you seem very familiar with it, so I might as well ask: what is the fundamental difference from abundant comments? Is it the linearity of it? I mean documentation-type comments at the top of routines or at "checkpoints".

I'm particularly intrigued by your mention of keeping old code around. This is something I haven't found a solution for using git yet; I don't want to pollute the monorepo with "routine_old()"s, but at the same time I'd like to keep track of why things changed (the reason could be a benchmark).

by MITSardine

2/23/2026 at 2:01:08 PM

An article and a previous discussion: "Literate programming is much more than just commenting code" - https://news.ycombinator.com/item?id=30760835

Wikipedia has a very nice explanation - https://en.wikipedia.org/wiki/Literate_programming

A good way to think about it is {Program} = {set of functional graphs} X {set of dataflow pipelines}. Think of the cartesian product of DAGs/fan-in/fan-out/DFDs/etc. Usually we write the code and explain the local pieces using comments, and the intention of the system-as-a-whole is lost. LP reverses that by saying: don't think code; think of an essay explaining all the interactions in the system-as-a-whole, with code embedded as necessary to implement the intention. That is why it uses terms like "tangle", "weave", etc., to drive home the point that the program is a "meshed network".
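To make the tangle/weave distinction concrete, here is a minimal sketch (not any real LP tool; the `<<chunk>>` syntax loosely imitates noweb, and `LITERATE_SOURCE` is an invented toy document) of what "tangle" does: it ignores the prose and recursively expands named code chunks into a plain program, while "weave" would instead typeset the prose with the chunks as illustrations.

```python
import re

# A toy literate source: code chunks are defined with <<name>>= and
# referenced with <<name>>; the prose around them is what "weave"
# would typeset, and what "tangle" discards.
LITERATE_SOURCE = """\
We want to greet the user. The overall shape of the program:

<<*>>=
<<define the greeting>>
print(greeting)

The greeting is kept in its own chunk so the explanation
can focus on it separately:

<<define the greeting>>=
greeting = "Hello, literate world!"
"""

def parse_chunks(source):
    """Collect chunk definitions: name -> list of code lines."""
    chunks, current = {}, None
    for line in source.splitlines():
        m = re.match(r"<<(.+)>>=$", line)
        if m:
            current = m.group(1)
            chunks.setdefault(current, [])
        elif line.strip() == "":
            current = None          # a blank line ends the chunk
        elif current is not None:
            chunks[current].append(line)
    return chunks

def tangle(chunks, name="*"):
    """Recursively expand chunk references into plain code."""
    out = []
    for line in chunks[name]:
        m = re.match(r"<<(.+)>>$", line.strip())
        if m:
            out.append(tangle(chunks, m.group(1)))
        else:
            out.append(line)
    return "\n".join(out)

print(tangle(parse_chunks(LITERATE_SOURCE)))
```

The tangled output is an ordinary two-line Python program; the essay order of the chunks, not the execution order, is what the human reads.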

To study actual examples of LP see the book C Interfaces and Implementations: Techniques for Creating Reusable Software by David Hanson - https://drh.github.io/cii/

by rramadass

2/23/2026 at 2:18:51 AM

> LLMs do not bring us closer to literate programming...

Without saying that I agree with the person you're responding to, and without claiming to really know what he was saying, I'll say what I think he was suggesting: That a human could do the literate part of literate programming, and the LLM could do the computing part. When (inevitably) the LLM doesn't write bug-free code snippets, the human revises the literate part, followed by the LLM revising the code part.

And of course there would be a version control part of this, too, wherein both the changes to the literate part and the changes to the code parts are there side-by-side, as documentation of how the program evolved.

by mcswell

2/23/2026 at 4:36:48 AM

This is meta, so sorry for not actually responding, but thank you for a very well-written comment. In this time of slop and rage it's really refreshing to see someone take the time to write (long-form, for a comment) about something they are clearly knowledgeable and passionate about.

by WD-42

2/23/2026 at 1:25:30 PM

A very well-articulated comment on LP!

Thanks for writing it up.

by rramadass

2/23/2026 at 1:16:46 AM

You might enjoy this video:

https://www.youtube.com/watch?v=Y65FRxE7uMc

The connection to Knuth is tangential to the actual video subject, but it does contrast Knuth with LLMs as a framing device.

by asddubs

2/23/2026 at 12:29:06 PM

An hour later: wow, that was quite a rabbit hole. I return a fan of Tom 7, surfing the edge of mad genius. http://tom7.org/

by lioeters

2/22/2026 at 11:46:14 PM

He is still alive (I think?), so you could just ask him. I doubt he is sad so much as excited. Computer scientists are not SWEs worried about losing their careers.

by seanmcdirmid

2/23/2026 at 12:04:06 AM

He’s still here. In fact, in December he gave his annual Christmas lecture, and last month he was a guest at a Computer History Museum event.

by linguae

2/22/2026 at 11:49:17 PM

Excited? I doubt that. I'm guessing you haven't read his books.

by atomic128

2/23/2026 at 12:07:35 AM

He seems pretty fascinated with the possibilities.

https://cs.stanford.edu/~knuth/chatGPT20.txt

by CharlesW

2/23/2026 at 12:11:06 AM

"I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same. Best regards, Don"

by atomic128

2/23/2026 at 12:33:35 AM

There's more than one cherry to pick if one needs Mr. Knuth to have a purely-negative opinion about LLMs, but naturally any fascination is offset by the same concerns that any sane technologist has. In any case, it's all in his post.

by CharlesW

2/22/2026 at 11:52:11 PM

The techno pessimists on HN are probably not PhDs in computer science. I don’t think they understand what it takes to get there, and how it shapes your thinking afterwards.

by seanmcdirmid

2/23/2026 at 12:47:14 AM

Neither Wolfram nor Knuth holds a PhD in computer science, yet many would agree that both understand "what it takes to get there", as do many others who live sans a PhD in Comp. Sci.

by defrost

2/23/2026 at 1:35:17 AM

Needlessly pedantic.

Knuth's PhD is in mathematics, like Alan Turing, and many other significant computer scientists.

by robotresearcher

2/23/2026 at 3:27:32 AM

> Needlessly pedantic.

You don't have to pre-warn readers about your comments here; we're all needlessly pedantic.

That aside, the guts of this sub-branch is the correlation between {techno-pessimists on HN} and {people qualified to understand LLMs' workings and implications}.

Personally, I wouldn't limit set two to "PhDs in computer science", or even accept that {all PhDs in Comp Sci} is a subset of set two, as I made clear with my comment; nor would I argue a lack of overlap between sets one and two.

I'm interested to hear where you stand.

by defrost

2/23/2026 at 12:02:59 AM

Hopefully some are visionary enough to be dismayed that the endgame of their field is the acceleration of slop and fraud, the end of customer service, and the end of the reading of full, original documents.

I can't imagine being excited about any of that unless I was trying to make money from it.

by add-sub-mul-div

2/23/2026 at 1:47:39 AM

> the end of the reading of full, original documents

That's one that always gets me: people who use LLMs to summarize everything. It's like, bro, how lazy are you that you can't be bothered to read a handful of paragraphs of text? That takes all of 30 seconds. I can understand trying to get a computer to summarize a document which is dozens of pages long (though I would be concerned about hallucinations), but a lot of the tasks people use LLMs for are really easy already.

by bigstrat2003

2/23/2026 at 12:34:13 AM

> …LLMs have perverted the practical application of the art of computer programming. ("The LLM understands so I don't have to.") It's sad it happened during his lifetime.

If you look at magazine articles or TV shows and ads from the 1980s (a fun rabbit hole on YouTube, like the BBC Archive), the general promise was that "computers can do anything, if you just program them."

Well, nobody could figure out how to program them (except outcasts like us, who went on to suffer for it for the rest of our lives :')

OS makers like Microsoft, Apple, etc. all had their own ideas about how we should make apps, and none of them wanted to work together and still don't.

With phones and "AI" everywhere, we are actually closer to that original promise of everyone having a computer and being able to do anything with it, in a way that isn't solely dictated by corporations and their prepackaged apps:

Ideally, ChatGPT et al. should be able to create interactive apps on the fly on your iPhone. Imagine having a specific need and just being able to state it and get a custom app right away, just for you, on your device.

by Razengan

2/23/2026 at 12:45:27 AM

Past progress in software engineering is a tower of well-defined abstractions.

Compilers for languages that make specific guarantees about the semantics of their translation to machine code.

Libraries with well-defined interfaces that let you stand on the shoulders of others by understanding said interfaces and ignoring the internals.

This is how concrete progress is made. You build on solid blocks.

That era is ending.

by atomic128

2/23/2026 at 2:28:48 AM

That era ended 20 years ago. It's called "industrialization", a process that has happened to many other crafts in the past. AI is just the latest blow.

by rixed

2/23/2026 at 6:10:40 AM

...is that comment written by an LLM?

Human programmers are frequently hamstrung by human politics and economies.

Hell, even major developers like Google and Facebook still fight against letting iPhone apps run on iPad, for example. YouTube still doesn't support Picture-in-Picture on iPad.

It took YEARS for some big apps to just adopt Dark Mode. The best-paid programmers on the planet, wtf?

If the power of AI isn't artificially crippled, I could just say "Make me a native app for browsing {DumbWebsiteThatRefusesToProvideAnApp}" or "Fix HN's crap formatting" and get on with my life the way I want, without having to beg or fight our Corpo Gods.

by Razengan

2/23/2026 at 12:40:37 AM

What a weird, bitchy article. Knuth might be wrong but I gave up.

by oh_my_goodness

2/23/2026 at 1:22:10 AM

Same reaction. I can't even say whether the author is right or wrong, because I couldn't get through it.

by the-grump

2/23/2026 at 3:11:24 AM

[flagged]

by benreesman

2/23/2026 at 6:11:12 AM

[flagged]

by Razengan

2/23/2026 at 6:20:58 AM

A PhD-caliber thesis with devastating epistemics about failures that have conservatively claimed a dozen lives is going gray on an 18-year-old account, but this shit is fine.

@dang, I'm now threatening to buy a massive oppo campaign with immaculate data and OpenAI's fundraise hanging by a thread.

Fix it or I'll fix it.

by benreesman

2/23/2026 at 6:29:05 AM

[flagged]

by benreesman

2/23/2026 at 7:07:04 AM

Would you please stop posting like this?

by dang

2/23/2026 at 7:55:56 AM

No, I will continue to raise the alarm bells until YC-affiliated companies and executives stop getting sued for manslaughter or its moral equivalent.

Profanity is not ugly; ugly is ugly, and you back Instacart slave-labor practices with bipartisan objections of disgust.

by benreesman

2/23/2026 at 8:52:07 AM

Dan's offline now. I'm the other moderator here. We can't allow commenting like this to continue without taking any action. It has nothing to do with the targets of your attacks, and everything to do with keeping HN healthy. HN is not the venue for campaigns against specific individuals or organizations. The people you're referring to are not on HN, and haven't been for a long time. The people here are Dan and I and ordinary HN users. These ongoing outbursts have no effect on the companies and people you're talking about, and serve only to make HN worse for the people who are here. You're welcome to take whatever action you’re legally able to against the people and companies you've mentioned. But we can't have you continually venting this stuff in unrelated HN threads.

by tomhow

2/23/2026 at 12:47:43 PM

Be precise in such serious accusations.

Be precise about the harm done to the community, which I've been part of for longer than its newer members have been alive. A community in which my accurate forecasting of "risk of ruin" type outcomes has error bars between 60-90%.

Be precise about what a healthy HN means, because that's not written down anywhere. The guidelines, such as they are? A masterclass in selective enforcement of blank-check norms for money.

You've got the same dataset I do, and exactly the same access to legitimate authority as opposed to self-arrogated police powers on behalf of public benefit corporations which have neither benefited the public nor a shareholder.

I was here long before you or Dan, and if you ban me, it will be the wedge I need to move this conversation somewhere else.

Let's dance.

edit:

and one more thing, quote a primary source once in a while.

I have better citations ranting than you do LARPing adult:

https://www.wired.com/story/instacart-delivery-workers-still...

by benreesman

2/23/2026 at 2:19:58 PM

[dead]

by benreesman

2/23/2026 at 7:39:55 AM

If you read this and mount a credible objection that can't be addressed by tweaks to methodology, then I will leave the site forever.

But the asymmetry of the power of selective participation is tyrannical: you engage when you like, your silence is a moral victory by default, and I'm the senior community member by a lot.

https://github.com/b7r6/cassandra-dissertation

6 kids dead, not counting Suchir.

Engage constructively, substantially, and in public, or deal with my press releases.

The data shows black holes in comments and submissions that correlate with Altman. I ran it on myself to not fix anyone. There are other search parameters that are worse, it's open source, proven in Lean 4 to a growing degree, and you win by making an argument, not by being an unelected apparatchik.

by benreesman

2/23/2026 at 7:50:18 AM

This time silence looks guilty, because this time I brought the corpus and the math.

The burden is on you now to show you're not a parrot for goons.

by benreesman

2/23/2026 at 6:16:00 PM

Ok, this is not good for anyone, so I've banned the account until we have some reason to believe that things have stabilized.

I know it may be hard to believe right now, but we appreciate you and your contributions. We can't have users going on tilt on Hacker News though. As I said, experience has taught us that it's not good for anyone.

by dang

2/23/2026 at 2:34:59 PM

[dead]

by benreesman

2/23/2026 at 2:36:48 AM

Here is my TL;DR and interpretation:

1. Knuth laments the lack of technical ("internal") history of computing, which traces the evolution of technology and ideas, and should be of great interest and benefit to practitioners.

2. Historians typically focus on their domains of expertise - social history, culture, economics, politics, personalities, etc. - and tend to write non-technical ("external") history of computing.

3. The people who have the relevant technical expertise - practitioners, researchers, and scholars within the computing field - are qualified (in terms of technical understanding at least) to write this technical history, but have basically zero economic incentive to do so. There is no reward for industry practitioners to write the technical history of computing, and there is little to no reward for computing researchers or scholars either. And of course if one is (or becomes) an expert in computing, there is no economic incentive to become (or remain) a historian.

4. Nonetheless, there is in fact a small (and hopefully growing) group of scholars who seem to be interested in investigating the technical history of computing (and according to the author "holistic" history which includes multiple aspects.)

by musicale

2/23/2026 at 3:09:56 AM

I tend to agree with Knuth - technical history is extremely valuable to both practitioners and researchers in computing, and there isn't enough of it.

While it is understandable that computing practitioners and researchers want to look forward to the next "new" thing rather than backward to "old" things, ignoring computing history means that we are often reinventing the wheel, repeating old mistakes, etc., all while lacking an understanding of how and why things are the way they are today. And perhaps missing out on a great deal of fun and intellectual engagement as well.

Fortunately there is some activity in terms of writing up and analyzing the technical history of computing, and I certainly appreciate the work of the CHM, journals like the Annals of the History of Computing, the work of retrocomputing hobbyists, and the work of the scholars mentioned in the article. But (as the article notes) there are few economic and career incentives - in history or in computing - to produce this important work.

The article validates Knuth with these statements:

> For different reasons, outlined below, neither group has shown much interest in supporting work of the kind favored by Knuth. That is why it has rarely been written.

> Most of this new work is aimed primarily at historians, philosophers, or science studies specialists rather than computer scientists

> Work of the particular kind preferred by Knuth will flourish only if his colleagues in computer science are willing to produce, reward, or commission it.

The second part of this last sentence isn't wrong, but sidesteps the first point. One might similarly criticize history departments for failing to reward or commission technological literacy.

by musicale

2/23/2026 at 1:36:17 PM

I also agree with Knuth. For me it has been extremely valuable to know the history of various technologies, and especially to know the reasons why the optimum solutions have been replaced from time to time, and the causal connections between various discoveries.

I frequently see opinions expressed that old scientific and technical publications are obsolete, but in my opinion this is very naive.

The optimum technology or algorithm for solving a certain problem changes when improvements are made in different domains. However, the range of kinds of solutions for a given problem is usually finite, so when the optimum solution changes over time, it may necessarily change to a kind of solution that has already been used in the past.

Because of this, it is very frequent to see claims about the discovery of "new" things, where the so-called "new" things were well known and widely used some decades ago, or even much earlier.

The worst part is not the time wasted on the rediscovery of old things, but the fact that the rediscoveries are usually incomplete: the finer points are not also rediscovered, such as which are their most efficient variants and which are their limitations, which may make them inapplicable in certain contexts.

Knowing a detailed technical and scientific history avoids such cases.

by adrian_b

2/23/2026 at 8:19:10 AM

*practitioners - too late to fix typo

by musicale

2/23/2026 at 2:22:54 PM

Yep, a very badly written article.

I made a full pass but was annoyed by the author's contention that the "History of the History of Software" (WTF does this even mean?) should be treated as seriously as "Computer Science" itself. While there is some logic in saying that "Computing" involves "Computer Science + all its various usages in various domains", focusing on the latter (ancillary) and not the former (primary) is certainly the "dumbing down" that Knuth correctly takes issue with. A good parallel would be a "scientific theory" and its "various realizations in various domains".

This, for me, was the final validation that this article is not to be taken seriously:

> In his paper, Campbell-Kelly offered a “biographical mea culpa” for his own early work that he now reads with a “mild flush of embarrassment.” He came to see his erstwhile enthusiasm for technical history as a youthful indiscretion and his conversion to business history as an act of redemption.

by rramadass

2/23/2026 at 3:09:29 AM

[flagged]

by benreesman

2/23/2026 at 1:48:02 AM

LinkedIn version of the history of CS:

In the beginning was assembly language, then we got C, followed by C++ bringing in OOP and Java making it safe, ...

by kazinator

2/23/2026 at 12:05:46 AM

Title should note that this is a 2015 post.

by smitty1e