alt.hn

2/12/2026 at 7:52:49 AM

65 Lines of Markdown, a Claude Code Sensation

https://tildeweb.nl/~michiel/65-lines-of-markdown-a-claude-code-sensation.html

by roywashere

2/13/2026 at 10:35:06 PM

I have been thinking a lot about the use of AI and how to use it. Part of my process has been watching others, namely the people who I thought were incompetent at their job before AI.

I have found the following, but I suspect as AI gets better this will change.

1) those who were incompetent before still are, but AI hides it.

2) those who were competent before AI do vastly more with AI. They seem to apply it in a way that simply overshadows what the incompetent are doing.

3) the incompetent seem to be fascinated with things like skills, pre-prompts, setting policies and guidelines, and workshops. The competent seem to need none of this, are not going to workshops, already have their own, and are simply more productive.

by mbrumlow

2/12/2026 at 8:42:59 AM

LLMs are the eternal September for software, in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”. There’s no longer a reliable way to filter signal from noise.

Those 3000 early adopters who are bookmarking a trivial markdown file largely overlap with the sort of people who breathlessly announce that “the last six months of model development have changed everything!”, while simultaneously exhibiting little understanding of what has actually changed.

There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together, and their judgement of progress is not to be trusted.

by timr

2/12/2026 at 9:07:17 AM

Sometimes I just bookmark things because I think to myself “Maybe I’ll try this out, when I have time” which then likely never happens.

So I wouldn’t put any stock in 3k stars at all.

by adjfasn47573

2/12/2026 at 9:22:04 AM

> Sometimes I just bookmark things because I think to myself “Maybe I’ll try this out, when I have time” which then likely never happens.

For me that’s 100% of the time. I only bookmark or star things I don’t use (but could be interesting). The things I do use, I just remember. If they used to be a bookmark or star, I remove it at that point.

by latexr

2/12/2026 at 10:07:57 AM

I agree.

I'm sure I'll piss off a lot of people with this one but I don't care any more. I'm calling it what it is.

LLMs empower those without the domain knowledge or experience to identify if the output actually solves the problem. I have seen multiple colleagues deliver a lot of stuff that looks fancy but doesn't actually solve the prescribed problem at all. It's mostly just furniture around the problem. And the retort when I have to evaluate what they have done is "but it's so powerful". I stopped listening. It's a pure faith argument without any critical reasoning. It's the new "but it's got electrolytes!".

The second major problem is corrupting reasoning outright. I see people approaching LLMs as an exploratory process and let the LLM guide the reasoning. That doesn't really work. If you have a defined problem, it is very difficult to keep an LLM inside the rails. I believe that a lot of "success" with LLMs is because the users have little interest in purity or the problem they are supposed to be solving and are quite happy to deliver anything if it is demonstrable to someone else. That would suggest they are doing it to be conspicuous.

So we have a unique combination of self-imposed intellectual dishonesty, mixed with irrational faith which is ultimately self-aggrandizing. Just what society needs in difficult times: more of that! :(

by dgxyz

2/12/2026 at 3:31:30 PM

> stuff that looks fancy but doesn't actually solve the prescribed problem at all."

Exactly - we are in the age of "AI-posers".

by mentalgear

2/13/2026 at 4:37:47 PM

Do you ever filter signal from noise by the quality of the code? The code written by the Google founders was eventually rewritten by others, and it was likely worse than what a fresh grad produces today. Still, that initial search engine is the most influential thing they ever built, and it's something the modern Bay Area will probably never create again.

by pikachu0625

2/12/2026 at 9:37:32 AM

Is Andrej Karpathy the guy who 'couldn't make it through a [coding] bootcamp' in this description?

by OJFord

2/12/2026 at 10:18:30 AM

Andrej Karpathy named the pitfalls and didn't make the markdown file

by croes

2/12/2026 at 11:44:06 AM

Ugh right sorry, 'inspired'.

by OJFord

2/12/2026 at 9:31:18 AM

> LLMs are the eternal September for software, in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”

The democratization of programming (derogatory)

by sph

2/12/2026 at 9:34:47 AM

And vendor locking

by podgorniy

2/12/2026 at 10:22:21 AM

Free compilers were the democratization of programming.

This is the banalization of software creation by removing knowledge as a requirement. That's not a good thing.

You wouldn't call the removal of a car's brakes the democratization of speed, would you?

by croes

2/12/2026 at 9:29:21 AM

The Markdown file looks like it’s written for people who either haven’t discovered Plan mode, or who can’t be bothered to read a generated plan before running with it.

by EdNutting

2/12/2026 at 9:47:51 AM

[flagged]

by squidbeak

2/12/2026 at 10:29:38 AM

>> LLMs are the eternal September for software

>Cliche.

Too early to tell, so let's wait and see before we brush that off.

>> in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”

>Snobbery.

Reality, and actually a selling point of AI tools. I pretty often see ads for making apps without any knowledge of programming.

>> the sort of people who breathlessly announce

> Snobbery / Cliche.

Reality

>> There’s no longer a reliable way to filter signal from noise.

> Cliche.

Reality, or can you distinguish a well-programmed app from unaudited BS?

>> There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together

>Cliche / Snobbery.

99% is too high, maybe 50%

>> their judgement of progress is not to be trusted

> Tell me, timr, how much judgement is there in snotty gatekeeping and strings of cliches?

We have many security issues in software coded by people who have experience in coding, so how much do you trust software ordered by people who can't judge if the program they get is secure or full of security flaws? Don't forget these LLMs are trained on pre-existing faulty code.

by croes

2/12/2026 at 8:33:34 AM

With AI, it feels like deterministic outcomes are not valued as experience taught us it should.

The absence of means to measure outcomes of these prompt documents makes me feel like the profession is regressing further into cargo culting.

by pyrale

2/12/2026 at 9:36:25 AM

It's particularly puzzling because until a few months ago the unmistakable consensus at the fuzzy borderlands between development and operations was:

1. Reproducibility

2. Chain of custody/SBOM

3. Verification of artifacts of CI

All three of which are not simply difficult but in fact by nature impossible when using an LLM

by bandrami

2/12/2026 at 9:31:35 AM

That might be because AI is being pushed largely by leaders that do not have the experience you’re referring to. Determinism is grossly undervalued - look at how low the knowledge of formal verification is, let alone its deployment in the real world!

by EdNutting

2/12/2026 at 9:08:39 AM

That is because nothing in the world is deterministic, they are just all varying degrees of probability.

by XenophileJKO

2/12/2026 at 9:21:51 AM

This rings hollow to me.

When my code compiles in the evening, it also compiles the next morning. When my code stops compiling, usually I can track the issue in the way my build changed.

Sure, my laptop may die while I'm working and so the second compilation may not end because of that, but that's not really comparable to an LLM giving me three different answers when given the same prompt three times. Saying that nothing is deterministic buries the distinction between these two behaviours.

Deterministic tooling is something the developer community has worked very hard for in the past, and it's sad to see a new tool giving none of it.

by pyrale

2/12/2026 at 9:43:14 AM

That is called a deepity: a statement which sounds profound but is ultimately trivial and meaningless.

https://rationalwiki.org/wiki/Deepity

Determinism concerns itself with the predictability of the future from past and present states. If nothing were deterministic, you wouldn’t be able to set your clock or plan when to sow and when to harvest. You wouldn’t be able to drive a car or rest a glass on a table. You wouldn’t be able to type the exact same code today and tomorrow and trust it to compile identically. The only reason you can debug code is determinism: because you can make a prediction of what should happen, and by inspecting what did happen you can deduce what went wrong several steps before.

by latexr

2/13/2026 at 1:23:27 AM

Can you predict when solar radiation hits a memory cell, or when a given server node will die? Not really, but you can model the probability of it happening. My point was that all the systems we work with have failure modes and non-determinate output at some rate. That rate might be really small... but at what point does the rate of non-deterministic behavior make something "non-deterministic"? A language model can be deterministic in that you can get the same output from the same input if you so desire (again, barring systemic failures and mitigating for out-of-order floating point operations).

I think it's just philosophically interesting, because no real system is fully predictable; we just choose some arbitrary threshold of accuracy to define it as such.

by XenophileJKO

2/13/2026 at 6:31:51 AM

Right but for an LLM non-determinism is the success mode, not the failure mode. By design you get different outputs when running the same inputs. Which, again, really contradicts the consensus that was widely held even just a year ago about how to do CI.

by bandrami

2/13/2026 at 7:15:08 AM

I mean, you don't have to. With 0 temp, the only source of non-determinism is really the non-associativity of floating point math, which changes results depending on accumulation order. That could be eliminated, if we wanted to, by restricting the ordering.
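Both halves of that claim are easy to demonstrate in a few lines of plain Python (a toy sketch, not an actual inference kernel; `greedy_pick` is a made-up stand-in for temperature-0 token selection):

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order can give slightly different results, which is
# why accumulation order matters for bit-for-bit reproducible inference.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

def greedy_pick(logits):
    """Temperature-0 ('greedy') decoding: always take the argmax,
    so identical logits always yield the identical token index."""
    return max(range(len(logits)), key=lambda i: logits[i])

print(greedy_pick([1.0, 3.0, 2.0]))  # 1, every single time
```

If the logits themselves shift by a few ULPs between runs (because of accumulation order on the GPU), two near-tied candidates can swap places under the argmax, which is where the residual non-determinism at temperature 0 comes from.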

by XenophileJKO

2/13/2026 at 7:50:07 AM

True but at that point you're not really using "an LLM" in the usual sense, you're using a very resource-hungry database.

by bandrami

2/12/2026 at 9:06:10 AM

> surely, 4,000 developers can’t be wrong

Apparently almost half of all the websites on the internet run on WordPress, so it's entirely possible for developers to be wrong at scale.

by onion2k

2/12/2026 at 1:28:47 PM

I've got a few clients using a Wordpress setup. They like being able to edit their own blogs, add new job listings, and rearrange the leadership page when people are promoted. All of that can happen without bugging the developer (me) and all the changes are fully auditable and reversible.

What FOSS solution would you recommend instead?

by joenot443

2/13/2026 at 4:16:25 AM

Astro has a list of options in their docs, for example: https://docs.astro.build/en/guides/cms/

Some are FOSS, self-hostable, or keep your content in a form which you can easily carry over to another service.

by bryanhogan

2/12/2026 at 7:09:42 PM

PayloadCMS is FOSS (until Figma changes their mind), but doesn’t have the huge amount of plugins / extensions / themes wordpress does.

I think there’s another FOSS project, but there really isn’t anything with the same breadth as Wordpress.

by lasre

2/12/2026 at 9:46:44 AM

Why is that a bad thing?

by bananaflag

2/12/2026 at 10:02:34 AM

WordPress as a CMS is fine, but 90% of websites (e.g. the bit that lands in your browser) don't need the complexity of runtime generation and pointlessly run an application with a huge attack surface that's relatively easy to compromise. If sites used WordPress as a backend tool with a static site generator to bake the content there'd be far fewer compromised websites.
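That bake step can be sketched in a few lines (hypothetical helper names; the `/wp-json/wp/v2/posts` endpoint and the `title.rendered` / `content.rendered` / `slug` fields are WordPress's standard REST API):

```python
import json
import pathlib
import urllib.request

def fetch_posts(base_url):
    # Published posts are exposed by WordPress's built-in REST API.
    with urllib.request.urlopen(f"{base_url}/wp-json/wp/v2/posts") as resp:
        return json.load(resp)

def bake(posts, out_dir):
    # Render each post to a flat HTML file; the result can be served
    # from any static host, with no PHP runtime exposed to visitors.
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for post in posts:
        page = (f"<h1>{post['title']['rendered']}</h1>\n"
                f"{post['content']['rendered']}")
        (out / f"{post['slug']}.html").write_text(page)
    return sorted(f.name for f in out.iterdir())
```

Run `bake(fetch_posts("https://example.com"), "public")` from a cron job or a publish webhook and the editor keeps the WordPress admin UI while visitors only ever touch static files.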

WordPress's popularity is mostly adding a huge amount of complexity, runtime cost, and security risk for every visitor for the only benefit of a content manager being able to add a page more easily or to configure a form without needing a developer. That is optimizing the least important part of the system.

by onion2k

2/12/2026 at 9:47:41 AM

and there is a crap-ton of "apps" that repackage the entire world^W^W excuse me, Chromium, hog RAM, and destroy any semblance of native feel - all to write "production-ready" [sic] cross-platform code in JavaScript, a language more absurd than C++[0] but so easy to start with.

[0]: https://jsdate.wtf

by bpavuk

2/13/2026 at 6:53:08 PM

13 Markdown files have erased 285 billion dollars of wealth in the stock market - https://martinalderson.com/posts/wall-street-lost-285-billio...

Why are you surprised that it takes only 65 lines of Markdown to create the next Linus Torvalds or Donald Knuth!

I’m learning Haskell by Thinking before Coding, without working through the difficult exercises. I finally understand Monads because of Thinking first.

by sathish316

2/12/2026 at 8:45:50 AM

I know some people that would also benefit from these 65 lines of markdown. Even without using AI.

by john_owl

2/12/2026 at 9:15:23 AM

This! It feels like most of these markdown files floating around can bear the label _“stuff you struggled to have a sloppy coworker understand”_.

by crousto

2/12/2026 at 9:33:35 AM

Possibly because the people creating them _are_ the sloppy coworkers and they’re now experiencing (by using an AI tool) a reflection of what it’s like to work with themselves.

Even if this is complete nonsense, I choose to believe it :’)

by EdNutting

2/12/2026 at 10:30:49 AM

Line one would be a good start

>Think Before Coding

by croes

2/12/2026 at 9:25:02 AM

My probably incorrect, uninformed hunch is that users convinced of how AI should act actually end up nerfing its capabilities with their prompts. Essentially dumbing it down to their level, losing out on the wisdom it's gained through training.

by xlbuttplug2

2/12/2026 at 9:44:09 AM

I am with you on this one.

Back in the GPT-3 and 3.5 days I found that the presence of any system message, even a one-word one, changed the output drastically for the worse. I have not verified this behaviour with recent models.

Since then I don't impose any system prompts on users of my tg bot. This is so unusual and wild compared to what others do that very few actually appreciate it. I'm happy I don't need to make a living from this project, so I can keep it ideologically clean: the user controls the system prompt, temperature, and top_p, and gets a selection of the top barebones LLMs.

by podgorniy

2/13/2026 at 3:35:16 AM

I often wonder this as well, things are moving so quickly that unless you want to keep chasing the next best prompt/etc then you are better running as close to vanilla as you can IMHO.

Similar for MCP/Skills/Prompts, I’m not saying they can’t/don’t help but I think you can shoot yourself in the foot very easily and spend all your time trying to maintain those things and/or try to force the agent to use your Skill/MCP. That or having your context eaten up with bad MCP/Skills.

I read a comment the other day from someone talking about Claude Code getting dumber; they then went on to explain that switching would be hard due to their MCP/Skills/Skill-router setup. My dude, maybe _that's_ the problem?

by joshstrange

2/12/2026 at 9:35:33 AM

So we've reached a point where the quality of a piece of software is decided based on stars on GitHub.

The exact same thing happened with xClaw, where people were going "look at this app that got thousands of stars on GitHub in only a few days!".

How is that different than the followers/likes counts on the usual social networks?

Given how much good it did to give power to strangers based on those counts, it's hard not to think that we're going in the completely wrong direction.

by wiether

2/13/2026 at 3:56:33 AM

Funny. The author is not sure if his (and the original) extension improves Claude output, but since the original project has 4k+ stars on GitHub, "surely, 4,000 developers can’t be wrong". So: "Please try for yourself! Install the extension, don’t forget to star my repository and see the results". No matter if it's good, just star my repo.

by quiet35

2/12/2026 at 8:54:58 AM

All good advice in general. Could add others, like x-y problems etc.

This feels like a handbook for a senior engineer becoming a first level manager talking to junior devs. Which is exactly what it should be.

However, this will go horribly wrong if junior devs are thus “promoted “ to eng managers without having cut their teeth on real projects first. And that’s likely to happen. A lot.

by golem14

2/12/2026 at 8:41:42 AM

Bro science is rampant in the AI world. Every new model that comes out is the best there ever was, every trick you can think of is the one that makes all the other users unsophisticated, "bro, you are still writing prompts as text? You have to put them into images so the AI can understand them visually as well as textually".

It isn't strange that this is the case, because you'd be equally hard pressed to compare developers at different companies. Great to have you on the team Paul, but wouldn't it be better if we had Harry instead? What if we just tell you to think before you code, would that make a difference?

by mosselman

2/12/2026 at 9:47:03 AM

(surprised smiley) - wait, this is for real? Can I feed my whiteboard scribbles as prompt?

That would be game changing!

by ccozan

2/12/2026 at 8:56:59 AM

That's just how it is in the LLM world. We've just gotten started. Once upon a time, the SOTA prompting technique was "think step by step".

by arjie

2/12/2026 at 9:33:58 AM

You know, it's good old prompt/context engineering. To be fair, markdowns actually can be useful because of LLM's (Transformer's) gullible/susceptible nature... At least that's what I discovered developing a prompting framework.

Of course it's hilarious that a single markdown got 4,000 stars, but it looks like just another example of how people chase a buzzing X post in the tech space.

by 3371

2/12/2026 at 9:39:09 AM

If exactly this markdown had been written by some random Joe from the internet, no one would have noticed it. So these stars exist not because of the quality or utility of the text.

by podgorniy

2/12/2026 at 9:43:52 AM

Maybe I'm just really lucky but reading those instructions it's basically how I find Claude Code behaves. That repo with 4k stars is only 2 weeks old as well, so it's obviously not from a much less competent model.

by mcintyre1994

2/12/2026 at 9:48:37 AM

Same here, I find most of these skills/prompts a bit redundant. Some people argue that by including these in the conversation, one is doing latent space management of sorts, bringing the model closer to where one wants it to be.

I wonder what will happen with new LLMs that contain all of these in their training data.

by cobolexpert

2/12/2026 at 9:46:03 AM

The next inevitable step is LLM alchemy. People will be writing crazy-ass prompts with un-understandable text which somehow get the system to work better than straight-human-text prompts.

by podgorniy

2/12/2026 at 8:50:54 AM

> But the original repository has almost 4,000 stars, and surely, 4,000 developers can’t be wrong?

This is such a negative messaging!

Let's check star history: https://www.star-history.com/#forrestchang/andrej-karpathy-s...

1. Between Jan 27th and Feb 3rd stars grew quickly to 3K, project was released at that time.

2. People star it to be on top of NEW changes, people wanted to learn more about what's coming - but it didn't come. Doesn't mean people are dumb.

3. If OP synthesized the Markdown into a single line, "Think before coding", why did he go through publishing a VS Code extension? Why not just share the learnings and tell the world, "Add 'Think before coding' before your prompt and please try for yourself!"

PS: no, I haven't starred this project; I didn't know about it. But I disagree with the author's "assumptions" about stars and correlating them to some kind of insight revelation.

by throwaw12

2/12/2026 at 8:55:01 AM

I just packaged the extension for the fun of it! And I do want people to try for themselves, that is the point. About people that are not dumb; surely many people are not dumb; many people are very smart indeed. But that does not prove there are no dumb or gullible people!

by roywashere

2/12/2026 at 8:59:41 AM

Thanks for responding and sharing your perspective.

What I would say is that you could have omitted some of the negativity and judgement from your post about 4k devs starring something because it looks simple, because they might have had different intentions for starring.

Here is another great example of 65K "not wrong" developers: https://github.com/kelseyhightower/nocode - there is no code, long before AI was a trend, released 9 years ago, but it got 65K stars! It doesn't mean those devs are "not wrong"; it means people are curious and save things "just in case" to showcase somewhere.

by throwaw12

2/12/2026 at 9:14:49 AM

Nocode is obviously a banter repo, and people starred it because it made them laugh.

by joe_fishfish

2/12/2026 at 9:23:51 AM

I wonder if someone could explore creating a standalone product out of that markdown, just for the fun of it.

by zihotki

2/12/2026 at 9:38:53 AM

Perhaps a cool wall art sticker? "In this house we don't assume. We don't hide confusion. We surface tradeoffs."

by exitb

2/12/2026 at 9:32:39 AM

Let's start a company and raise money from investors!!

by roywashere

2/13/2026 at 3:44:04 AM

This is so incredibly depressing. As of the time of me posting this comment this has 73 upvotes. I'm sorry, but this is absurd. People put real effort in real posts that don't see half this many votes but AI Slop on top of AI Slop? Upvote!

This post is about a prompt that has 4K+ stars (that matters to people?) so they wrapped it up in an extension for Cursor and they ask you to try it out and star their repo (Please clap?).

And this "Sensation"?

> Was the result better? I’m not really sure.

I cannot even...

A Slop article about a Slop prompt that a bunch of Slop people starred, "I don't know if it helps but here is an extension that you should use", just the laziest of everything.

I'd bet that this prompt, if it is even helpful at all, will stop being helpful or become redundant within a couple of weeks at most.

I'm _far_ from anti-LLM, I use them quite a bit daily, I run multiple Claude Code instances (which I review, gasp!), but this is reaching a fever pitch and this article was the straw that broke this camel's back.

by joshstrange

2/12/2026 at 8:45:44 AM

These hopeful incantations are a kind of cargo cult… But when applied to programming, the wild thing is that the cult natives actually built the airports and the airplanes but they don’t know what makes them fly and where the cargo comes from.

by pavlov

2/12/2026 at 12:07:32 PM

The document under discussion:

https://github.com/forrestchang/andrej-karpathy-skills/blob/...

I find the whole premise of writing some vague instructions, feeding them to a stochastic parrot and expecting a solid engineering process to materialize out of the blue quite ridiculous.

Any sufficiently advanced "AI" technology is indistinguishable from bullshit.

by ciconia

2/12/2026 at 10:00:50 AM

There will come a day soon where “hello world” will be typed by sentient hands for the last time.

by brador

2/12/2026 at 9:47:34 AM

I found that "Make no mistakes, or you go to jail" improves claude-code's performance by about 43%

by rootlocus

2/12/2026 at 9:51:05 AM

These days I genuinely can't tell if articles are satire or not.

by csomar