2/17/2026 at 10:49:30 PM
>Requirements
>A will to live (optional but recommended)
>LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.
>Open an issue if theres anything you want to discuss. Or don't. I'm not your mum.
>Based in New Zealand
Oceania sense of humor is like no other haha
The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.
The sheer amount of knowledge required to even start such a project is really something else, and to prove the manual wrong on the machine language level is something else entirely.
When it comes to AMD, "no CUDA support" is the biggest "excuse" to join NVIDIA's walled garden.
Godspeed to this project; the more competition there is, the less NVIDIA can keep destroying PC parts pricing.
by h4kunamata
2/17/2026 at 10:59:10 PM
> The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.
The project owner is talking about LLVM, a compiler toolkit, not an LLM.
by querez
2/18/2026 at 9:50:28 AM
They also said "hand written", implying that no LLMs whirred, slopped and moonwalked all over the project.
by nurettin
2/18/2026 at 2:05:31 PM
I mean.. I'm one of the staunchest skeptics of LLMs as agents, but they're amazing as supercharged autocomplete and I don't see anything wrong with them in that role. There's a position between handwritten and slopped that's pareto.
by jorvi
2/18/2026 at 5:39:45 PM
When I want to autocomplete my code with IP scraped from github with all licensing removed, nothing beats an LLM.
by pklausler
2/18/2026 at 4:25:41 PM
They can take away our jobs, but by god they cannot take away our autism!
by nurettin
2/18/2026 at 12:08:40 AM
It's actually quite easy to spot whether LLMs were used or not: very few commits in total, AI-like documentation and code comments.
But even if LLMs were used, the overall project does feel steered by a human, given some decisions like not using bloated build systems. If this actually works then that's great.
by kmaitreys
2/18/2026 at 12:13:47 AM
Since when is squashing noisy commits an AI activity instead of good manners?
by butvacuum
2/18/2026 at 7:45:49 AM
The first commit was 17k lines. So this was either developed without using version control or at least without using this gh repo. Either way I have to say certain sections do feel like they would have been prime targets for having an LLM write them. You could do all of this by hand in 2026, but you wouldn't have to. In fact it would probably take forever to do this by hand as a single dev. But then again there are people who spend 2000 hours building a cpu in minecraft, so why not. The result speaks for itself.
by sigmoid10
2/18/2026 at 10:06:32 AM
> The first commit was 17k lines. So this was either developed without using version control or at least without using this gh repo.
Most of my free-time projects are developed either by me shooting the shit with code on disk for a couple of months until it's in a working state, and then making one first commit. Alternatively, I commit a bunch iteratively, but before making it public I fold it all into one commit, which becomes the init. 20K lines in the initial commit is not that uncommon; it depends a lot on the type of project though.
I'm sure I'm not alone with this sort of workflow(s).
by embedding-shape
2/18/2026 at 10:27:23 AM
Can you explain the philosophy behind this? Why do this, what is the advantage? Genuinely asking, as I'm not a programmer by profession. I commit often irrespective of the state of the code (it may not even compile). I understand git commit as a snapshot system. I don't expect each commit to be a pristine, working version.
Lots of people in this thread have argued for squashing, but I don't see why one would do that for a personal project. In large-scale open-source or corporate projects I can imagine they would like to have clean commit histories but why for a personal project?
by kmaitreys
2/18/2026 at 11:33:19 AM
I do that because there's no point in anyone seeing the pre-release versions of my projects. They're a random mess that changed the architecture 3 times. Looking at that would not give anyone useful information about the actual app. It doesn't even give me any information. It's just useless noise, so it's less confusing if it's not public.
by viraptor
2/18/2026 at 12:39:49 PM
I don't care about anyone seeing or not seeing my unfinished hobby projects, I just immediately push to GitHub as another form of backup.
by panzi
2/19/2026 at 4:41:04 AM
I literally keep this in my bash history so I can press up once and hit enter to commit: `git add *; git commit -m "changes"; git push origin main;`
I use git as a backup and commit like every half an hour... but I make sure to give a proper commit message once a certain milestone has been reached.
I'm also with the author on squashing all these commits into a new commit and then pushing it in one go as the init commit before going public.
by freakynit
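For illustration, a minimal sketch of the "fold everything into one init commit before going public" step described above; the branch and remote names are placeholders, and this assumes the public repo is fresh:

    # start an empty branch with no history, commit the current tree, publish only that
    git checkout --orphan publish
    git add -A
    git commit -m "Initial commit"
    git push origin publish:main   # the public repo only ever sees this one commit

The messy private history stays in the local clone; nothing before the single init commit reaches the remote.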
2/18/2026 at 4:17:13 PM
I don't care about backing up unfinished hobby projects, I just write/test until arbitrarily sharing, or if I'm completely honest, potentially abandoning it. I may not 'git init' for months, let alone make any commits or push to any remotes.
Reasoning: skip SCM 'cost' by not making commits I'd squash and ignore, anyway. The project lifetime and iteration loop are both short enough that I don't need history, bisection, or redundancy. Yet.
Point being... priorities vary. Not to make a judgement here, I just don't think the number of commits makes for a very good LLM purity test.
by bravetraveler
2/18/2026 at 1:47:08 PM
You should push to a private working branch, and frequently. But when merging your changes to a central branch, you should squash all the intermediate commits and just provide one commit with the asked-for change.
Enshrining "end of day commits", "oh, that didn't work" mistakes, etc. is not only demoralizing for the developer(s), but it makes tracing changes all but impossible.
by butvacuum
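A minimal sketch of that squash-on-merge step, with hypothetical branch names:

    # land a noisy working branch on the central branch as a single commit
    git checkout main
    git merge --squash my-working-branch   # stages the combined diff without a merge commit
    git commit -m "Add feature X"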
2/18/2026 at 10:54:03 AM
> I don't expect each commit to be a pristine, working version.
I guess this is the difference: I expect the commit to represent a somewhat working version, at least when it's upstream; locally it doesn't matter that much.
> Why do this, what is the advantage?
Cleaner, I suppose. It doesn't make sense to have 10 commits where 9 are broken and half-finished and the 10th is the only one that works; I'd rather just have one larger commit.
> they would like to have clean commit histories but why for a personal project?
Not sure why it'd matter if it's personal, open source, corporate or anything else; I want my git log clean so I can do `git log --oneline` and actually understand what I'm seeing. If there are 4-5 commits with "WIP almost working" between each proper commit, then that's too much noise for me, personally.
But this isn't something I'm dictating everyone to follow, just my personal preference after all.
by embedding-shape
2/18/2026 at 5:39:40 PM
> If there are 4-5 commits with "WIP almost working" between each proper commit, then that's too much noise for me, personally.
Yep, no excuse for this; feature branches exist for this very reason. wip commits -> git rebase -i master -> profit
by TuxSH
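Spelled out, that feature-branch recipe looks roughly like this (branch names are placeholders):

    # on the feature branch, after a pile of WIP commits
    git rebase -i master                       # mark everything after the first commit as "squash" or "fixup"
    git push --force-with-lease origin my-feature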
2/18/2026 at 11:13:42 AM
Fair enough. Thanks for the clarification. Personally, I think everything before a versioned release (even something like 0.1) can be messy. But from your point I can see that a cleaner history has advantages.
Further, I guess if the author is expecting contributions to the code in the future, it might be more "professional" for the commits to be only the ones which are relevant.
I consider my own projects to be just for my own learning and understanding, so I never cared about this, but I do see the point now.
Regardless, I think it still remains a reasonable sign of someone doing one-shot agent-driven code generation.
by kmaitreys
2/18/2026 at 11:16:54 AM
One point I missed, and it might be the most important, since I don't care about it looking "professional" or not, only about how useful and usable something is: if you have commits with the codebase in a broken state, then `git bisect` becomes essentially useless (or very cumbersome to use), which makes it kind of tricky to track down regressions unless you'd like to go back to the manual way of tracking those down.
> Regardless, I think it still remains a reasonable sign of someone doing one-shot agent-driven code generation.
Yeah, why change your perception in the face of new evidence? :)
by embedding-shape
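For reference, a minimal bisect session of the kind described above; the "good" ref is a placeholder, and it only works smoothly if the commits in between actually build:

    git bisect start
    git bisect bad HEAD          # the current state has the regression
    git bisect good v0.1         # last commit/tag known to be fine (placeholder)
    # git now checks out midpoints; test each one and answer with:
    git bisect good              # or: git bisect bad
    git bisect reset             # finish and return to where you started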
2/18/2026 at 11:44:23 AM
I see the point.
Regarding changing the perception, I think you did not understand the underlying distrust. I will try to use your examples.
It's a moderate-size project. There are two scenarios: the author used git/some VCS, or they did not. If they did not use it, that's quite weird, but maybe fine. If they did use git, then perhaps they squashed commits. But at some point those commits did exist. Let's assume all of them were pristine. It's 16K loc, so there must be a decent number of these pristine commits that were squashed. But what was the harm in leaving them?
So the history must have consisted of both clean commits and broken commits. But we have seen this author likes to squash commits. Hmm, so why did they only do it at the end and not along the way?
Yes, I have been introduced to a new perception, but the world does not work on "if X, then not Y" principles. And this is a case where the two things being discussed are not mutually exclusive like you are assuming. But I appreciate this conversation, because I learnt the importance and advantages of keeping a clean commit history, and I will take that into account next time before reaching the conclusion that it's just another one-shot LLM-generated project. Nevertheless, I will always consider the latter a reasonable possibility.
I hope the nuance is clear.
by kmaitreys
2/18/2026 at 3:16:24 PM
> I guess this is the difference: I expect the commit to represent a somewhat working version,
On a solo project I do the opposite: I make sure there is an error where I stopped last. Typically I put in a call to the function that is needed next, so I get a linker error.
Six months later, when I go back to the project, that linker error tells me all I need to know about what comes next.
by lelanthran
2/18/2026 at 4:10:42 PM
How does that work out if you want to use `git bisect` to find regressions or similar things?
by embedding-shape
2/18/2026 at 6:10:55 PM
I don't do bisects on each individual branch. I'll bisect on master instead and find the offending merge. From that point, bisect is not needed.
by lelanthran
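A sketch of that "bisect only the merges on master" approach, assuming a reasonably recent Git (bisect gained --first-parent in 2.29); the good ref is a placeholder:

    git bisect start --first-parent   # follow only first-parent history, i.e. the merge commits on master
    git bisect bad master
    git bisect good v0.9              # last release known to be good (placeholder)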
2/18/2026 at 10:12:58 AM
Or the first thousand commits were squashed. The first public commit tells nothing about how this was developed. If I were to publish something that I had worked on alone for a long time, I would definitely squash all the early commits into a single one, just to be sure I don't accidentally leak something that I don't want to leak.
by pheis
2/18/2026 at 10:40:06 AM
> leak what
For example, when the commits were made. I would not like to share publicly with the whole world when I have worked on some project of mine. The commits themselves could also contain something that you don't want to share, or the commit messages could.
At least I approach stuff differently depending on whether I am sharing it with the whole world, with myself, or with people I trust.
Scrubbing git history when going from private to public should be seen as totally normal.
by pheis
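Concretely, every commit carries both an author and a committer timestamp, which is one of the things a fresh single init commit hides:

    git log --pretty=fuller -1   # shows AuthorDate and CommitDate for the most recent commit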
2/18/2026 at 10:53:09 AM
Hmm, I can see that. Some people are like that. I sometimes swear in my commit messages.
For me it's quite funny to sometimes read my older commit messages. To each their own.
But my opinion on this is the same as with other things that have become tell-tale signs of AI-generated content: if something you used to do starts getting questioned as AI-generated, and you find that labelling offensive, it's better to change the approach.
by kmaitreys
2/18/2026 at 10:27:52 AM
Leak what?
by kmaitreys
2/18/2026 at 2:04:08 PM
If you have, for example, a personal API key or credentials that you are using for testing, you throw them in a config file or hard-code them at some point. Then you remove them. If you don't clean your git history, those secrets are now exposed.
by ecshafer
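A quick way to check whether a credential ever appeared anywhere in history (the search string is a placeholder); rewriting history only hides it from future clones, so rotating the key is still required:

    git log -p -S 'MY_API_KEY'   # pickaxe search: lists every commit that added or removed that string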
2/18/2026 at 11:02:43 AM
Timestamps
by snovv_crash
2/18/2026 at 9:13:59 AM
A lot of people don't use git and just chuck stuff in there willy-nilly when they want to share it.
People are too keen to say something was produced with an LLM if they feel it's something they cannot readily produce themselves.
by saidnooneever
2/18/2026 at 9:49:18 AM
I would be very concerned about someone working on a 16k loc codebase without a VCS.
by kmaitreys
2/18/2026 at 6:55:15 AM
Can you prove that this is what happened?
by kmaitreys
2/18/2026 at 5:12:02 AM
This type of project is the perfect project for an LLM; LLVM and CUDA work as harnesses, easy to compare against.
by luckydata
2/18/2026 at 6:59:50 AM
What do you mean by harnesses?
by kmaitreys
2/18/2026 at 10:04:37 AM
agentic ai harness for harness (ai)
by formerly_proven
2/18/2026 at 12:42:15 AM
Says the clawdbot
by natvert
2/18/2026 at 6:59:12 AM
It's quite amusing that the one time I did not make an anti-AI comment, I got called a clanker myself.
I'm glad the mood here is shifting towards the right side.
by kmaitreys
2/17/2026 at 11:28:59 PM
This project most definitely has significant AI contributions.
Don't care though. AI can work wonders in skilled hands, and I'm looking forward to using this project.
by wild_egg
2/18/2026 at 12:11:15 AM
Hello! I didn't realise my project was posted here, but I can actually answer this.
I do use LLMs (specifically Ollama), particularly for test summarisation and writing up some boilerplate, and I've also used Claude/ChatGPT on the web when my free tier allows. It's good for when I hit problems such as AMD SOP prefixes being different than I expected.
by ZaneHam
2/18/2026 at 10:07:10 AM
I looked through several of the source files and if you had said it's 100% handrolled I would have believed you too.
It looks like a project made by a human and I mean that in a good way.
by blensor
2/18/2026 at 4:48:26 AM
Since nobody else seems to have said it: this is exciting! Keep up the fun work!
by 8note
2/17/2026 at 11:47:17 PM
> Oceania sense of humor is like no other haha
Reminded me of the beached whale animated shorts [1].
[1]: https://www.youtube.com/watch?v=ezJG0QrkCTA&list=PLeKsajfbDp...
by magicalhippo
2/18/2026 at 1:30:09 AM
LLVM, nothing to do with LLMs
by ekianjo
2/17/2026 at 11:55:29 PM
> >LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.
> The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.
"Has tech literacy deserted the tech insider websites of silicon valley? I will not beleove it is so. ARE THERE NO TRUE ENGINEERS AMONG YOU?!"
by samrus
2/18/2026 at 4:55:56 PM
I loled hard in public transport
by typh00n
2/18/2026 at 10:05:05 AM
> /* 80 keywords walk into a sorted bar */
https://github.com/Zaneham/BarraCUDA/blob/master/src/lexer.c...
by deeringc
2/18/2026 at 9:01:19 AM
> LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.
This is not an advantage since you will now not benefit from any improvements in LLVM.
by amelius
2/18/2026 at 11:43:54 AM
Nor will they be restricted by the LLVM design. That project is huge and generic, trying to be everything and target everything (and it takes ages to rebuild if you need some changes). Sometimes it's better to go simple and targeted; time will tell if that's the right choice.
ZLUDA used LLVM and ended up bundling a patched version to achieve what they wanted: https://vosen.github.io/ZLUDA/blog/zluda-update-q4-2025/
> Although we strive to emit the best possible LLVM bitcode, the ZLUDA compiler simply is not an optimizing, SSA-based compiler. There are certain optimizations relevant to machine learning workloads that are beyond our reach without custom LLVM optimization passes.
by viraptor
2/18/2026 at 12:17:55 PM
That's a much better argument than "we did it because we are adults".
(Except that it applies to ZLUDA, not necessarily this project.)
by amelius
2/18/2026 at 1:41:37 PM
I think Zig is trying to get rid of it as well; much harder to debug, IIRC.
by pezgrande
2/18/2026 at 10:10:22 AM
> in a world of AI slope
The scientific term for this is “gradient descent”.
by renewiltord
2/18/2026 at 4:36:49 PM
The Descent of (artificial) Man.
by HPsquared
2/18/2026 at 1:03:44 AM
I'm still blown away that AMD hasn't made it their top priority. I've said this for years. If I were AMD I would spend billions upon billions if necessary to make a CUDA compatibility layer for AMD. It would certainly still pay off, and it almost certainly wouldn't cost that much.
by colordrops
2/18/2026 at 3:12:17 AM
They've been doing it all along, and it's called HIP. Nowadays it works pretty well on a few supported GPUs (CDNA 3 and RDNA 4).
by woctordho
2/18/2026 at 5:29:42 AM
Please. If HIP worked so well they would be eating into Nvidia's market share.
First, it's a porting kit, not a compatibility layer, so you can't run arbitrary CUDA apps on AMD GPUs. Second, it only runs on some of their GPUs.
This absolutely does not solve the problem.
by colordrops
2/18/2026 at 7:52:55 AM
HIP is just one of many examples of how utterly incompetent AMD is at software development.
GPU drivers, Adrenalin, Windows chipset drivers...
How many generations into the Ryzen platform are they, and they still can't get USB to work properly all the time?
by KennyBlanken
2/18/2026 at 10:06:15 AM
AMD doesn't do USB; they source the controller IP from ASMedia, who also developed most of their chipsets.
by formerly_proven
2/18/2026 at 11:37:14 PM
Yeah, but AMD could absolutely work on a joint venture to get something like that moving along solidly.
by dripdry45
2/18/2026 at 5:28:56 AM
It's astounding to me how many people pop off about "AMD SHOULD SUPPORT CUDA" without knowing that HIP (and hipify) has been around for literally a decade now.
by mathisfun123
2/18/2026 at 5:30:31 AM
Please explain to me why all the major players are buying Nvidia then. Is HIP a drop-in replacement? No.
You have to port every piece of software you want to use. It's ridiculous to call this a solution.
by colordrops
2/18/2026 at 6:51:44 AM
Major players in China don't play like that. MooreThreads, Lisuan, and many other smaller companies all have their own porting kits, which are basically copied from HIP. They just port every piece of software and it just works.
If you want to fight the Nvidia monopoly, then don't just rant; buy a GPU other than Nvidia and build on it. Check my GitHub and you'll see what I'm doing.
by woctordho
2/18/2026 at 6:59:37 AM
> Is HIP a drop-in replacement? No.
You don't understand what HIP is: HIP is AMD's runtime API. It resembles the CUDA runtime APIs, but it's not the same thing and it doesn't need to be; the hard part of porting CUDA isn't the runtime APIs. hipify is the thing that translates both the runtime calls and the kernels. Now, is hipify a drop-in replacement? No, of course not, but that's because the two vendors have different architectures. So it's absolutely laughable to imagine that some random could come anywhere near a "drop-in replacement" when AMD can't (again: because of fundamental architecture differences).
by mathisfun123
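For anyone who hasn't seen the HIP flow: it's a one-time source translation plus a rebuild, not a runtime compatibility layer. A rough sketch, with placeholder file names, assuming a working ROCm install:

    hipify-perl saxpy.cu > saxpy_hip.cpp   # rewrites cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, etc.
    hipcc saxpy_hip.cpp -o saxpy           # compile with AMD's toolchain for the installed GPU
    ./saxpy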
2/18/2026 at 7:05:49 AM
Who said "some random"? Read the whole thread. I was suggesting AMD invest BILLIONS to make this happen. You're arguing with a straw man.
by colordrops
2/18/2026 at 7:30:06 AM
I think you misunderstand what's fundamentally possible with AMD's architecture. They can't wave a magic wand for a CUDA compatibility layer any better than Apple or Qualcomm can; it's not low-hanging fruit like DirectX or Win32 translation. Investing billions into translating CUDA on raster GPUs is a dead end.
AMD's best option is a greenfield GPU architecture that puts CUDA in the crosshairs, which is what they already did for datacenter customers with AMD Instinct.
by bigyabai
2/18/2026 at 7:51:24 AM
This is a big part of why AMD still doesn't have a proper foothold in the space: AMD Instinct is quite different from what regular folks can easily put in their workstation. In Nvidia-land I can put anything from a mid-range gaming card, through a 5090, to an RTX 6000 Pro in my machine and be confident that my CUDA code will scale somewhat acceptably to a datacenter GPU.
by KeplerBoy
2/18/2026 at 8:07:21 AM
This is where I feel like Khronos could contribute, making a Compute Capability-equivalent hardware standard for vendors to implement. CUDA's versioning of hardware capabilities plays a huge role in clarifying the support matrix.
...but that requires buy-in from the rest of the industry, and it's doubtful FAANG is willing to thread that needle together. Nvidia's hedged bet against industry-wide cooperation is making Jensen the 21st-century Mansa Musa.
by bigyabai
2/18/2026 at 8:32:07 AM
I do not misunderstand.
Let's say you put 50-100 seasoned devs on the problem; within 2-3 years you would probably get ZLUDA to the point where most mainstream CUDA applications (ML training/inference, scientific computing, rendering) run correctly on AMD hardware at 70-80% of the performance you'd get from a native ROCm port. Even if it's not optimal due to hardware differences, it would be genuinely transformative and commercially valuable.
This would give them runway for their parallel effort to build native greenfield libraries and toolkits and get adoption, and perhaps make some tweaks to future hardware iterations that make compatibility easier.
by colordrops
2/18/2026 at 2:05:53 PM
Before the "ZLUDA" project completion, they would be facing a lawsuit for IP infringement, since CUDA is owned by NVIDIA.by zvr
2/18/2026 at 4:45:19 PM
They would win; compatibility layers are not illegal.
by colordrops
2/18/2026 at 6:21:53 PM
Win against who? AMD is the one that asked them to take it down: https://www.tomshardware.com/pc-components/gpus/amd-asks-dev...
And while compatibility layers aren't illegal, they ordinarily have to be a cleanroom design. If AMD knew that the ZLUDA dev was decompiling CUDA drivers to reverse-engineer a translation layer, then legally they would be on very thin ice.
by bigyabai
2/18/2026 at 6:14:27 PM
ROCm is supported by a minority of AMD GPUs, and is accelerated inconsistently across GPU models. 70-80% of ROCm's performance is an unclear target, to the point that a native ROCm port would be a more transparent choice for most projects. And even then, you'll still be outperformed by CUDA the moment tensor or convolution ops are called.
Those billions are much better off being spent on new hardware designs, and on ROCm integrations with preexisting projects that make sense. Translating CUDA to AMD hardware would only advertise why Nvidia is worth so much.
> it would be genuinely transformative and commercially valuable.
Bullshit. If I had a dime for every time someone told me "my favorite raster GPU will annihilate CUDA eventually!" then I could fund the next Nvidia competitor out of pocket. Apple didn't do it, Intel didn't do it, and AMD has tried three separate times and failed. This time isn't any different, there's no genuine transformation or commercial value to unlock with outdated raster-focused designs.
by bigyabai
2/18/2026 at 3:02:52 PM
No, I'm arguing with someone who clearly doesn't understand GPUs.
> invest BILLIONS to make this happen
As I have already said twice, they already have: it's called hipify, and it works as well as you'd imagine it could (i.e. poorly, because this is a dumb idea).
by mathisfun123
2/18/2026 at 7:49:50 AM
Wow, you're so very smart! You should tell all the LLM and Stable Diffusion developers who had no idea it existed! /s
HIP has been dismissed for years because it was a token effort at best: Linux-only until the last year or two, and even now it only supports a small number of their cards.
Meanwhile CUDA runs on damn near anything, and both Linux and Windows.
Also, have you used AMD drivers on Windows? They can't seem to write drivers or Windows software to save their lives. AMD Adrenalin is a slow, buggy mess.
Did I mention that compute performance on AMD cards was dogshit until the last generation or so of GPUs?
by KennyBlanken
2/18/2026 at 1:41:50 AM
AMD did hire someone to do this and IIRC he did, but they were afraid of Nvidia lawyers and he released it outside of the company?
by ddtaylor
2/18/2026 at 9:23:22 AM
Just allow me to doubt that one (1) programmer is all AMD would need to close up the software gap to NVIDIA...
by fransje26
2/18/2026 at 8:13:53 PM
Are you suggesting that CUDA is the entirety of the "software gap"? Because it's a lot more than that. That seems like a strawman argument.
Andrzej Janik.
He started at Intel working on it; they passed because there was no business there.
AMD picked it up and funded it from 2022. They stopped in 2024, but his contract allowed the release of the software in such an event.
Now it's ZLUDA.
by ddtaylor
2/18/2026 at 5:33:52 AM
Surely they could hire some good lawyers if that means they make billions upon billions? AFAIK there's nothing illegal about creating compatibility layers; otherwise WINE would have been shut down long ago.
by colordrops
2/18/2026 at 12:43:41 PM
It depends on what code they wrote. If they used LLMs to write it, it could contain proprietary Nvidia parts. Someone would then have to review that, but can't, because maybe the Nvidia code that came from the LLM isn't even public.
So the strategy of publishing independently and waiting to see if Nvidia's lawyers have anything to say about it would be a very smart move.
by M95D
2/18/2026 at 4:03:21 PM
The Oracle case was about the stubs of the API being considered copyrighted, I believe. The argument wasn't that Google used any of their code; it was that by using the same function names they were committing a thought crime.
by ddtaylor
2/18/2026 at 2:09:10 PM
The last time something similar happened (Google vs Oracle), the legal battles lasted more than a decade. It would be a very bold decision by AMD to commit to this strategy (implement "CUDA" and fight it out in the courts).
by zvr
2/18/2026 at 3:51:06 PM
That means Google got to use Java for a decade. A decade-long legal battle is great news for whoever seems to be in the wrong, as long as they can still afford lawyers. Remember, they don't claw back dividends or anything.
by gzread
2/18/2026 at 1:09:14 AM
It's a moving target; honestly, just get PyTorch working fully (loads of stuff just doesn't work on AMD hardware) and also make it work on all graphics cards from a certain generation onwards. The support matrix needed across GFX cards, architectures and software is quite astounding, but still, yes, they should at least have that working, plus equivalent custom kernels.
by andy_ppp
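For context, PyTorch does publish ROCm builds; the catch is which GPUs and which ops actually work. A rough check, with the index URL version as a placeholder (see pytorch.org for the current one):

    pip install torch --index-url https://download.pytorch.org/whl/rocm6.2
    python -c "import torch; print(torch.cuda.is_available())"   # True on a working ROCm setup; ROCm reuses the cuda device name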
2/18/2026 at 10:34:08 AM
The headline that PyTorch has full compatibility on all AMD GPUs would increase their stock by > $50 billion overnight. They should do it even if it takes 500 engineers and 2 years.
by spacebanana7
2/18/2026 at 12:07:03 PM
Does anybody really understand why this hasn't been done? I know about ongoing efforts but is it really THAT difficult?
by DonThomasitos
2/18/2026 at 6:36:46 PM
You know it's probably a combination of things but mostly that AMD do not have a capable software team… probably not the individuals but the managers likely don't have a clue.
by andy_ppp
2/19/2026 at 6:08:48 PM
Aaaaaaand torch is not a simple easy target. You don't just want support but high-performance optimized support on a pretty complex moving target... maybe better/easier than CUDA but not that much it seems.
by touisteur
2/18/2026 at 5:34:38 AM
That would be a great start.
by colordrops
2/18/2026 at 8:06:32 AM
It's so funny to watch the people who pearl-clutch over AI expose that they don't even know the difference between LLVM and an LLM, rofl
by dirasieb
2/18/2026 at 6:03:36 PM
>A will to live (optional but recommended)
Ah, I'm glad it's just optional, I was concerned for a second.
by moffkalast
2/18/2026 at 1:03:11 AM
Unrelated: just returned from a month in NZ. Amazing people.
by dboreham
2/18/2026 at 4:55:22 AM
Hope you enjoyed it!!
by ZaneHam
2/18/2026 at 4:08:11 AM
"If this doesn't work, your gcc is broken, not the Makefile." ... bruh.. the confidence.by freakynit
2/18/2026 at 4:20:13 AM
> The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.
Huh? This is obvious AI slop from the readme. Look at that "ASCII art" diagram with misaligned "|" at the end of the lines. That's a very clear AI slop tell; anyone editing by hand would instinctively delete the extra spaces to align those.
by lambda
2/18/2026 at 4:46:18 AM
Hello! Didn't realise this was posted here (again lol), but where I originally posted, on the r/Compilers subreddit, I do mention that I used ChatGPT to generate some ASCII art for me. I was tired, it was 12am, and I then had to spend another few minutes deleting all the emojis it threw in there.
I've also been open about how I use AI with the people who know me and who I work with in the OSS space. I have a lil Ollama model that helps me from time to time, especially with test result summaries (if you've ever seen what happens when a mainframe emulator explodes on a NIST test, you'd want AI too lol; 10k lines of individual errors ain't fun to walk through), and you can even see some ChatGPT-generated CUDA in notgpt.cu, which I mixed and mashed a little bit. All in all, I'm of the opinion that this is perfectly acceptable use of AI.
by ZaneHam
2/18/2026 at 4:27:37 AM
>No LLVM. No HIP translation layer. No "convert your CUDA to something else first." Just ...
Another obvious tell.
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...
by RockRobotRock
2/18/2026 at 4:55:02 AM
Oh gosh, em dashes are already ruined for me and now I can't use that too? I've already had to drop boldface in some of my writing because it's become prolific too.
This is also just what I intentionally avoided when making this, by the way. I don't really know how else to phrase it, because LLVM and HIP are quite prolific in the compiler/GPU world, it seems.
by ZaneHam
2/18/2026 at 8:33:29 AM
My mistake, sorry. HN has me feeling extra paranoid lately.
by RockRobotRock
2/18/2026 at 5:55:19 AM
For what it's worth - for people who're in this space - this project is awesome, and I hope you keep going with it! The compiler space for GPUs really needs truly open-source efforts like this.
Code/doc generators are just another tool. A carpenter uses power tools to cut or drill things quickly, instead of screwing everything manually. That doesn't mean they're doing a sloppy job, because they're still going to obsessively pore over every detail of the finished product. A sloppy carpenter will be sloppy even without power tools.
So yeah, I don't think it's worth spending extra effort to please random HN commenters, because the people who face the problem that you're trying to solve will find it valuable regardless. An errant bold or pipe symbol doesn't matter to people who actually need what you're building.
by cmdr2
2/18/2026 at 5:47:26 AM
TIL: I'm an LLM.by m-schuetz
2/18/2026 at 5:37:48 AM
> This is obvious AI slop from the readme
I keep hoping that low-effort comments like these will eventually get downvoted (because it's official HN policy). I get that it's fashionable to call things AI slop, but please put some effort into reading the code and making an informed judgment.
It's really demeaning to call someone's hard work "AI slop".
What you're implying is that the quality of the work is poor. Did you actually read the code? Do you think the author didn't obsessively spend time over the code? Do you have specific examples to justify calling this sloppy? Besides a misaligned "|" symbol?
And I doubt you even read anything because the author never talked about LLMs in the first place.
My beef isn't with you personally, it's with this almost auto-generated trend of comments on HN calling everyone's work "AI slop". One might say, low-effort comments like these are arguably "AI slop", because you could've generated them using GPT-2 (or even simple if-conditionals).
by cmdr2
2/18/2026 at 7:24:59 AM
While I would not call this AI slop, the probability that LLMs were used is high.
> It's really demeaning to call someone's hard work "AI slop".
I agree. I browsed through some files and found AI-like comments in the code. The readme and several other places have AI-like writing. Regarding the author not spending time on this project: this is presumably a 16k loc project that was committed in a single commit two days ago, so the author never committed any draft/dev version in that time. I find that quite hard to believe. Again, my opinion is that LLMs were used, not that the code is slop. It may be. It may not be.
Yes, this whole comment chain stems from the top comment misreading LLVM as LLMs, which is hilarious.
> My beef isn't with you personally, it's with this almost auto-generated trend of comments on HN calling everyone's work "AI slop".
Now, this isn't necessarily about this particular project, but if you post something on a public forum for reactions, then you are asking for the time of the people who will read and interact with it. So if they encounter something that the original author did not even bother to write, why should they read it? You're seeing many comments like that because there's just a lot of slop like that. And I think people should continue calling that out.
Again, this project specifically may or may not be slop. So here the reactions are a bit too strong.
by kmaitreys
2/19/2026 at 8:58:06 PM
Most of the stuff I published anywhere was moved into a new git repo with a fresh commit history, often in one commit. I don't want to worry about some overeager hiring manager or client assuming my sloppy commits and git push -f use in personal projects are representative of my paid work. It seems to be quite a common practice, though I have no numbers to back this up, of course.
I can empathise with the short fuse when it comes to suspecting AI-slopped stuff though. Even some of the few people I still look up to push a lot of takes that reek of being slopped. Guess we'll have to wait this out.
by jasonvorhe
2/18/2026 at 8:46:31 AM
Hello, I'm the project author. I don't think that, in any of this or in some of the criticisms I've received on this forum, people have realised I'm not the original poster. I posted this on r/Compilers and as of now that's pretty much it. In terms of the comments: I use IntelliSense from time to time, and I put my own humour into things because that's who I am. I'm allowed to do these things.
I'm self-taught in this field. I was posting on r/Compilers and shared this around with some friends who work within this space, for genuine critique. I've been very upfront with people on where I use LLMs. It's actually getting a bit "too much" with the overwhelming attention.
by ZaneHam
2/18/2026 at 9:30:18 AM
I understand your position. If I were in your place, where someone else posted my personal/hobby project on a public forum and it got discussed more in terms of whether it was LLM-generated or not than in terms of the more interesting technical bits, I would also be frustrated.
by kmaitreys
2/18/2026 at 8:19:35 AM
> this is presumably a 16k loc project that was committed in a single commit two days ago, so the author never committed any draft/dev version in that time
It's quite common to work locally and publish a "finished" version (even if you use source control). The reasons can vary, but I highly doubt that Google wrote Tilt Brush in 3 commits: https://github.com/googlevr/tilt-brush
All I'm saying is that assuming everyone one-shots code (and insulting them, like people do on HN) is unnecessary. I'm not referring to you, but it's quite a common pattern now, counter to HN's commenting guidelines.
> found AI-like comments in the code
Sure, but respectfully, so what? Like I posted in a [separate comment](https://news.ycombinator.com/item?id=47057690), code generators are like power tools. You don't call a carpenter sloppy because they use power tools to drill or cut things. A sloppy carpenter will be sloppy regardless, and a good carpenter will obsess over every detail even if they use power tools. A good carpenter doesn't need to prove their worth by screwing in every screw by hand, even if they can. :)
In some cases, code generators are like sticks of dynamite - they help blow open large blocks of the mountain in one shot, which can then be worked on and refined over time.
The basic assumption that annoys me is that anyone who uses AI to generate code is incompetent and that their work is of poor quality. Because that assumes that people just one-shot the entire codebase and release it. An experienced developer will mercilessly edit code (whether written by an AI or by a human intern) until it fits the overall quality and sensibility. And large projects have tons of modules in them; it's sub-optimal to one-shot them all at once.
For example, with tests: I've written enough tests in my life that I don't need to type every character from scratch each time. I list the test scenarios, hit generate, and then mercilessly edit the output. The final output is exactly what I would've written anyway, but I'm done with it faster. Power tool. The final output is still my responsibility, and I obsessively review every character that's shipped in the finished product - that is my responsibility.
Sure plenty of people one-shot stuff, just like plenty of Unity games are asset flips, and plenty of YouTube videos are just low-effort slop.
But assuming everything that used AI is crap is just really tiring. Like [another commenter said](https://news.ycombinator.com/item?id=47054951), it's about skilled hands.
> something that the original author did not even bother to write
Again, this is an assumption. If I give someone bullet points (the actual meat of the content) and they put them into sentences, did the sentences not reflect my actual content? And is the assumption that the author didn't read what was finally written and edit it until it reflected the exact intent?
In this case, the author says they used AI to generate the ASCII art in question. How does that automatically mean that the author AI-generated the entire readme, let alone the entire project? I agree, the knee-jerk reactions are way out of proportion.
Where do you draw the line? Will you not use grammar tools now? Will you not use translation tools (to translate to another language) in order to communicate with a foreign person? Will that person argue back that "you" didn't write the text, so they won't bother to read it?
Should we stop using Doxygen for generating documentation from code (because we didn't bother with building a nice website ourselves)?
Put simply, I don't understand the sudden obsession with hammering every nail and pressing every comma by hand, whereas we're clearly okay with other tools that do that.
Should we start writing assembly code by hand now? :)
by cmdr2
2/18/2026 at 9:46:53 AM
I mostly agree with what you said. The comparison with a Google project is bad, though; that's a corporate business with a lot of people who might touch that codebase. Why are you comparing that to someone's personal project?
Also, I can see you and I both agree that it's disingenuous to call all LLM-generated content slop. I think slop has just become a provocative buzzword at this point.
Regarding drawing the line: in the end, it comes down to the person using the tools. What others think, as these tools become more and more pervasive, will become irrelevant. If you as a person outsourced your thinking, then it's you who will suffer.
In all my comments, I personally never used the word slop for this project but maintained that LLMs were used significantly. I still think that. Your other comparison of LLMs with things like Doxygen or translation tools is puzzling to me. Also, the points about hammering every nail and pressing every comma are just strawmen. 5-6 years ago people used these things and nobody had any issues. There's a reason why people dislike LLM use, though. If you cannot understand why it frustrates people, then I don't know what to say.
Also people do write assembly by hand when it is required.
by kmaitreys
2/18/2026 at 2:18:52 PM
> If you as a person outsourced your thinking, then it's you who will suffer.
Using a code generator != outsourcing your thinking. I know that's the popular opinion, and yes, you can use it that way. But if you do that, I agree you'll suffer: it'll make sub-optimal design decisions and produce bloated code.
But you can use code generators and still be the one doing the thinking and making the decisions in the end. And maintain dictatorial control over the final code. It just depends on how you use it.
In many ways, it's like being a tech lead. If you outsource your thinking, you won't last very long.
It's a tool, you're the one wielding it, and it takes time, skill and experience to use it effectively.
I don't really have much more to say. I just spoke up because someone who built something cool was getting beat up unnecessarily, and I've seen this happen on HN way too many times recently. I wasn't pointing fingers at you at any point, I'm glad to have had this discussion :)
by cmdr2
2/18/2026 at 7:35:49 PM
I was responding to the person I was replying to, who confused LLVM with LLM, and who had brought up the slop term. I was surprised that they didn't think it was slop, because of the obvious tells (even with the fixed diagram formatting, there's a lot about that README and ASCII art that say that it was generated by or formatted by an LLM).One of the reasons that slop gets such an immediate knee-jerk reaction, is that it has become so prolific online. It is really hard to read any programming message boards without someone posting something half baked, entirely generated by Claude, and asking you to spend more effort critiquing it than they ever did prompting for it.
I glanced through the code, but I will admit that the slop in the README put me off digging into it too deeply. It looked like even if it was human written, it's a very early days project.
Yeah, calling something slop is low effort. It's part of a defense mechanism against slop; it helps other folks evaluate if they want to spend the time to look at it. It's an imperfect metric, especially judging if it's slop based only on the README, but it's gotten really hard to participate in good faith in programming discussions when so many people just push stuff straight out of Claude without looking at it and then expect you to do so.
by lambda
2/18/2026 at 8:52:56 AM
> anyone editing by hand would instinctively delete the extra spaces to align those
I think as a human I am significantly more likely to give up on senseless pixel-pushing like this than an LLM is.
by hobofan
2/18/2026 at 6:28:15 AM
Parent confused LLVM with LLM
by croes
2/17/2026 at 11:40:10 PM
> and prove the manual wrong on the machine language level
I'll be the party pooper here, I guess. The manual is still right, and no amount of reverse-engineering will fix the architecture AMD chose for their silicon. It's absolutely possible to implement a subset of CUDA features on a raster GPU, but we've been doing that since OpenCL and CUDA is still king.
The best thing the industry can do is converge on a GPGPU compute standard that doesn't suck. But Intel, AMD and Apple are all at odds with one another, so CUDA's hedged bet on industry hostility will keep paying dividends.
by bigyabai
2/18/2026 at 1:20:27 PM
[dead]
by kittbuilds