alt.hn

2/27/2026 at 12:01:36 PM

PostmarketOS in 2026-02: generic kernels, bans use of generative AI

https://postmarketos.org/blog/2026/02/26/pmOS-update-2026-02/

by pantalaimon

2/27/2026 at 6:34:33 PM

There are some really baffling takes here. And it doesn't really matter how good or bad coding agents are.

Coding agents greatly reduce the barrier to contributing something that at least looks okay at the surface, so reviewing contributions will quickly become even more of a bottleneck. Manual contributions used to filter away most low effort attempts, or at least they could easily be identified and rejected.

That dynamic is now different, and the maintainers risk being swarmed with low-effort contributions that will take a lot of time to review and respond to. Some AI contributions might be reviewed and revised and overall be of acceptable quality, but how can the maintainers know which without reviewing everything, good and bad alike?

I think we will see multiple attempts like this to shift things back to the old dynamic, by rejecting things that can be identified as AI-generated at a glance. But I suspect that over time this will become difficult, so my prediction is that we will soon see more open source repos stop accepting outside contributions entirely.

Even if LLMs are one day good enough to quickly produce code that is on par with humans (which I strongly doubt), why would contributors have any incentive to have someone else do that (the easy part), rather than just doing it themselves?

by ptnpzwqd

2/27/2026 at 1:11:26 PM

Very happy to see PostmarketOS take an uncompromising stance and also provide justification for it.

by jonathrg

2/27/2026 at 2:35:32 PM

Feels pretty Luddite to me.

I remember when people were crying about how much power a google search uses. This is the same thing all over again and it is as pointless now as it was back then.

https://arstechnica.com/ai/2025/08/google-says-it-dropped-th...

> Google says it dropped the energy cost of AI queries by 33x in one year. The company claims that a text query now burns the equivalent of 9 seconds of TV.

by fartfeatures

2/27/2026 at 3:01:00 PM

The audacity of calling an organisation that works on making mobile phones and other small PCs run free software "Luddite" is impressive.

That's like calling a person going for seconds a conservative (in the USA political sense).

by kruffalon

2/27/2026 at 3:30:57 PM

[flagged]

by bitwize

2/27/2026 at 4:14:39 PM

I don't think you understand what the job entails, if you think these are the best tools

by fruitworks

2/27/2026 at 2:45:21 PM

No, it's entirely justified when quality of code matters. They don't want a thousand gallons of unreviewable slop. They want a reasonable amount of code that can be sensibly reviewed.

by idiotsecant

2/27/2026 at 2:54:13 PM

There are ways to achieve that without a blanket ban; if you read their AI policy, it seems more "ethically" motivated. They certainly address this first, with many more words and 7 references.

They do go on to address code quality, but it is more of an afterthought with 0 references and fewer words, and it appears lower down the page.

The timing is also suspicious, shortly after publication of this report: https://www.reuters.com/business/media-telecom/smartphone-ma... which forecasts declining smartphone sales meaning fewer devices for this OS to run on.

by fartfeatures

2/27/2026 at 3:56:48 PM

> The timing is also suspicious, shortly after publication of this report: https://www.reuters.com/business/media-telecom/smartphone-ma... which forecasts declining smartphone sales meaning fewer devices for this OS to run on.

Why would declining sales of new smartphones have anything to do with PostMarketOS, which only supports phones more than half a decade old?

by mftrhu

2/27/2026 at 6:47:54 PM

PostmarketOS doesn't exist in a vacuum. It’s the final stage of a device's life cycle. If the initial sales of new devices decline, the pool of available hardware for enthusiasts to tinker with in five years will be significantly smaller.

by fartfeatures

2/27/2026 at 7:49:23 PM

Yes. In five years, once the PMOS devs manage to get a 2025 device into a working state, they might have fewer devices to play around with, so there could be an indirect effect on the project.

What I struggle to believe - what I don't believe - is that there is any sort of connection between the report about likely declining sales and PMOS' announcement.

by mftrhu

2/27/2026 at 3:58:04 PM

pmOS does support recent phones, provided that they can be bootloader-unlocked - and that's only a few brands these days.

by zozbot234

2/27/2026 at 4:11:03 PM

Right now, their wiki page on device support [0] lists zero actual devices as "fully supported":

> These are the most supported devices, maintained by at least 2 people and have the functions you expect from the device running its normal OS, such as calling on a phone, working audio, and a functional UI.

> Besides QEMU devices, this is currently empty. The ports we had here earlier weren't as reliable as we would have liked. We plan to add new devices here with a higher standard.

The most recent smartphone in the Community section of that page is the Fairphone 4, released half a decade ago, in 2021. Pixel devices can trivially be bootloader unlocked, but that doesn't make the work that goes into supporting them much easier: the latest device in Testing is the 6a/6 Pro, from 2022, and its device page lists all the features but the most basic (touchscreen, flash, internal storage) as "Untested".

[0] https://wiki.postmarketos.org/wiki/Devices

by mftrhu

2/27/2026 at 2:54:53 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 10:19:20 PM

This is incredibly simple. If a project doesn't want machine-generated code, don't force machine-generated code into the project. There isn't anything here that warrants multiple paragraphs of freakout.

by idiotsecant

2/27/2026 at 4:35:08 PM

Agreed. I would have chosen differently, but I appreciate the policy is unambiguous and explained succinctly with references.

Some people enjoy the outcome, others enjoy the process.

I find the criticism interesting. It's like one restaurant saying they'll use only electric stoves for the climate, then chefs all over the world calling them stupid and naive for it.

It's like ethical arguments rationalizing local behavior are automatically interpreted as a global attack that has to be rejected.

by randusername

2/27/2026 at 2:18:23 PM

I wish more projects would take the same stance.

by LaSombra

2/27/2026 at 2:55:35 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 2:33:04 PM

You say "uncompromising stance" with "justification", I say stubborn prejudice. They simply state the same weak, nonsensical complaints that apply to many other technologies that they undoubtedly don't have issues with and are happy with the use of.

by GaryBluto

2/27/2026 at 1:16:08 PM

I do not understand why Lineage insists on waiting for eBPF backports when PostmarketOS has a far newer kernel running on the same hardware.

by chasil

2/27/2026 at 1:22:24 PM

Core Android functionality relies on eBPF in a way that PostmarketOS does not. PostmarketOS is much more of a linux distro than Android is. They are not very comparable.

by 9cb14c1ec0

2/27/2026 at 1:59:41 PM

AOSP patched kernels still include some features that are not in the mainline version. The LineageOS folks are working on support for mainline kernels, but AIUI it's not there yet.

by zozbot234

2/27/2026 at 4:06:57 PM

It's not the sky that's falling; it's the value of an SWE's labor.

Fun while it lasted, huh?

by flammafex

2/27/2026 at 1:05:49 PM

[flagged]

by mono442

2/27/2026 at 1:28:42 PM

Yes, the famously useless PostmarketOS.

Why don't you share the list of very useful things you created instead, mono442?

by surgical_fire

2/27/2026 at 1:43:08 PM

Never ask a woman her age or a vibe coder to show you a useful program they've written.

by nananana9

2/27/2026 at 3:02:40 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 1:49:20 PM

I don't work on open source stuff but I work at a financial institution and genai has been a huge productivity boost. I can easily write 2x - 5x more code than before genai.

by mono442

2/27/2026 at 2:17:56 PM

Do you bring home 2x-5x more money every month then? Does your company make 2x-5x more profit?

The vibecoder paradox: everyone is 10x as productive, yet no one can show even a 1.2x increase in anything (besides bot-generated comments, traffic, and other background noise).

by lm28469

2/27/2026 at 4:22:32 PM

2x productivity will never result in 2x profit unless you somehow monopolize the productivity gain. Better tools can even result in negative profit gains. It's pretty much econ 101.

by raincole

2/27/2026 at 3:04:59 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 1:52:16 PM

And as we all know, more lines of code always produces better results. That's why we call it "technical wealth".

by jsheard

2/27/2026 at 1:55:35 PM

So do you review all that code as well?

by qsera

2/27/2026 at 2:21:28 PM

I use other models to do the code review.

by mono442

2/27/2026 at 2:25:43 PM

At least, you are honest.

by qsera

2/27/2026 at 2:04:23 PM

Is the software you're working on useful? Care to share the link so we can take a look?

by hakube

2/27/2026 at 3:06:46 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 1:24:48 PM

No one is stopping you from vibe-coding a POSIX-compatible mobile OS.

by ForHackernews

2/27/2026 at 1:47:32 PM

Not parent commenter but this is bound to happen.

And I highly doubt iOS and Android are free from LLM assisted code at this point.

by hu3

2/27/2026 at 2:14:05 PM

Yes, and? Let's suppose your statement is 100% true; I genuinely don't see the point of these kinds of comments.

Why is it that every time some person or group of people enacts an anti-LLM policy in their project, other people feel the personal need to stress how useful LLMs are and how that project is bound to fail if they don't use them?

PostmarketOS clearly exists and works; EVEN if LLMs were absolutely perfect for speeding up development tenfold, is there any absolute moral necessity to use them?

Also isn't this just moving the goalpost that LLM fanatics love to point out?

by imadr

2/27/2026 at 3:34:52 PM

> PostmarketOS clearly exists and works; EVEN if LLMs were absolutely perfect for speeding up development tenfold, is there any absolute moral necessity to use them?

There's no moral necessity, but if you want to survive as a project moving forward, you'll have less and less velocity compared to projects using LLMs, so you'll eventually shrink and die as a project, because fewer people will contribute to a project that gets fewer features and bug fixes.

I don't understand why these projects take such a strong "moral" stance of "no AI ever", instead of deploying LLMs to automatically review PRs based on their own guidelines, so that if a contribution is valuable, it gets through no matter whether it was written by an LLM or not.

by plqbfbv

2/27/2026 at 4:27:24 PM

compared to what other projects?

by fruitworks

2/28/2026 at 12:10:05 AM

Mine was more a generic argument against the "ban all AI" stance that I've recently seen pop up more often.

At the moment, there isn't another project (that I know of) like PostmarketOS filling the same niche. If a new project were to appear, and were using LLMs, it'd likely progress faster.

Regardless, I've had success with LLMs and while I understand the maintainers' concern, if used properly they're a powerful tool to quickly iterate on huge amounts of information. They could be used to automate reviews of the spam of low-quality PRs, for instance (if they were to materialize).

But having read their policy page, their stance is more on ethical grounds, not moral: https://docs.postmarketos.org/policies-and-processes/develop... . So while I still stand by my argument in the general case, here it's not applicable, and while I see their ethical concerns, one project boycotting a tool doesn't really fix the systemic issues they mention.

by plqbfbv

2/28/2026 at 4:38:38 AM

I'm not sure it would progress faster for this project

by fruitworks

2/27/2026 at 2:29:10 PM

I'm pointing out that their expectation of an AI-free OS is pointless, because AI-assisted code is most probably already present in the devices they use.

And I dare say that even for PostmarketOS:

1) There's no way they can prevent AI-assisted code from reaching their codebase.

2) They will most probably change this policy in the future, lest other forks/projects outpace them in terms of utility and they get reduced to a carriage in a car world.

by hu3

2/27/2026 at 2:34:24 PM

The stance is not to "prevent AI-assisted code from reaching their codebase." It's not like AI-assisted code is literally poisonous and their codebase dies if touched.

The stance is to deter random vibe-coders trying to resume-max by submitting PRs to known open source projects. There are so many of them rn. Hopefully by making it clear, (some of) them will realize doing that is just wasting their tokens.

by raincole

2/27/2026 at 2:43:54 PM

I understand there's an avalanche of vibe slop PRs.

But to be clear, their AI stance is as clear-cut as can be. Their stance IS INDEED to "prevent AI-assisted code from reaching their codebase".

> The following is not allowed in postmarketOS:

> Submitting contributions fully or in part created by generative AI tools to postmarketOS.

source: https://docs.postmarketos.org/policies-and-processes/develop...

by hu3

2/27/2026 at 3:01:43 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 3:07:42 PM

No need to call names. And I don't understand your point. Are you calling their rules impossible to enforce?

As per their rules, their stance is not only against entire PRs but against any AI-assisted code.

by hu3

2/27/2026 at 2:05:18 PM

Could AI write a highly specific camera driver or GPU driver, without any documentation at all?

by mpol

2/27/2026 at 2:30:40 PM

Probably not, and why would it need to work under such a constraint?

Not even humans can do that. Documentation needs to at least be reverse-engineered and understood before implementation.

by hu3

2/27/2026 at 4:25:12 PM

Because these are the major types of problems that pmOS solves?

by fruitworks

2/27/2026 at 2:21:30 PM

I'm sure it could generate a decent device tree

by pantalaimon

2/27/2026 at 3:04:11 PM

Can you?

by fartfeatures

2/27/2026 at 1:17:33 PM

Whoever needs more slop faster can easily find it elsewhere, if PostmarketOS doesn't want to follow the trend, that's well and good.

by MonkeyClub

2/27/2026 at 1:21:42 PM

Weird stance to take.

I can understand "untested AI-genned code is bad, and thus anything that reeks of AI is going to be scrutinized" - especially given that PostmarketOS deals a lot with kernel drivers for hardware. Notoriously low error margins. But they just had to go out of their way and make it ideological rather than pragmatic.

by ACCount37

2/27/2026 at 1:25:41 PM

It's fine for a project to have moral/ideological leanings, it's only weird if you insist that project teams should be entirely amoral.

by jonathrg

2/27/2026 at 1:31:12 PM

The main reason open source projects exist at all is because of people who started them with quite often fringe ideological leanings. Just look at the GNU project.

by trollbridge

2/27/2026 at 1:40:24 PM

And fringe economical leanings, too. Just look at the GNU project: the firmware in printers is still of subpar quality, and GNU didn't really help to change that... and why on Earth would it, anyway?

by Joker_vD

2/27/2026 at 3:09:19 PM

[flagged]

by UqWBcuFx6NV4r

2/28/2026 at 3:07:36 AM

This doesn’t make any sense?

by trollbridge

2/27/2026 at 1:38:14 PM

> It's fine for a project to have moral/ideological leanings

As long as they align with the correct (i.e. yours) values, of course. When they adopt the wrong values, it's not fine.

by Joker_vD

2/27/2026 at 1:40:36 PM

But it is fine. If I disagree with a project's values I'm not going to contribute to it, and they wouldn't want me there either.

by jonathrg

2/27/2026 at 2:06:13 PM

There's still a line between values I disagree with and values that directly attack me as a person. The former is how many of us feel about some of our dependencies and most proprietary software we use, so it's clearly fine to some degree.

by debugnik

2/27/2026 at 1:25:21 PM

As a kernel developer, I use LLMs for some tasks, but I can say they are not there yet for writing real kernel-space code.

by yehoshuapw

2/27/2026 at 2:25:52 PM

Absolutely.

But at the same time I cannot imagine reverting to coding with no help from LLMs. Asking Stack Overflow and waiting for hours to get my question closed down instead of asking an LLM? No way.

by egorfine

2/27/2026 at 3:11:49 PM

> But at the same time I cannot imagine reverting to coding with no help from LLMs.

And doesn't that bother you a little?

If you listen to podcasts, check out this podcast episode: https://www.pushkin.fm/podcasts/cautionary-tales/flying-too-...

It is about Air France 447, but also draws parallels to AI and self-driving cars

by cuu508

2/27/2026 at 4:16:40 PM

Look, every medicine is a poison as well. Every single byte of code I commit I fully understand. I am strongly against slop.

However, I'm not going back to asking Stack Overflow and pretending that I have nowhere else to find answers.

by egorfine

2/27/2026 at 5:54:40 PM

> However, I'm not going back to asking Stack Overflow and pretending that I have nowhere else to find answers.

That's not your only option.

What you're meant to do is understand the tools you're using well enough to not need to ask for help from anyone or anything else. Stack Overflow is useful, but it's a learning tool. If all you were doing before AI was copying and modifying other people's code, it's no wonder that you have taken to AI, because it's just a slightly more convenient form of that.

by ZenoArrow

2/27/2026 at 5:58:50 PM

> If all you were doing before AI was copying and modifying other people's code

Aren't we all in a sense?

by egorfine

2/27/2026 at 9:46:00 PM

> Aren't we all in a sense?

Once you get good enough at a programming language, you can code a lot from memory and logic. As in, you can think of a design and how to build it without having to look up someone else's code. It's still useful to keep notes to refer back to, and look up information online to jog your memory, but it's not always a question of finding other people's code to modify.

by ZenoArrow

2/27/2026 at 6:53:35 PM

StackOverflow was also full of knowledgeable but objectionable people. I'm very glad not to have that energy in my life any more. Those that hate LLMs are welcome to continue using StackOverflow but I shan't be.

by fartfeatures

2/27/2026 at 1:35:48 PM

Exactly, you can use it for some tasks. But why "explicitly forbid generative AI"?

If you use AI to make repetitive tasks less repetitive, and clean up any LLM-ness afterwards, would they notice or care?

I find blanket bans inhibitive; they reek of fear of change rather than representing a real substantive stance.

by crimsonnoodle58

2/27/2026 at 2:02:16 PM

> and clean up any LLM-ness afterwards

That never happens. It's actually easier to write the code from scratch and avoid LLMness altogether.

by zozbot234

2/27/2026 at 3:10:59 PM

[flagged]

by UqWBcuFx6NV4r

2/28/2026 at 3:09:23 AM

There are lots of tools that aren't worthwhile to learn to use, and in particular, learning to work with the poor-quality output of subpar tools is not something I'm interested in.

by trollbridge

2/27/2026 at 3:17:20 PM

The skill of cleaning up LLM-written slop to bring it to the human-like quality that any sane FLOSS maintainer would demand to begin with? It's just not worth it.

by zozbot234

2/27/2026 at 1:38:44 PM

They explain why in their AI policy. It's an ethical stance. Of course they wouldn't notice if there aren't clear signs of LLM-ness, but that's not the main reason why they forbid it.

https://docs.postmarketos.org/policies-and-processes/develop...

by jonathrg

2/27/2026 at 1:44:31 PM

Thanks for the clarification. Not that I agree with their stance (the exact same could have been said at the start of the industrial revolution) but I respect it nonetheless.

by crimsonnoodle58

2/27/2026 at 2:46:27 PM

> the exact same could have been said at the start of the industrial revolution

The pollution caused by said revolution is currently putting humanity at a serious risk of world war and maybe even extinction so... maybe they had a point? I'm not taking a strong stance either way here, but worth thinking about the downsides from the industrial revolution, too.

by coldpie

2/27/2026 at 1:40:53 PM

> But why "explicitly forbid generative AI".

The AI policy linked from the OP explains why. It's half not wanting to deal with slop, and half ethical concerns which still apply when it's used judiciously.

by jsheard

2/27/2026 at 1:30:09 PM

Same.

Having an LLM helps, especially when you're facing a new subsystem you're not familiar with, and trying to understand how things are done there. They still can't do the heavy duty driver work by themselves - but are good enough for basic guidance and boilerplate.

by ACCount37

2/27/2026 at 1:56:43 PM

My reading of their AI statement says your kernel contributions are no longer welcome in PostmarketOS, and also, since you're encouraging others in their space to use such tools, you're in violation of their code of conduct.

This applies to the person you're replying to too.

I think their policy is poorly thought out, and that little good will come of it. At best, it'll cause drama in the project, and discourage useful contributions. It's a shame, since we desperately need an alternative to the phone duopoly.

by hedora

2/27/2026 at 1:31:34 PM

Guidance and boilerplate... in other words, documentation.

by trollbridge

2/27/2026 at 3:13:46 PM

No, dude.

Do you genuinely think that people don’t know what documentation is? That’s insulting.

An LLM can help surface relevant information, taking your intent / goals into account, summarising vast quantities of code and indeed other documentation. That’s, like, their single most effective use.

by UqWBcuFx6NV4r

2/28/2026 at 3:06:34 AM

I am aware - I use them all day.

But I think it's a poor substitute for effective, well-written documentation, tutorials, and examples. A lot of open source code (and commercial, too) is moving in the direction of nearly zero documentation, where they just provide an LLM prompt to pipe into claude -p.

This is not a great change. For starters, LLMs often get confused when a project is internally reorganised and mix different versions together. They will also often recommend using a module's internal structures.

(I spent half a day fighting datamodel-code-generator with this exact problem, so it's fresh in my mind.)

EDIT: the account I’m responding to appears to be some kind of AI and appears to be prompted with “make snarky, argumentative comments”. If you are a real human, please accept my apology for this accusation.

by trollbridge

2/27/2026 at 2:57:43 PM

The licensing of code generated by LLMs is not a settled matter in all jurisdictions; this is a very valid pragmatic concern that they address.

by xantronix

2/27/2026 at 2:22:38 PM

> Submitting contributions fully or in part created by generative AI tools to postmarketOS.

So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all.

How is it possible to distinguish between the two in the vast majority of cases where the hand-written code and the autocompleted code are byte-by-byte identical?

Are we supposed to record video of us coding to show that we did type letters one by one?

> 2. Recommending generative AI tools to other community members for solving problems in the postmarketOS space.

Is searching for pieces of code considered parts of solving problems?

Then how do we distinguish between finding a required function by grepping code or by asking an LLM to search for it?

Can we ask LLM questions about postmarketOS? Like, "what is the proper way to query kernel for X given Z"?

If a community member asks this question and I already know the answer via an LLM, am I now banned from giving the correct answer?

--

Don't get me wrong. I am sick and tired of the vomit-inducing AI bullshit (as opposed to the tremendous help that LLMs provide to experienced devs).

I fail to see how a policy like this is even enforceable, let alone productive and sane.

On the other hand, I absolutely see where this policy is coming from. It seems that projects are having a hard time navigating the issue and are looking for ways to eliminate the insurmountable amount of incoming slop.

I think we still haven't found a right way to do it.

by egorfine

2/27/2026 at 2:32:55 PM

> So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLM algorithms - no, that's banned? Ok, surely everybody agrees with that, it's policy after all.

Because autocomplete still requires heavy user input and a SWE at the top of the decision making tree. You could argue that using Claude or Codex enables you to do the same thing, but there's no guarantee someone isn't vibecoding and then not testing adequately to ensure, firstly, that everything can be debugged, and secondly, that it fits in with the broader codebase before they try to merge or PR.

Plenty of people use Claude like an autocomplete or to bounce ideas off of, which I think is a great use case. But besides that, using a tool like that in more extreme ways is becoming increasingly normalized and probably not something you want in your codebase if you care about code quality and avoiding pointless bugs.

Every time I see a post on HN about some miracle work Claude did it's always been very underwhelming. Wow, it coded a kernel driver for out of date hardware! That doesn't do anything except turn a display on... great. Claude could probably help you write a driver in less time, but it'll only really work well, again, if you're at the top of the hierarchy of decision making and are manually reviewing code. No guarantees of that in the FOSS world because we don't have keyloggers installed on everybody's machine.

by kunai

2/27/2026 at 2:36:36 PM

Fully agree with you on all points.

But again: how do we distinguish between manual code input and sophisticated autocomplete?

by egorfine

2/27/2026 at 2:47:22 PM

The project is simply saying what they want. If you choose to ignore that for some weird reason congratulations for being a jerk, I guess.

by idiotsecant

2/27/2026 at 2:53:09 PM

Can you confirm that continuing to use autocomplete in a code base against the policy of the project does make the person a jerk?

by egorfine

2/27/2026 at 3:59:58 PM

Yes, actually. Knowingly violating the policies of a project while pretending you aren't, so you can continue participating in the fully voluntary project, does make you a jerk.

If you don't like the policies they set, just leave.

I'm willing to bet that every single person on here complaining has zero contributions to PostmarketOS.

by Blackthorn

2/27/2026 at 2:59:52 PM

[flagged]

by UqWBcuFx6NV4r

2/27/2026 at 2:48:27 PM

If it's crap, then it's AI. If it's okay, then we pretend that it's just sophisticated autocomplete.

by aboardRat4

2/27/2026 at 2:51:54 PM

It's pretty much obvious but the policy specifically argues against it and stands on moral grounds.

by egorfine

2/27/2026 at 2:57:03 PM

This sounds impractical, and like they will probably not keep the ban.

AI use should be able to accelerate the development of ports on currently unsupported or undersupported devices which would directly support the project

I guess I wouldn't worry about the policy, they will probably naturally switch it if / when AI becomes more useful in practice

by erelong

2/27/2026 at 2:33:39 PM

> bans use of generative AI

that ship has sailed with Codex 5.3 in 90% of SWE jobs, unfortunately. I expect the next 9% won't survive the following 12 months, and the last 1% is done within 5 years.

it isn't even about principles - projects not using gen AI will become basically irrelevant; the pace of gen-AI-enabled competitors will be too great.

by baq

2/27/2026 at 2:47:19 PM

Alright, let's see Codex 5.3 create a competitor to postmarketOS (without just copying the homework of other devs). If you believe in the technology so much, put it to the test, see what it can really do.

by ZenoArrow

2/27/2026 at 2:55:06 PM

Reminds me of how one year ago people were saying "sure, GPT-4o can write a function, but try to make it write a whole application"

by dist-epoch

2/27/2026 at 3:01:49 PM

Sure, AI has developed quickly, but let's see it take on a real engineering challenge, rather than regurgitating boilerplate code.

Writing device drivers from incomplete specs is much harder than "writing a whole application" where the specs are clearly defined and there's a lot more example code to reference. If you believe in AI so much, and believe that it's unreasonable for postmarketOS to not want to use it, put it to the test, prove the doubters wrong, what have you got to lose?

by ZenoArrow

2/27/2026 at 3:08:22 PM

I don't have anything to win either.

What does a developer who writes a driver from incomplete specs do? Writes some values in some registers, sees how the device behaves, updates the spec. Rinse and repeat. Sounds exactly like the kind of stuff coding agents thrive at - a verifiable loop. And they can do it 24x7 until done.
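
The probe-and-observe loop being described can be sketched roughly like this. Everything here is hypothetical and for illustration only: the register layout, the fake device, and the inferred behavior are all made up, and a real bring-up session would poke memory-mapped hardware registers (via /dev/mem or a kernel module), not a Python dict.

```python
def probe_register(device, reg, candidates):
    """Write each candidate value to `reg` and record the device state after."""
    observations = {}
    for value in candidates:
        device[reg] = value                 # write a guess into the register
        observations[value] = dict(device)  # observe the resulting state
    return observations

class FakeDevice(dict):
    """Simulated hardware: writing register 0 mirrors its low bit into register 1."""
    def __setitem__(self, reg, value):
        super().__setitem__(reg, value)
        if reg == 0:
            super().__setitem__(1, value & 1)

dev = FakeDevice({0: 0, 1: 0})
obs = probe_register(dev, 0, [0, 1, 2, 3])
# From the observations one can infer the undocumented behavior:
# writing reg 0 with bit 0 set drives reg 1 high.
```

The point is only that each write produces an observable result, which is what makes the loop verifiable; whether an agent can drive real hardware this way is exactly what's in dispute here.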

by dist-epoch

2/27/2026 at 3:35:19 PM

> I don't have anything to win either.

Sure you do, you can prove those that doubt your views wrong.

> Sounds exactly the kind of stuff coding agents thrive at - a verifiable loop. And they can do it 24x7 until done.

Go for it then, you're not putting in any work into it other than giving it a task to do.

by ZenoArrow

2/27/2026 at 3:38:24 PM

I'm sure you know what opportunity cost is

by dist-epoch

2/27/2026 at 4:32:44 PM

Haha, are you trying to suggest you'll have lost much by putting an AI tool to the test? You seem to think it's powerful enough to do the work of porting Alpine Linux (or equivalent) to new hardware without human intervention (beyond the initial prompt), what exactly are you losing by trying this out? It's not your time, as you would have spent less time on giving a simple instruction to an AI tool than you spent in talking to me.

Perhaps the reality is that you know AI needs more hand-holding than this, and the tools aren't up to the task you're thinking of setting them.

by ZenoArrow

2/27/2026 at 9:03:58 PM

I never said it requires zero hand holding.

You are also strangely fixated on today's capabilities, completely missing the exponential we are on.

In a few months we will have posts here from device driver writers explaining how they hooked up a phone to an Arduino and a video camera, and how the AI is automatically writing device drivers.

by dist-epoch

2/27/2026 at 9:49:55 PM

> You are also strangely fixated on today's capabilities

I am talking about today's capabilities because this comment thread started with the suggestion that the benefits of AI for coding were no longer avoidable after the launch of Codex 5.3.

> In a few months we will have posts here from device driver writers explaining how they hooked up a phone to an Arduino and a video camera, and how the AI is automatically writing device drivers.

A few months? Almost zero chance. If it happens in the next 5 years I'd be less surprised, but I suspect it'll take longer.

by ZenoArrow

2/27/2026 at 2:59:34 PM

Fun that you had to caveat it with some hand wavy homework bull. Gives you a nice get out of jail free clause when inevitably an AI writes an OS.

by fartfeatures

2/27/2026 at 3:04:51 PM

> Fun that you had to caveat it with some hand wavy homework bull.

Not really. If AI is just copying someone else's code, it's not really designing it, is it? If you want it to truly design something, it needs to be designing it under the same constraints that the human engineers would face, which means it doesn't get the luxury of copying from others; it has to design things like device drivers with the same level of information that human engineers get (e.g. device specifications and information gathered through trial and error).

by ZenoArrow

2/27/2026 at 3:05:51 PM

Are you suggesting that a human being writes an OS in a vacuum, without seeing any other OS or looking into how it is built? That feels a little facetious, no?

by fartfeatures

2/27/2026 at 3:32:52 PM

> Are you suggesting that a human being writes an OS in a vacuum, without seeing any other OS or looking into how it is built? That feels a little facetious, no?

No, I'm suggesting in order for it to be a fair test, you need to impose the same restrictions that a human engineer would face.

For example, consider the work done by the Nouveau team in building a set of open source GPU drivers for NVIDIA GPUs. When they started out the specs were not so widely available. They could look at how GPU drivers were developed for other GPUs, but that is not going to be a substitute for exploratory work. Let's see how well AI does at that exploratory work. I think you'll find it's a lot harder than common uses for AI today.

by ZenoArrow

2/27/2026 at 2:52:56 PM

This stat is grossly inflated. I don't disagree with the general point but adoption isn't that high yet and certainly not for codex specifically.

by surajrmal

2/27/2026 at 2:53:48 PM

sure, but how do you make irrelevant something which is already irrelevant (PostmarketOS)?

by dist-epoch