alt.hn

4/8/2026 at 3:26:03 PM

What is RISC-V and why it matters to Canonical

https://ubuntu.com/blog/risc-v-101-what-is-it-and-what-does-it-mean-for-canonical

by fork-bomber

4/10/2026 at 10:40:23 PM

Will RISC-V end up with the same (or even worse) platform fragmentation as ARM? Because of the absence of any common platform standard, we have phones that are only good for landfill once their support lifetime is up, and drivers that never get upstreamed to the Linux kernel (or whose upstreaming is not even possible, due to the completely quixotic platforms and boot protocols each manufacturer creates). RISC-V allows even greater fragmentation in the portions of the instruction set each CPU supports; e.g., one manufacturer might decide MUL/DIV (the "M" extension) are not needed for their CPU.

by storus

4/11/2026 at 10:06:02 AM

PC/x86 was an extreme outlier, sadly, and it was because of the Microsoft/Intel business model. The architecture details were historically decided mostly by Wintel, yet the system integration was done by many vendors, whose best interest was to stay as compatible as possible. It's unlikely that another platform would be able to reach this state; the PC architecture was subsidized by the M$ software monopoly that nobody would want to suffer through again.

by tliltocatl

4/10/2026 at 10:50:59 PM

RVA23 is the standard target for compilers now. If you implement newer extensions, it'll take a while before software catches up (just like SVE on ARM or AVX on x86).

If you try to make your own extensions, the standard compiler flags won't support them, and they'll probably be limited to your own software. If one is actually good, you'll have to get everyone on board with a shared, open design, then get it added to a future RVA standard.

by hajile

4/11/2026 at 3:36:22 AM

Compiling the code is not the issue. The hard part is the system integration, most notably the boot process and peripherals. It's not actually hard to compile code for any given ARM or x86 target. Even much less open ecosystems like IBM mainframes have free and open source compilers (e.g. GCC). The ISA is just how computation happens. But you have to boot the system and get data in and out for the system to be actually useful, and pretty much all of that contains vendor-specific quirks. It's really only the x86 world where that got so standardized across manufacturers, and that was mostly because people were initially trying to make compatible clones of the IBM PC.

by MobiusHorizons

4/10/2026 at 11:28:31 PM

Thanks, that however addresses only part of the problem. ARM also suffers from having no boot/initialization standard: each manufacturer does it their own way instead of what the PC had with BIOS or UEFI, making ARM devices incompatible with each other. I believe the same holds for RISC-V.

by storus

4/10/2026 at 11:50:22 PM

There is a RISC-V Server Platform Spec [0] on the way that is supposed to standardise SBI, UEFI and ACPI for server chips, and it is expected to be ratified next month. (I have not read it myself yet.)

[0]: https://github.com/riscv-non-isa/riscv-server-platform

by Findecanor

4/10/2026 at 11:40:51 PM

There has been a concerted effort to start working on these kinds of standards, but it takes time to develop them and reach a consensus.

Some stuff like BRS (Boot and Runtime Services Specification) and SBI (Supervisor Binary Interface) already exists.

by hajile

4/10/2026 at 10:58:01 PM

The answer is unequivocally yes: RISC-V is designed to be customizable, and a vendor can put whatever they like into a given CPU. That being said, the profiles and platform specs are designed to limit fragmentation. The modular design and small essential core ISA also make fat binaries much more straightforward to implement than on other ISAs.
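To make "fat binaries" concrete: a loader or library can parse the machine's ISA string and dispatch to the best-matching code path at startup. A minimal sketch in Python, where the function names and the dispatch tiers are my own invention for illustration, not a real API:

```python
def parse_isa(isa: str) -> set[str]:
    """Split a RISC-V ISA string (e.g. the 'isa' line Linux shows in
    /proc/cpuinfo, like 'rv64imafdc_zba') into extension names."""
    isa = isa.lower()
    assert isa.startswith(("rv32", "rv64"))
    base, *multi = isa[4:].split("_")
    exts = set(base)            # single-letter extensions: i, m, a, f, d, c, v...
    exts.update(multi)          # multi-letter extensions: zba, zbb, zicsr...
    if "g" in exts:             # 'g' is shorthand for imafd + zicsr + zifencei
        exts.discard("g")
        exts.update({"i", "m", "a", "f", "d", "zicsr", "zifencei"})
    return exts

def pick_variant(exts: set[str]) -> str:
    """Choose a code path in a hypothetical fat binary based on what the
    core actually implements, falling back to the base integer ISA."""
    if "v" in exts:
        return "vector kernel"
    if "m" in exts:
        return "scalar kernel with hardware MUL/DIV"
    return "pure RV32I/RV64I fallback"
```

The same comment's point about a vendor omitting the "M" extension shows up directly here: `pick_variant(parse_isa("rv32i"))` lands on the software multiply/divide fallback.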

by indolering

4/10/2026 at 11:27:48 PM

You can choose to develop proprietary extensions, but who’s going to use them?

A great case study is the companies that implemented the pre-release vector standard in their chips.

The final version is different in a few key ways. Despite substantial similarities to the ratified version, very few people are coding SIMD for those chips.

If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.

The only place I see proprietary extensions surviving is the embedded space, where they already do this kind of stuff, but even there it seems to be the exception with the RISC-V chips I've seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).

by hajile

4/11/2026 at 3:47:40 AM

Yes, extensions are perfect for embedded. But not just there.

Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.

by LeFantome

4/11/2026 at 6:47:11 AM

Just like everything else outside the PC, which only standardized thanks to clones becoming a thing.

One reason UNIX became widely adopted, besides being freely available versus the other OSes, was that it allowed companies to abstract their hardware differences, offering some market differentiation while keeping some common ground.

Those phones' common ground is called Android, with a Java/Kotlin/C/C++ userspace; folks should stop seeing them as GNU/Linux.

by pjmlp

4/11/2026 at 3:18:10 AM

> Will RISC-V end up with the same (or even worse) platform fragmentation as ARM?

Sadly, yes. RISC-V vendors are repeating literally every single mistake that the ARM ecosystem made and then making even dumber ones.

by bsder

4/10/2026 at 9:32:51 PM

> Enabling new business models

This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyperscaler eventually gets to a size where developing in-house CPU talent is just straight up better (Qcom and Ventana + Nuvia, Meta and Rivos, Google's been building their own team, Nvidia and Vera-Rubin, God help Microsoft though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, which currently licenses designs but is rumored to be developing its own in-house team [1].

> Extensibility powers technology innovation

>> While this flexibility could cause problems for the software ecosystem...

"While" is doing some incredible heavy lifting. It is not enough to be able to run Ubuntu, as may be sufficient for embedded applications, but to also be fast. Thusly, there are many hardcoded software optimizations just for a CPU, let alone ARM or x86. For RISC-V? Good luck coding up every permutation of an extension that exists, and even if it's lumped as RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.

> How mature is the software ecosystem?

10 years ago, when RISC-V was invented, the founders said 20 years. 10 years later, I say 30 years.

The nature of hardware, too, is that the competition (ARM) is not standing still. The reason for ARM's dominance now is the failure of Intel, and the strong-arming of Apple.

I have worked in and on RISC-V chips for a number of years, and while I am still a believer that it is the theoretical end state, my estimates just feel like they're getting longer and longer.

[1]: https://www.reuters.com/business/anthropic-weighs-building-i...

by ljhsiung

4/11/2026 at 6:12:04 AM

> good luck parsing through 100 different "performance optimization manuals" from 100 different companies.

IMO this is pretty misguided. If you're writing above the assembly level, you can read the performance optimization manual for Intel, and that code will also be really fast on AMD (or even Apple/Graviton). At the assembly level, compilers need to know a bit more, but mostly those are small details, and if they get roughly the right metrics, the code they produce is pretty good.

by adgjlsfhk1

4/10/2026 at 10:05:35 PM

I stopped listening to what Canonical says. They often get involved in things, disturb the ecosystem, then abandon stuff or dig a "not invented here" hole.

Unity, Bazaar, Mir, Upstart, Snap, etc.

All of them had existing, well-established projects that Canonical attempted to uproot for no purpose other than wanting more control, yet they can't actually operate or maintain that control.

by ddtaylor

4/11/2026 at 3:38:20 AM

Ubuntu Touch... I was so excited about it that I bought one of the phones with it preloaded. I even used it as my sole daily driver for months, until I learned that I was not receiving all calls made to me. Even after that I kept hoping it would keep developing so that I could pick it up again one day. But then Canonical abandoned it instead. That's when they became as good as dead to me.

by popcornricecake

4/11/2026 at 3:48:47 AM

Sadly, KDE and GNOME each spent a lot of time on the same things. Plasma Mobile has eaten more time that could have gone into making Plasma a better desktop.

by ddtaylor

4/11/2026 at 8:32:55 AM

That's a strange argument. Open source software including Plasma Mobile is developed by volunteers who choose to spend their time on a given project. I am quite happy with the pace of Plasma Desktop and the progress made in the past 3 years on its 6th iteration.

by kombine

4/11/2026 at 12:16:22 AM

The project bzr was trying to uproot may not be the one you're thinking of. The first release of Bzr predates Git by about a month.

by loloquwowndueo

4/11/2026 at 3:50:57 AM

Correct, and I used bzr quite a bit during that time. It was interesting in some ways, but Canonical pushed it for many years after git was obviously the better choice.

Even to this day there is a complex and archaic process for using Launchpad, where Git is tacked on because they stuck with Bazaar for so long.

by ddtaylor

4/10/2026 at 10:48:26 PM

Also LXD → Incus: https://linuxcontainers.org/lxd/

by justinclift

4/10/2026 at 11:10:10 PM

Or ansible/chef/etc -> Juju. There's a lot of NIH to pick from at Canonical.

by duskwuff

4/11/2026 at 10:17:01 AM

Or Debian -> Ubuntu

by goodpoint

4/10/2026 at 11:43:00 PM

In a way it's really sad how many swings and misses Canonical has taken in its history.

by Redoubts

4/11/2026 at 10:24:05 AM

I'm fine with a company getting things wrong from time to time. What I don't like is the attitude where they walk into the room and start moving the furniture around while smugly dismissing or ignoring talented and established people. Then after a bit of milling around they just give up and leave the room and everyone has to clean up the mess.

by ddtaylor

4/11/2026 at 5:45:00 AM

I miss Ubuntu One, their Dropbox alternative which came with a wee integrated Linux client. IIRC, their free tier was also more generous in comparison.

by unmole

4/11/2026 at 8:40:43 AM

I was honestly wishing Ubuntu would keep Upstart alive. I preferred it as an init system.

by lukaslalinsky

4/11/2026 at 10:27:26 AM

That is half the problem. They often introduce neat ideas, but then fail or refuse to integrate them with the rest of the FOSS ecosystem. Then anyone who subscribed to their experiment is left cleaning up the mess and trying to migrate the features or ideas they like to the remaining projects that should have been extended in the first place.

by ddtaylor

4/10/2026 at 11:43:06 PM

Snap is definitely not abandoned.

by maxloh

4/11/2026 at 10:13:09 AM

Snap is terrible. It's the reason I stopped using Debian-based distros for desktop usage after decades.

Lying to users and turning apt install commands into shims for a barely functional replacement was disrespectful. Flatpak was and still is better, but even then: if I say I want a system package, give me a system package. If you have infrastructural reasons why you cannot continue to provide that package, then remove it; Debian-based systems have many ways to provide such things.

Canonical did it because they wanted to boost Snap usage, and it failed while sending a clear message that they don't respect their user base.

by ddtaylor

4/10/2026 at 11:54:08 PM

Sadly

by esperent

4/11/2026 at 7:43:07 AM

> Snap is definitely not abandoned.

You seem to say that like it's a good thing?

Can't wait for that thing to explode and die.

by ur-whale

4/10/2026 at 10:14:38 PM

Not sure on the timelines, but Snap, Upstart and Mir were all attempts at evolving the Linux ecosystem that lost to RedHat-backed systems. Unity was legit abandoned, and bazaar... Not sure what they were trying to solve there with git and forges already existing.

by unethical_ban

4/11/2026 at 12:25:54 AM

> bazaar... Not sure what they were trying to solve there with git and forges already existing.

You are mistaken here. Bazaar, Mercurial, and Git appeared at about the same time, and I think Bazaar was released first.

IIRC, Bazaar tried to distinguish itself by handling renames better than other version control systems. In practice, this turned out not to be very important to most people.

(Tangent: It wasn't clear at the time whether Mercurial or Git was the better pick. Their internal design was very similar. Mercurial offered a more pleasant user interface, superior cross-platform support, and a third advantage that I'm forgetting at the moment. Git had unbeatable author recognition. Eventually, Git's improved Windows support and the arrival of GitHub sealed its victory in the popularity contest. But all of that came to pass well after Bazaar was released.)

by foresto

4/11/2026 at 6:30:22 AM

Git's lightweight branch model mapped much better to the way development processes of medium to large projects actually work(ed).

Named branches vs. bookmarks in hg just meant bikeshedding about branching strategy. Bookmarks ultimately work more like lightweight Git-style branches, but they came later and originally couldn't even be shared (they were literally just local bookmarks). Named branches, on the other hand, permanently accumulate as part of the repository history.

Git came out with one cohesive branch design from day 1.

by omcnoe

4/11/2026 at 7:52:10 AM

I work on a mercurial hosted project right now. What ticks me off is all those unnamed heads you need to handle every time you pull other people's changes. Yes they're more flexible. Most of the time that just means extra operations for no good reason.

by nottorp

4/11/2026 at 8:35:55 AM

Yeah, agreed. I liked the idea of Mercurial branches better than git's — in principle I prefer more rather than less metadata in history — but they genuinely had a scaling problem. I can't recall the numbers, this being more than a decade ago, but I tested with a realistic number of branches for a team of developers using short-lived branches for a while and you could easily see Mercurial slowing down.

Back when I was testing, bookmarks were available, but Bitbucket was pretty much the only forge that supported Mercurial, and their tooling didn't support bookmarks, so that made them a non-starter for many users.

by jurip

4/11/2026 at 7:57:12 AM

That is very different from my experience with Git. I know that the kernel uses branches a lot, but that's probably because of Git's history with the project. At every company I've worked at, Git is used exactly the same way CVS or SVN was used many years ago: you make some local changes, you push those local changes to the central store, you forget about it. Branches make local switching between tasks easier, but apart from that nobody cares about branches, and they're definitely not treated as an important part of the repo. In fact, they're usually deleted immediately after the change is merged.

by usrnm

4/11/2026 at 8:09:53 AM

I think you have it swapped around. This is exactly the kind of workflow that Git provided better support for: lightweight branches, not an integral part of master history, deleted after merge.

by omcnoe

4/10/2026 at 11:35:03 PM

Wayland was created in 2008. Mir was created in 2013.

Bazaar and Git were created around the exact same time.

Unity was abandoned after a failed attempt to circumvent GNOME 3. I was actually involved with the development of Compiz, and they hired Sam to work on Unity, as he was one of the masterminds behind Compiz, but again they just didn't have the vision or execution to make it work.

by ddtaylor

4/11/2026 at 7:52:31 AM

Unity was great. After it was abandoned I tried GNOME 3 yet again (me, who in the past had contributed to Gtkmm), ended up moving to XFCE, and nowadays I am fully on macOS/Windows anyway.

If I ever go back to GNU/Linux full time, GNOME certainly won't be it.

by pjmlp

4/11/2026 at 10:21:42 AM

Things improved a lot with GNOME over the years, but as a fellow GNOME 2 user, the initial release of 3 and the following years were a real kick in the teeth.

The overall GNOME Foundation attitude hasn't improved, though. They are still very stubborn and remove basic features. This seemed to start with their infamous "focus groups", where they claimed users can't understand basic things.

I get the desire to provide a cohesive experience, but I think you can do that while also giving people control.

KDE is shaping up to be much better and it's likely because Valve is providing commercial support and exposing it to a larger audience.

Cosmic is the new kid, backed by System76, and it's pretty nice too; it may rescue GNOME in some ways in due time.

by ddtaylor

4/11/2026 at 12:17:43 AM

> Not sure what they were trying to solve there with git and forges already existing.

What?

Bzr predates Git (by a few days, but still). Launchpad predated GitHub by a lot. Canonical just played those cards horribly and lost.

by loloquwowndueo

4/11/2026 at 10:29:16 AM

I still maintain some Launchpad packages and recipes. It's an insanely archaic system and borderline non-functional. I wouldn't wish it upon most.

by ddtaylor

4/10/2026 at 11:43:48 PM

It’s canonically fucked

by sharts

4/10/2026 at 8:56:52 PM

Not my area of expertise, but what exactly is the difference between RISC-V and PowerPC? Didn't PowerPC get a good run in the 90s and 2000s? Just wondering why there's renewed interest in RISC-like architectures when industry already explored that area thoroughly.

by stuxnet79

4/10/2026 at 11:00:13 PM

The interest is BECAUSE it's well explored territory. The concept is proven and works fine.

On the low end where RISC-V currently lives, simplicity is a virtue.

On the high end, RISC isn't inherently bad; it just couldn't keep up with the massive R&D investment on the x86 side. It can go fast if you sink some money into it, as Apple, Qualcomm, etc. have done with ARM.

by invalidator

4/11/2026 at 3:58:21 AM

ARM is RISC and dominates x86 in most markets.

In 2026, RISC-V is not what I would call "low end". Look up the P870-D, Ascalon, or the C950.

Do you think Apple spends more money than Intel on chip design?

by LeFantome

4/11/2026 at 6:23:40 AM

> Do you think Apple spends more money than Intel on chip design?

Absolutely. Apple's R&D budget for 2025 was $34 billion to Intel's ~$18 billion (and the majority of Intel's R&D budget goes to architecture, while for Apple that is all TSMC R&D; Apple pays TSMC another ~$20 billion a year, of which something like $8 billion is probably TSMC R&D that goes into Apple's chips).

Sure, not all of Apple's $34B is CPU R&D, but on a like-for-like basis Apple probably has at least 50% more chip design budget (and they make only ~10-20 different chips a year, compared to Intel's ~100-200).

by adgjlsfhk1

4/11/2026 at 7:57:20 AM

ARM is mostly RISC, and it doesn't dominate x86 in desktops and servers.

Apple's business is vertical integration; they have zero presence in the chip market.

by pjmlp

4/10/2026 at 9:15:05 PM

It is Chinese companies looking for an ARM alternative that are pushing this otherwise mediocre ISA.

It is possible that ARM-based CPUs will start slowly eating the x86 market. See the Snapdragon X2 and the upcoming Nvidia CPU. Maybe in 10 years new computers will be ARM-based and a lot of IoT will run on RISC-V.

by Chyzwar

4/10/2026 at 9:23:50 PM

Why "mediocre"? I've written production assembly language for a half-dozen different processor architectures and RISC-V is my favorite by far.

by aappleby

4/10/2026 at 10:13:04 PM

You should write an article explaining to the common man why you like it.

by mikestorrent

4/11/2026 at 8:26:59 AM

Silly opinion that has no relevance to building competitive CPUs, but I like that RISC-V is modular and you can pick and choose which extensions to adopt.

Makes writing a simulator so easy (you just have to focus on RV32I to get started), and it also makes RISC-V a great bytecode alternative for a homegrown register-based virtual machine: chances are RV32I covers all the operations you will need in any Turing-complete VM. No need to reinvent the wheel. In a weekend I implemented all of RV32IM, passing all the official tests, and now I can target my VM with any major compiler (GCC, Rust) with no effort.
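For a sense of how little work RV32I decoding is, here is a minimal decode-and-execute sketch for two ALU instructions (my own illustration in Python, not part of the official test suite):

```python
def execute(inst: int, regs: list[int]) -> None:
    """Decode and execute one RV32I ALU instruction (ADD/SUB/ADDI only).
    regs is a 32-entry register file; x0 is hardwired to zero."""
    opcode = inst & 0x7F
    rd     = (inst >> 7)  & 0x1F
    funct3 = (inst >> 12) & 0x07
    rs1    = (inst >> 15) & 0x1F
    rs2    = (inst >> 20) & 0x1F
    if opcode == 0x33 and funct3 == 0:       # R-type: ADD or SUB
        if (inst >> 25) == 0x00:
            val = regs[rs1] + regs[rs2]      # ADD (funct7 = 0x00)
        else:
            val = regs[rs1] - regs[rs2]      # SUB (funct7 = 0x20)
    elif opcode == 0x13 and funct3 == 0:     # I-type: ADDI
        imm = inst >> 20
        if imm & 0x800:                      # sign-extend the 12-bit immediate
            imm -= 0x1000
        val = regs[rs1] + imm
    else:
        raise NotImplementedError(hex(opcode))
    if rd != 0:                              # writes to x0 are discarded
        regs[rd] = val & 0xFFFFFFFF

# add x3, x1, x2 encodes to 0x002081B3; addi x5, x0, -1 to 0xFFF00293
```

All the fields sit at fixed bit positions in every RV32I format, which is exactly why a usable interpreter fits in a weekend.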

If there is any architecture that scales linearly from the most minimal of low-energy cores to advanced desktop hardware, it is RISC-V.

Disclaimer: I don't know much about ARM, but 1) it isn't as open and 2) it's been around long enough to have accumulated as much historical cruft as x86.

by sph

4/10/2026 at 9:23:07 PM

"It is Chinese companies looking for ARM alternative"

The V in RISC-V represents the fifth iteration of the ISA over the last 46 years, most of which occurred in the US, mainly at Berkeley.

by topspin

4/11/2026 at 2:27:02 AM

They push it to save a couple of nickels per core on ARM licenses, not out of nationalistic fervor.

And it is the Chinese doing it because virtually 100% of all chips are made in China and Taiwan.

by avadodin

4/11/2026 at 3:49:42 AM

That's not really how it works. There are only a few companies on the planet licensed to create their own cores that can run ARM instructions. This is an artificial constraint, though, and at present China is (as far as I know) cut off from those licenses. Everyone else that makes ARM chips is taking the core design directly from ARM and integrating it with other pieces (called IP) like IO controllers, power management, GPUs and accelerators like NPUs to make a system on a chip. But with RISC-V, lots of Chinese companies have been making their own core designs, and that leads to flexibility in design that is not generally available (and certainly not cost effective) on ARM.

by MobiusHorizons

4/11/2026 at 2:30:55 AM

Your comment appeals to fallacies such as "it's old, so it's good" or "it was made by a prestigious university". It's not as if those early iterations were commercially produced and they learned from real-world usage. For the people who criticize the ISA, saying that it is old will not change their mind.

by charcircuit

4/11/2026 at 2:43:35 AM

Maybe. People are free to partake in whatever cognitive misadventures they wish. I merely cite the incontrovertible fact that Berkeley RISC predates essentially all of the modern economic history of China, and also the rise of ARM. It came from academe in the US, for better or worse, whether it's crap or the finest ISA ever, and for whatever purpose these US academics had or have. That is all anyone can truthfully say about its pedigree. The rest is just bullshit from the internet.

by topspin

4/11/2026 at 4:00:53 AM

SiFive, Tenstorrent, and other big RISC-V firms are not Chinese.

by LeFantome

4/11/2026 at 6:24:50 AM

You realize that every WD HDD and every Nvidia GPU from the past couple of years has a RISC-V core in it?

by adgjlsfhk1

4/11/2026 at 12:48:48 AM

Really? Didn't China pirate the entire ARM China company and start spamming cores like Star1

by bobmcnamara

4/11/2026 at 3:53:41 AM

There are many more RISC chips than not. Apple Silicon is RISC. All ARM is RISC (e.g. Raspberry Pi).

by LeFantome

4/10/2026 at 10:12:30 PM

x86_64 machines are RISC under the hood and have been for ages, I believe; microcode is translating your x64 instructions to risc instructions that run on the real CPU, or something akin to that. RISC never died, CISC did, but is still presented as the front-facing ISA because of compatibility.

by mikestorrent

4/10/2026 at 11:05:39 PM

That's a common factoid that's bandied about but it's not really accurate, or at least overstated.

To start, modern x86 chips are more hard-wired than you might think; certain very complex operations are microcoded, but the bulk of common instructions aren't (they decode to single micro-ops), including ones that are quite CISC-y.

Micro-ops also aren't really "RISC" instructions that look anything like most typical RISC ISAs. The exact structure of the microcode is secret, but as an example, the Pentium Pro used 118-bit micro-ops when most contemporary RISCs were fixed at 32 bits. Most microcoded CPUs, anyway, have microcodes that are in some sense simpler than the user-facing ISA but also far lower-level and more tied to the microarchitecture.

But I think most importantly, this idea itself - that a microcoded CISC chip isn't truly CISC, but just RISC in disguise - is kind of confused, or even backwards. We've had microcoded CPUs since the 50s; the idea predates RISC. All the classic CISC examples (8086, 68000, VAX-11) are microcoded. The key idea behind RISC, arguably, was just to get rid of the friendly user-facing ISA layer and just expose the microarchitecture, since you didn't need to be friendly if the compiler could deal with ugliness - this then turned out to be a bad idea (e.g. branch delay slots) that was backtracked on, and you could argue instead that RISC chips have thus actually become more CISC-y! A chip with a CISC ISA and a simpler microcode underneath isn't secretly a RISC chip...it's just a CISC chip. The definition of a CISC chip is to have a CISC layer on top, regardless of the implementation underneath; the definition of a RISC chip is to not have a CISC layer on top.

by wk_end

4/11/2026 at 5:40:11 AM

I think you are conflating microcode with micro-ops. The distinction is very important to the fundamental workings of the CPU. Microcode is an alternative to a completely hard-coded instruction decoder: it allows tweaking the behavior of the rest of the CPU for a given instruction without re-making the chip. Micro-ops are a way to break complex instructions into multiple independently executing instructions, and in the case of x86 I think comparing them to RISC is completely apt.

The way I understand it, back in the day when RISC vs CISC battle started, CPUs were being pipelined for performance, but the complexity of the CISC instructions most CPUs had at the time directly impacted how fast that pipeline could be made. The RISC innovation was changing the ISA by breaking complex instructions with sources and destinations in memory to be sequences of simpler loads and stores and adding a lot more registers to hold the temporary values for computation. RISC allowed shorter pipelines (lower cost of branches or other pipeline flushes) that could also run at higher frequencies because of the relative simplicity.

What Intel did went much further than just microcode. They broke up the loads and stores into micro-ops using hidden registers to store the intermediates. This allowed them to profit from the innovations that RISC represented without changing the user-facing ISA. But this internal load/store architecture is what people typically mean by the RISC hiding inside x86 (although I will admit most of them don't understand the nuance). Of course Intel also added out-of-order execution to the mix, so the CPU is no longer a fixed-length pipeline but more like a series of queues waiting for their inputs to be ready.
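The cracking described above can be pictured with a toy example (purely conceptual; real x86 micro-op formats are proprietary and look nothing like this):

```python
def crack(instruction: str) -> list[str]:
    """Split a CISC-style 'op dst, src' instruction with a memory source
    into load + ALU micro-ops, using a hidden temporary register t0.
    Purely illustrative of the idea, not any real decoder."""
    op, dst, src = instruction.replace(",", "").split()
    if src.startswith("["):                   # memory source operand
        return [f"load  t0, {src}",           # micro-op 1: load/store unit
                f"{op}   {dst}, {dst}, t0"]   # micro-op 2: ALU, reg-only
    return [f"{op}   {dst}, {dst}, {src}"]    # register source: one micro-op
```

For example, `crack("add rax, [rbx+8]")` yields a load into the hidden register followed by a register-only add, which is the load/store-architecture shape the comment describes; a register-to-register add stays a single micro-op.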

These days high performance RISC architectures contain all the same architectural elements as x86 CPUs (including micro-ops and extra registers) and the primary difference is the instruction decoding. I believe AMD even designed (but never released) an ARM cpu [1] that put a RISC instruction decoder in front of what I believe was the zen 1 backend.

[1]: https://en.wikipedia.org/wiki/AMD_K12

by MobiusHorizons

4/11/2026 at 3:24:16 AM

That's an excellent rebuttal to this common factoid.

Recently I encountered a view that has me thinking. They characterized the PIO "ISA" in the RPi MCU as CISC. I wonder what you think of that.

The instructions are indeed complex, having side effects, implied branches and other features that appear to defy the intent of RISC. And yet they're all single cycle, uniform in size and few in number, likely avoiding any microcode, and certainly any pipelining and other complex evaluation.

If it is CISC, then I believe it is a small triumph of CISC. It's also possible that even characterizing it as an ISA at all is folly, in which case the point is moot.

by topspin

4/10/2026 at 11:41:45 PM

Thanks for the detail, that's very clarifying

by mikestorrent

4/10/2026 at 11:09:37 PM

I think that this is something of a misunderstanding. There isn't a literal RISC processor inside the x86 processor with a tiny little compiler sitting in the middle. It's more that the out-of-order execution model breaks up instructions into μops so that the μops can separately queue at the core's dozens of ALUs, multiple load/store units, virtual-to-physical address translation units, etc. The units all work together in parallel to chug through the incoming instructions. High-performance RISC-V processors do exactly the same thing, despite already being "RISC".

by samsartor

4/10/2026 at 10:28:51 PM

Ah, PowerPC. For a RISC processor it surely had a lot of instructions, most of them quite peculiar. But hey, it had fixed-length instruction encoding and couldn't address memory in instructions other than "explicit memory load/store", so it was RISC, right?

by Joker_vD

4/11/2026 at 12:48:01 AM

Also byte-reversed load/store instructions, but no reverse-the-register instruction.

by bobmcnamara

4/10/2026 at 7:48:16 PM

I’m looking forward to using a RISC-V computer in 20 years

by mcdow

4/10/2026 at 9:24:39 PM

You're probably already using a RISC-V computer, it's just embedded as a supervisor in some other gadget (or vehicle) you own.

by aappleby

4/11/2026 at 12:17:48 AM

I look forward to running my _own_ software on a RISC-V computer.

by themafia

4/10/2026 at 8:25:22 PM

While its current performance is not competitive, there are interesting options. I got the Orange Pi RISC-V version, mainly to test RISC-V; while it's slow compared to other ARM SoCs, it's still better than I expected. There are even RISC-V TPUs now.

by 3abiton

4/10/2026 at 8:45:15 PM

This underestimates the will of governments and companies in Europe and especially China to reduce their dependency on US-controlled technology.

by ninth_ant

4/10/2026 at 9:50:26 PM

ARM isn't US-controlled, is it? British, and also now Japanese since it's owned by SoftBank.

Meanwhile, wouldn't China be more heavily invested in Loongson?

by wk_end

4/10/2026 at 11:13:50 PM

ARM is British (America’s closest ally) and proprietary. If you’re swapping, just eliminate the risk and cost entirely.

LoongArch is 32-bit instructions only. This means no MCUs due to poor code density. That forces them into RISC-V anyway, at which point you might as well pour all your money and dev time into one ISA instead of two. RISC-V has way more worldwide investment, meaning LoongArch looks like a losing horse in the long term when it comes to software.

by hajile

4/11/2026 at 1:18:12 AM

Quite the contrary, the fragmented ecosystem is holding RISC-V back.

There are currently three variants of the LoongArch ISA; the reduced 32-bit version targets MCUs. And LoongArch64 ATX/mATX motherboards with UEFI support are readily available, which makes it far easier to develop with LoongArch.

by gggmaster

4/10/2026 at 9:23:40 PM

I hope our complacent companies get a shot of competition.

by Tostino

4/10/2026 at 9:46:52 PM

I already have one! (But it's technically a soldering iron...)

by bityard

4/11/2026 at 4:36:22 AM

Pinecil?

by LeFantome

4/10/2026 at 9:39:02 PM

I think 10 years is a more realistic estimate. Probably first in servers and Android phones.

by IshKebab

4/10/2026 at 11:35:39 PM

They are everywhere already in microcontrollers like ESP32.

by ThatMedicIsASpy

4/11/2026 at 8:59:45 AM

Yeah but op was talking about directly using a RISC-V computer. The embedded RISC-V CPUs are effectively black boxes.

by IshKebab

4/10/2026 at 8:01:21 PM

unironically, this.

I've been hearing about ARM computers for almost twenty years, and only just recently have general-purpose, decently-priced ARM laptops been released (Qualcomm laptops, the MacBook neo).

And ARM desktops are still not a thing, in practice.

by znpy

4/10/2026 at 8:05:12 PM

Well, Apple M1/M2/etc. are, technically, ARMv8, and they're available as desktops.

by Joker_vD

4/10/2026 at 8:13:40 PM

Also the Acorn Archimedes is, technically, an ARM / RISC desktop.

by Joeboy

4/10/2026 at 10:12:34 PM

Distant memories of a 1980s London classroom.

by bluebarbet

4/11/2026 at 10:01:08 AM

They're not general-purpose in the sense that you can run any operating system on them, nor are they decently priced.

by znpy

4/10/2026 at 10:34:35 PM

> arm desktop are still not a thing

The desktop market is not the only product space anymore.

Apple has had brilliant success with its ARM processors, proving that ARM is more than capable. Before Apple's switch, Chromebooks had been using ARM since 2011.

Android is the dominant operating system in mobile and most Android devices use the ARM platform. Many of these devices have desktop capability -- they are a viable convergence platform.

by heresie-dabord

4/10/2026 at 8:10:02 PM

I think the Surface Laptops (2018?) count, and arguably the previous models (2012+) sorta-kinda count (tablet + keyboard).

Side note: It's kinda funny to me that "the keyboard is detachable, the screen is glass and you can touch/write on it" makes it "lesser" than a laptop rather than being an upgrade.

But yeah, definitely happy to see more in this space. Now we just need e-Paper laptops to take off as well :)

by andai

4/10/2026 at 9:40:10 PM

Donald Trump might make it five.

by wg0

4/11/2026 at 7:41:10 AM

I've played with a bunch of RISC-V platforms, mostly SBCs in the raspi class.

Beyond the potential platform fragmentation due to the variability of the ISA (a very unfortunate design choice IMO), mentioned elsewhere in this thread, what I find most frustrating is the boot process / equivalent of BIOS in that world.

My impression: a complete lack of standardization, a ton of ad-hoc vendor-specific tools, a complete mess, especially when it comes to getting the board to boot from devices the vendor didn't target (e.g. SSDs).

Until two things happen:

1. a CPU with somewhat competitive compute power appears (so far, all the SBCs I've tried are way behind ARM and x86)

2. a unified boot environment which supports a broad standard of devices to boot from (SSD, network, SD card, hard drives, etc.)

the whole RISC-V thing will remain a tiny niche, especially because when a vendor loses interest in the platform, all of the software native to that platform rots immediately (not that it was particularly good quality in the first place).

by ur-whale

4/10/2026 at 8:59:37 PM

Huh? that link returned:

    Your submission was sent successfully! Close

    Thank you for contacting us. A member of our team will be in touch shortly. Close

    You have successfully unsubscribed! Close

    Thank you for signing up for our newsletter!
    In these regular emails you will find the latest updates about Ubuntu 
    and upcoming events where you can meet our team. Close

    Your preferences have been successfully updated. Close notification

    Please try again or file a bug report. Close

by Animats

4/10/2026 at 9:16:58 PM

There's an email signup box on the right side on desktop, or bottom of the page on mobile. Maybe you somehow managed to hit it, or see it during some component update.

by shakna