alt.hn

4/25/2025 at 6:32:23 PM

Next-Gen GPU Programming: Hands-On with Mojo and Max Modular HQ

https://www.youtube.com/live/uul6hZ5NXC8?si=mKxZJy2xAD-rOc3g

by solarmist

4/25/2025 at 6:55:18 PM

I'm really hoping Modular.ai takes off. GPU programming seems like a nightmare, so I'm not surprised they felt the need to build an entirely new language to tackle that bog.

by solarmist

4/25/2025 at 7:54:46 PM

There are already plenty of languages in the CUDA world; that is one of the reasons it is favoured.

The problem isn't the language; it's how to design the data structures and algorithms for GPUs.

by pjmlp

4/25/2025 at 8:29:16 PM

Not sure I fully understand your comment, but I'm pretty sure the talk addresses exactly that.

The primitives and pre-coded kernels provided by CUDA (it solves for the most common scenarios first and foremost) are what's holding things back. To get those algorithms and data structures down to the hardware level, you need something flexible that can talk directly to the hardware.
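To make that concrete, here's a rough CUDA C++ sketch of the kind of thing the pre-coded libraries can't hand you: a fused elementwise op that no stock kernel provides, so it has to be written by hand against the hardware (illustrative only, not something from the talk):

    // Fused scale + shift + ReLU in one pass: a custom op a library is
    // unlikely to ship, so it becomes a hand-written kernel.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void fused_scale_shift_relu(const float* in, float* out,
                                           float scale, float shift, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) {
            float v = in[i] * scale + shift;
            out[i] = v > 0.0f ? v : 0.0f;               // ReLU in the same pass
        }
    }

    int main() {
        const int n = 1 << 20;
        float *in, *out;
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = float(i - n / 2);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        fused_scale_shift_relu<<<blocks, threads>>>(in, out, 0.5f, 1.0f, n);
        cudaDeviceSynchronize();

        printf("out[n-1] = %f\n", out[n - 1]);
        cudaFree(in);
        cudaFree(out);
        return 0;
    }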

by solarmist

4/25/2025 at 9:29:04 PM

C, C++, Fortran, and Python JIT from NVidia, plus Haskell, .NET, Java, Futhark, Julia from third parties, and anything else that can be bothered to create a backend targeting PTX, NVVM IR, or now cuTile.

The pre-coded kernels help a lot, but you don't necessarily have to use them.
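For anyone following along, "pre-coded kernels" means things like cuBLAS; a minimal sketch (mine, not from the parent) where the whole GPU side is one library call:

    // SAXPY (y = alpha*x + y) using the library-provided kernel: no custom
    // GPU code is written at all. Build with: nvcc saxpy.cu -lcublas
    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const int n = 1 << 20;
        const float alpha = 2.0f;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        cublasHandle_t handle;
        cublasCreate(&handle);
        cublasSaxpy(handle, n, &alpha, x, 1, y, 1);  // tuned kernel from cuBLAS
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);                 // 2*1 + 2 = 4
        cublasDestroy(handle);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }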

by pjmlp

4/25/2025 at 7:48:01 PM

GPU programming isn't really that bad. I am a bit skeptical that this is the way to solve it. The issue is that details do matter when you're writing stuff on the GPU. How much shared memory are you using? How is it scheduled? Is it better to inline or run multiple passes, etc.? Halide is the closest, I think.
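For example (my own rough CUDA sketch, nothing from the talk), a 1-D blur that stages a tile in shared memory: the tile width, halo size, and block size are exactly the details that make or break it, and no language picks them for you:

    // 1-D box blur with a shared-memory tile plus halo. RADIUS and BLOCK are
    // the tuning knobs the parent comment is talking about.
    #include <cuda_runtime.h>
    #include <cstdio>

    #define RADIUS 3
    #define BLOCK  128

    __global__ void blur1d(const float* in, float* out, int n) {
        __shared__ float tile[BLOCK + 2 * RADIUS];   // block's window + halo
        int gid = blockIdx.x * blockDim.x + threadIdx.x;
        int lid = threadIdx.x + RADIUS;

        tile[lid] = (gid < n) ? in[gid] : 0.0f;      // stage the centre element
        if (threadIdx.x < RADIUS) {                  // first RADIUS threads load halos
            int left = gid - RADIUS;
            int right = gid + BLOCK;
            tile[lid - RADIUS] = (left >= 0) ? in[left] : 0.0f;
            tile[lid + BLOCK]  = (right < n) ? in[right] : 0.0f;
        }
        __syncthreads();

        if (gid < n) {
            float acc = 0.0f;
            for (int k = -RADIUS; k <= RADIUS; ++k) acc += tile[lid + k];
            out[gid] = acc / (2 * RADIUS + 1);
        }
    }

    int main() {
        const int n = 1 << 16;
        float *in, *out;
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;

        blur1d<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(in, out, n);
        cudaDeviceSynchronize();
        printf("out[100] = %f\n", out[100]);         // 1.0 away from the edges
        cudaFree(in);
        cudaFree(out);
        return 0;
    }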

by mirsadm

4/25/2025 at 8:34:48 PM

What are you skeptical of? I believe the problem this is solving is providing a framework that isn't CUDA, allows low-level access to the hardware, makes it easy to write kernels, and isn't Nvidia-only. If you watch the video, you can see that you can write directly in asm if you need to. You have full control if you want it. But it also provides primitives and higher-level objects that handle the common cases.
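The thread doesn't show what Mojo's asm escape hatch looks like, but for comparison, the CUDA C++ equivalent is inline PTX; a tiny sketch of mine, not from the video:

    // Each thread reads its lane index (its position within the 32-thread
    // warp) straight from a PTX special register, below any C++ abstraction.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void lane_ids(unsigned int* out) {
        unsigned int lane;
        asm volatile("mov.u32 %0, %%laneid;" : "=r"(lane));
        out[blockIdx.x * blockDim.x + threadIdx.x] = lane;
    }

    int main() {
        const int n = 64;
        unsigned int* out;
        cudaMallocManaged(&out, n * sizeof(unsigned int));
        lane_ids<<<1, n>>>(out);
        cudaDeviceSynchronize();
        printf("thread 35 is lane %u\n", out[35]);   // expected: 3, i.e. 35 mod 32
        cudaFree(out);
        return 0;
    }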

I'm a novice in the area, but Chris is well respected here and cares a lot about performance.

by solarmist

4/25/2025 at 8:37:02 PM

[dead]

by varelse

4/25/2025 at 7:50:20 PM

It is a noble cause. I've spent ten years of my life using CUDA professionally, outside the AI domain mind you. For most of those years, there was a strong desire to break away from CUDA and the associated Nvidia tax on our customers. But one thing we didn't want was to move from depending on CUDA to depending on another intermediary, which would also mean a financial drain, like the enterprise licensing these folks want to use. Sadly, open source alternatives weren't fostering much confidence either, whether because of their limited feature coverage or because of not knowing if they would be supported in the long term (support for new hardware, fixes, etc.).

by diabllicseagull

4/26/2025 at 6:41:11 AM

Also, while as a language nerd I find Mojo cool, very few researchers will bother with it, given that NVIDIA is going full speed ahead with Python support in CUDA, as announced at GTC 2025, to the point of designing a new IR as the basis for their JIT.

Also, what NVIDIA is doing has full Windows support, while Mojo still isn't there, other than by making use of WSL.

by pjmlp

4/25/2025 at 7:51:08 PM

My mistake completely, but I thought this was going to be something to do with a new scheme or re-thinking of graphics programming APIs, like Metal, Vulkan or OpenGL. Now I'm kind of bummed that it is what it is, because I got really excited for it to be that other thing. =(

by catapart

4/25/2025 at 7:56:41 PM

That is already taking place with work graphs, and with making shader languages more C++-like.

by pjmlp

4/25/2025 at 8:39:23 PM

It seems like with it you will be able to compile and execute the same code on multiple GPU targets, though.

by ttoinou

4/25/2025 at 8:04:43 PM

There is a "hush-hush open secret" between minutes 31 and 33 of the video :)

by ashvardanian

4/25/2025 at 8:54:37 PM

TL;DR: the same binary runs on Nvidia and ATI today, but it's not announced yet.

by refulgentis

4/25/2025 at 11:22:08 PM

> Other Accelerators (e.g. Apple Silicon GPUs): free for <= 8 devices

From their license.

It's not obvious what happens when you have >8 users, with one GPU each (typical laptop users).

by Archit3ch

4/25/2025 at 8:27:09 PM

They desperately need to disable whatever noise cancellation they're using on the audio. It keeps cutting out and sounds terrible.

by throwaway314155

4/25/2025 at 8:29:57 PM

Yeah, the mic quality was terrible.

by solarmist