alt.hn

4/1/2026 at 8:38:24 PM

Training mRNA Language Models Across 25 Species for $165

by maziyar

4/4/2026 at 6:49:42 PM

The problem with models like this is that they're built on very little training data we can trace back to verifiable protein data. The Protein Data Bank, and other sources of training data for things like this, have a lot of broken structures in them and "creative liberties" taken to infer a structure from instrument data. It's a very complex process that leaves a lot open to interpretation.

On top of that, we don't have a clear understanding of how certain positions (conformations) of a structure affect underlying biological mechanisms.

Yes, these models can predict surprisingly accurate structures and sequences. Do we know if these outputs are biologically useful? Not quite.

This technology is amazing, don't get me wrong, but to the average person they might see this and wonder why we can't go full futurism and solve every pathology with models like these.

We've come a long way, but there's still a very very long way to go.

by seamossfet

4/4/2026 at 9:17:22 PM

How do we get more verifiable protein data? And even if we had better data, would we still not understand how the structure impacts the biology?

by stardust2

4/1/2026 at 8:38:33 PM

full article: https://huggingface.co/blog/OpenMed/training-mrna-models-25-...

by maziyar

4/4/2026 at 3:18:34 PM

What makes this dataset or problem worth solving compared to other health datasets? Would the results on this task be broadly useful to health?

by xyz100

4/4/2026 at 4:44:57 PM

What other "datasets" are you talking about? How do you "solve a dataset"?

by CyberDildonics

4/5/2026 at 5:39:49 AM

You solve a dataset when you learn what there is to learn about the phenomenon of interest. The limit of such a phenomenon is “cure all disease”, and clearly this is not solving that.

by xyz100

4/5/2026 at 1:43:56 PM

What are you talking about? "the phenomenon of interest"? There is nothing you wrote in either comment that makes sense.

What is a "dataset" that has been "solved" and what did the program do that 'solved' it?

by CyberDildonics

4/5/2026 at 10:21:09 PM

MNIST (the number classification task) has been “solved” a billion times and it is hard to imagine any subsequent advances there as scores using a variety of methods have hit the saturation point of accuracy. Any further improvements are likely overfitting to noise. Therefore, we know that it is easy to detect handwritten numbers. However, we may not know how to detect other things as well, like reading an MRI. Those datasets/tasks are clearly different and require different techniques. Training an LLM is likewise different.
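The "saturation" claim is easy to sanity-check: even an untuned linear classifier gets very high accuracy on handwritten digits. A minimal sketch, using scikit-learn's small 8x8 digits set as a stand-in for the full 28x28 MNIST benchmark (an assumption for brevity; the point about near-ceiling scores is the same):

```python
# Sanity check of "detecting handwritten numbers is easy": a plain
# linear classifier, no tuning, on sklearn's built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)  # off-the-shelf, no hyperparameter search
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # routinely well above 0.9 on this split
```

Modern convnets push MNIST itself past 99.7%, which is why further gains there mostly chase label noise.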

by xyz100

4/5/2026 at 11:16:06 PM

> has been “solved” a billion times

If it was really solved, wouldn't it just need to happen once?

You think classifying handwriting of 10 numbers is the same as this, which took someone 55 hours of GPU time to work through?

I have no idea what point you're trying to make and I can't tell if you do either. You were talking about "solving" other "health datasets" but you can't even come up with one or what that means.

by CyberDildonics

4/6/2026 at 6:43:11 PM

If you want to be literal with language, then do you ever really “solve” anything? Even tying your shoes is not solved. One day you may tie them better, but for practical purposes we can say it is solved.

Likewise, you can spend 55 hours of GPU time to produce very different things. Can those 55 hours cure cancer? Definitely not. Can it pick up correlations with a small subset of proteins that are perhaps not representative of practical problems? Probably. Can it learn a pattern to tie your shoes, given all your life experiences tying them? Sure.

I asked the question to determine what is the impact of the task and dataset. Curing cancer is huge, tying shoes is not. What are the strengths and limitations?

by xyz100

4/6/2026 at 7:35:46 PM

> If you want to be literal with language, then do you ever really “solve” anything?

You are the one who said it and you can't even explain what you meant, you just get mad that anyone would ask.

by CyberDildonics

4/6/2026 at 8:03:37 PM

Since I am hitting the reply depth: you “solve” a dataset or task when you translate the model into actual real-world problems by creating a model that actually “works” (not just scores high accuracy). What is the point of training the model otherwise, other than writing blog posts? Conversely, you can train a model that performs well on the dataset but is less useful in the real world.

This is a health dataset, there are many inputs and outputs to health (e.g., cell level, protein level, tumors, organs, etc.). In this case, it is mRNA focused, which is a broad category that translates to potentially immune responses like vaccines (exactly what kind of therapy, I’m not sure other than “25 species”). Once the model is trained, you can use it to solve real problems, perhaps to develop a therapy that makes its way to clinical trials and eventually actually treats some disease. The model by itself is useless without the ability to have that impact.

So for other examples: take any disease (e.g., Covid19), create a dataset to mirror that problem using some technique (e.g., Covid19 mRNA prediction of some sort), and solve it to create a treatment (e.g., a safe and effective vaccine). Obviously, you can say the vaccine can be improved so it is not “solved”, but most people would be quite happy with an “almost cure for cancer” even if it wasn’t literally optimal (we don’t even know if a cure for cancer is possible).

My suggestion and question to the author is to outline what the implications of the work are, rather than focusing on accuracy statistics that are meaningless without such context.

by xyz100

4/6/2026 at 3:25:35 AM

yeah lol no shit. let's not get bothered by reactionaries...

by basyt

4/5/2026 at 3:57:46 AM

"Complete results, architectural decisions, and runnable code below."

This is a weird post; there doesn't seem to be any "below" here. Another comment linked the article: https://huggingface.co/blog/OpenMed/training-mrna-models-25-...

by nradclif

4/5/2026 at 5:00:30 PM

Yeah. Phrases like "Complete results, architectural decisions, and runnable code below." are literally how AI outputs stuff, so I'd expect the post was AI-written too. :(

by justinclift

4/4/2026 at 4:11:19 PM

Can someone explain what one might use this model for? As a developer with a casual interest in biology, it would be fun to play with, but honestly I'm not sure what I would do with it.

by rubicon33

4/4/2026 at 4:25:41 PM

You can get your feet wet with genetic engineering for surprisingly little money.

This guy shows a lot of how it's done: https://www.youtube.com/@thethoughtemporium

Basically you can design/edit/inject custom genes into things and see real results spending on the scale of $100-$1000.

by colechristensen

4/5/2026 at 12:41:44 AM

We actually did this in my high school genetics class back in 1999! We made bacteria change color by splicing in a gene. Awesome stuff.

The (public!) school had a grant from one of Seattle's biotech boom companies.

by com2kid

4/4/2026 at 4:40:02 PM

Is there something like this in text/readable format?

by someuser54541

4/4/2026 at 7:12:11 PM

My main concern is using fungi. If it ends up in my lungs I'm most likely screwed, right?

by _zoltan_

4/4/2026 at 8:02:00 PM

Yes, but most students produce their best work while infected.

by nurettin

4/4/2026 at 8:20:27 PM

This is the classic meme https://www.reddit.com/r/labrats/comments/mmv2ig/lab_strains...

Lab strains of things tend to be extremely sensitive and not human adapted. You shouldn't study and modify human-infecting organisms in your basement anyway. While you shouldn't ignore protective equipment and proper procedure... paranoia about infecting yourself with a lab leak isn't warranted.

by colechristensen

4/5/2026 at 7:04:24 PM

I'd love to experiment with this stuff, just literally have no idea how it would be safe to start.

by _zoltan_

4/5/2026 at 6:36:26 AM

A Codon-based model is cool. I know NVIDIA is building quite a large one.

At GTC they showed an SAE (sparse autoencoder) they built on a smaller version of it, allowing you to see what their model learned: https://research.nvidia.com/labs/dbr/blog/sae/

by jazzpush2

4/5/2026 at 3:39:42 AM

Interesting work. Looks like AI for science is having its day right now.

by dhruv3006

4/4/2026 at 3:28:22 PM

> In Progress: CodonJEPA

JEPA is going to break the whole industry :D

by khalic

4/4/2026 at 3:50:21 PM

Can you explain this? I haven't heard of JEPA, and from a quick search it seems to be vision/robotics based?

by digdugdirk

4/4/2026 at 4:42:16 PM

It’s a self-supervised learning architecture, and it’s pretty much universal. The loss function runs on embeddings, plus some other smart architectural choices throughout. Worth diving into for a few hours; Yann LeCun gives some interesting talks about it.
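The "loss on embeddings" idea can be sketched in a few lines. This is a toy numpy illustration of the JEPA pattern (all the shapes, names, and the linear "encoders" here are made up for the sketch, not any published implementation): a predictor tries to match the embedding of a held-out view produced by a slowly-updated target encoder, rather than reconstructing raw pixels or tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy JEPA-style setup: a context encoder, an EMA "target" encoder,
# and a predictor that maps context embeddings toward target embeddings.
D_in, D_emb = 16, 8
W_ctx = rng.normal(size=(D_in, D_emb)) * 0.1  # context encoder (trainable)
W_tgt = W_ctx.copy()                          # target encoder (EMA copy)
W_pred = np.eye(D_emb)                        # predictor (trainable)

def embed(x, W):
    return x @ W

x_context = rng.normal(size=(4, D_in))                     # visible view
x_target = x_context + 0.01 * rng.normal(size=(4, D_in))   # held-out view

# The JEPA loss compares *embeddings*, never raw inputs --
# unlike a masked-reconstruction objective.
pred = embed(x_context, W_ctx) @ W_pred
tgt = embed(x_target, W_tgt)  # treated as a constant target (no gradient)
loss = np.mean((pred - tgt) ** 2)

# The target encoder tracks the context encoder via an exponential
# moving average, which helps prevent representational collapse.
tau = 0.99
W_tgt = tau * W_tgt + (1 - tau) * W_ctx
```

Because the loss lives in embedding space, the same recipe transfers across modalities, which is why it could plausibly apply to codon sequences as well as vision.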

by khalic

4/4/2026 at 8:07:59 PM

HN's blindspots never cease to amaze me.

I am a structural biologist working in pharmaceutical design and this type of thing could be wildly useful (if it works).

by colingauvin

4/5/2026 at 5:07:15 PM

Blind spot?

by justinclift

4/4/2026 at 3:28:27 PM

What makes these domain-specific models work when we don’t have good domain models for health care, chemistry, economics, and so on?

by simianwords

4/4/2026 at 4:26:20 PM

>we don’t have good domain models for health care, chemistry, economics and so on

Who says we don't?

by colechristensen

4/4/2026 at 4:38:31 PM

Examples please?

by simianwords

4/4/2026 at 5:13:43 PM

No, it's really simple to search for domain-specific models being used "in production" all over the place.

by colechristensen

4/4/2026 at 5:16:42 PM

I didn’t find a single one that outperforms a general model.

by simianwords

4/4/2026 at 5:56:53 PM

Ok, alphafold.

by colechristensen

4/4/2026 at 6:01:21 PM

It’s not a large language model

by simianwords

4/4/2026 at 4:12:41 PM

Distributing the load on this will probably be infinitely more useful than Folding@home

by yieldcrv

4/6/2026 at 4:10:56 AM

[flagged]

by agenexus

4/4/2026 at 3:15:53 PM

gray goo of the future

by HocusLocus

4/4/2026 at 6:14:37 PM

hmmmm seems like some fake hype.

by skyskys