alt.hn

4/16/2026 at 5:37:20 PM

Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7

https://simonwillison.net/2026/Apr/16/qwen-beats-opus/

by simonw

4/16/2026 at 6:40:45 PM

Going to have to disagree on the backup test. Opus's flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.

I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the pelican.

by ericpauley

4/16/2026 at 7:52:33 PM

Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version.

But in terms of making something physically plausible, Opus certainly got a lot closer.

by wongarsu

4/16/2026 at 8:07:11 PM

Given adherence is a more significant practical barrier, it's probably the better signal. That is, if we decide to look for signal here.

by kmacdough

4/16/2026 at 11:53:55 PM

The fundamental challenge of AI is preventing unprompted creativity. I can spin up a random initialization and call all of its output avant-garde, if we want to get creative.

by BobbyJo

4/17/2026 at 12:29:04 AM

I recently fell down the rabbit hole of AI-generated videos, and realised that many of the "flaws" that make them distinctive, such as objects morphing and doing unusual things, would've been nearly impossible to create, or would've required very advanced CGI.

by userbinator

4/16/2026 at 10:40:43 PM

[flagged]

by doobiedowner

4/17/2026 at 3:38:58 AM

"Artistically interesting" is IMHO both a subjective and a 'solved' problem. These models are trained with an "artistically interesting" reward model that tries to guide them towards higher-quality images.

I think getting the models to generate realistic and proportional objects is a much harder and more important challenge (remember when the models would generate 6 fingers?).

by itake

4/17/2026 at 5:47:30 AM

The Opus bike isn't very physically plausible though.

by tpm

4/17/2026 at 1:27:47 AM

Qwen, at least, can draw a complete bicycle frame. The Opus frame will snap in half and can’t steer.

by kube-system

4/17/2026 at 9:12:46 PM

Qwen's frame is so strong that it broke both feet off the pelican.

by gowld

4/17/2026 at 10:06:49 PM

Clearly he's riding a fixie and trying to stop. Pelican didn't drink his Ovaltine.

by kube-system

4/16/2026 at 9:01:52 PM

Even the first one: Qwen added extra details in the background, sure. But the pelican itself is a stork with a bent beak, and its feet are cut off at its legs. While impressive for a local model, I don't think it's a winner.

by tecoholic

4/16/2026 at 9:31:09 PM

Did you see the Opus bike for that same test, though? I know it's about the flamingo, but that one is bad.

by mejutoco

4/16/2026 at 9:48:54 PM

It's a 3B model. It should not be this close. Debating their artistic qualities is missing the point.

by irthomasthomas

4/17/2026 at 12:21:06 AM

35B, but your point stands I think.

by monocasa

4/16/2026 at 7:35:05 PM

For coding, Qwen 3.6 35B A3B solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size Qwen 3.5. So it's at best very slightly improved, and not at all in the class of Qwen 3.5 27B dense (26 solved), let alone Opus (95/98 solved, for 4.6).

by jbellis

4/16/2026 at 9:32:21 PM

This has similar problems to SWE-bench, in that models are likely trained on the same open source projects that the benchmark uses.

https://blog.brokk.ai/introducing-the-brokk-power-ranking/

by kristianp

4/16/2026 at 9:55:45 PM

If all models are trained on the benchmark data, you cannot extrapolate the benchmark scores to performance on unseen data, but the ranking of different models still tells you something. A model that solves 95/98 benchmark problems may turn out much worse than that in real life, but probably not much worse than the one that only solved 11/98 despite training on the benchmark problems.

This doesn't hold if some models trained on the benchmark and some didn't, but you can fix this by deliberately fine-tuning all models for the benchmark before comparing them. For more in-depth discussion of this, see https://mlbenchmarks.org/11-evaluating-language-models.html#...

by yorwba

4/17/2026 at 9:29:52 AM

It is much faster, though. On my M1 Max, describing a picture (a quick way to get a pretty large context):

Qwen 3.6 35B A3B: 34 tok/sec

Qwen 3.5 27B: 10 tok/sec

Qwen 3.5 35B A3B: doesn't support image input

by spwa4

4/17/2026 at 5:09:27 PM

I've been using Qwen 3.5 35B-A3B with images as input, so I suspect you didn't include the vision part of the model during testing (I use llama.cpp and learned I needed to load the separate mmproj part).
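For anyone hitting the same issue, a sketch of what that looks like with llama.cpp (the file names here are hypothetical; the point is the separate `--mmproj` projector file):

```shell
# Without --mmproj, llama.cpp loads only the text weights and the
# model appears not to support image input at all.
llama-server \
  -m Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --mmproj mmproj-Qwen3.5-35B-A3B-F16.gguf \
  --port 8080
```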

by upboundspiral

4/19/2026 at 3:00:00 PM

What is the quantization level of your Qwen 3.6 35B-A3B model?

by m-emre

4/16/2026 at 8:09:59 PM

You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model, or against small frontier models like Haiku, Flash, or GPT nano.

by __natty__

4/16/2026 at 8:21:07 PM

Not when the article they're commenting on was doing literally exactly the same thing.

by javawizard

4/16/2026 at 8:11:22 PM

Eh it’s important perspective, lest someone start thinking they can drop $5k on a laptop and be free of Anthropic/OpenAI. Expensive lesson.

by ericd

4/16/2026 at 7:18:24 PM

I understand the 'fun factor', but at this point I really wonder what this pelican still proves. I mean, providers certainly could have adapted for it if they wanted to, and if you want to test how well a model handles potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activities (a whale on a skateboard) than to always use the same one.

by mentalgear

4/16/2026 at 7:35:57 PM

That's why I did the flamingo on a unicycle.

For a delightful moment this morning I thought I might have finally caught a model provider cheating by training for the pelican, but the flamingo convinced me that wasn't the case.

by simonw

4/16/2026 at 8:07:14 PM

It is completely wild to me that you prefer Qwen's flamingo. I think it's really bad and Opus' is pretty good.

by furyofantares

4/16/2026 at 8:09:08 PM

The Opus one doesn't even have a bowtie.

by simonw

4/16/2026 at 8:40:45 PM

The Opus one looks like a flamingo, and looks like it's riding the unicycle. Sitting on the seat. Feet on the pedals.

The Qwen one looks like a 3-tailed, broken-winged, beakless (I guess? Is that offset white thing a beak? Or is it chewing on a pelican feather like it's a piece of straw?) monstrosity not sitting on the seat, with its one foot off the pedal (the other chopped off at the knee) of a malmanufactured wheel that has bonus spokes that are longer than the wheel.

But yeah, it does have a bowtie and sunglasses that you didn't ask for! Plus it says "<3 Flamingo on a Unicycle <3", which perhaps resolves all ambiguity.

by furyofantares

4/16/2026 at 9:59:01 PM

Let's not oversell Opus' output. The Qwen flamingo is flawed but could be easily fixed with 1-2 prompts if you're really upset with it. The Opus SVG is not any better than something that I could make in Inkscape with 3 minutes and sufficient motivation. Calling Opus' flamingo "programmer art" would be an insult to programmers.

by bigyabai

4/16/2026 at 9:39:26 PM

Game over, Opus.

by monksy

4/16/2026 at 7:47:08 PM

To me the opus flamingo is waaaay better than the qwen one. qwen has the better pelican, though.

by prodigycorp

4/16/2026 at 7:50:54 PM

Is a flamingo on a unicycle not merely a special case of a pelican on a bicycle?

by dude250711

4/17/2026 at 1:48:22 AM

If I (commercially) made models I’d put specific care into producing SVGs of various animals doing (riding) various things ... I find it interesting how confident you seem to be that they’re not.

by solarkraft

4/16/2026 at 9:38:12 PM

This is a gag that's long outlived its humor, but we're in a space so driven by hype there are people who will unironically take some signal from it. They'll swear up and down they know it's for fun, but let a great pelican come out and see if they don't wave it as proof the model is great alongside their carwash test.

by BoorishBears

4/16/2026 at 11:06:11 PM

Consider reading the article, which addresses all of the points you raise.

It's directly stated in the post that the entire test is meant to be humorous and not taken seriously, only that it has vaguely tracked model performance to date. The author also writes that this new result shows that trend has broken.

by luyu_wu

4/17/2026 at 6:51:30 AM

Yeah, I can imagine these popular benchmarks get special treatment in the training of new models. I wonder how they would perform for "elephant riding a car" or "lion sleeping in a bed".

by gistscience

4/16/2026 at 8:45:53 PM

Such a disconnect from the minutes I lost today before giving up on Gemini, trying to get it to update a diagram in a slide. The one-shot joke stuff is great, but trying to say “that is close, but just make this small change” seems impossible. It’s the gap between toy and tool.

by wood_spirit

4/17/2026 at 7:22:55 AM

I swear, every single time someone says "my laptop" on Hacker News, it's some insane MacBook that is more powerful than 98% of the computers out there.

by big-chungus4

4/16/2026 at 8:44:26 PM

I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?

by sailingcode

4/16/2026 at 8:55:29 PM

You should have the pelican ride it to the carwash and wash it for you.

by layer8

4/16/2026 at 8:46:56 PM

That’s a long walk! You should reserve a ride with $PartnerRideshareCo.

by DANmode

4/17/2026 at 7:24:35 AM

[dead]

by ucyo

4/17/2026 at 1:52:33 AM

You can just straight up ask Opus if it's good at generating images and it will say no. It has never been marketed as being for image generation.

by ralph84

4/17/2026 at 2:21:22 AM

Claude is actually very good at SVGs, and it's genuinely useful. I have Claude knock out little SVG icons all the time.

Illustrations with SVGs of pelicans riding bicycles will never be useful, because pelicans can't ride bicycles.

by simonw

4/17/2026 at 7:26:00 AM

[dead]

by th0ma5

4/17/2026 at 2:20:53 AM

More and more, I suspect OpenAI is generating comments on HN to try to shift the discussion.

I’m not sure you’re a bot, but this is the stereotypical comment: overly critical of anything where OpenAI is not superior, or overly supportive (see comments on the Codex post today), while clearly not understanding the discussed topic at all.

by henry2023

4/17/2026 at 3:23:55 AM

His account is from 2016.

This is not a refutation of astroturfing on HN generally, but in this case, I doubt it.

by SJMG

4/16/2026 at 7:52:34 PM

That's not surprising; Opus & Sonnet have been regressing on many non-coding tasks since about the 4.1 release in our testing

by VHRanger

4/16/2026 at 9:52:04 PM

I don't know what such a demo would prove in the first place. LLMs are good at things that they have been trained on, or are analogues of things they have been trained on. SVG generation isn't really an analogue to any task that we usually call on LLMs to do. Early models were bad at it because their training only had poor examples of it. At a certain point model companies decided it would be good PR to be halfway decent at generating SVGs, added a bunch of examples to the finetuning, and voila. They still aren't good enough to be useful for anything, and such improvements don't lead them to be good at anything else - likely the opposite - but it makes for cute demos.

I guess initially it would have been a silly way to demonstrate the effect of model size. But the size of the largest models stopped increasing a while ago, recent improvements are driven principally by optimizing for specific tasks. If you had some secret task that you knew they weren't training for then you could use that as a benchmark for how much the models are improving versus overfitting for their training set, but this is not that.

by f33d5173

4/17/2026 at 12:04:07 AM

On reflection, one reason this may be something at least slightly more than training on the task is the richness with which language is filled with spatial metaphors, even in basic language that laymen wouldn't consider metaphor outside the field of linguistics proper, where concepts like Lakoff's analysis in "Metaphors We Live By" are simply part of the field (though, unsurprisingly, I've occasionally seen it brought up among the HN crowd).

The amount of money you have in the bank may often "increase" or "decrease", but it also goes up and down: spatial. Concepts can be adjacent to each other, or orthogonal. Plenty more.

So, as models utilize their weights more densely, with more complex strategies learned during training, the patterns and structure of these metaphors might also be deepened. Hmmm... another thing to add to the heap of future projects: trace the geometry of activations in older and newer models of similar size, using the same prompts containing such metaphors (or these pelican prompts), and test the idea so it isn't just armchair speculation.

by ineedasername

4/18/2026 at 3:57:37 AM

> I’m giving this one to Qwen too, partly for the excellent <!-- Sunglasses on flamingo! --> SVG comment

You say you like the one from Qwen better, and the only reason you give has nothing to do with the task.

In general, one should specify the expected properties of the image before the experiment. One important property should be "does not hallucinate things into the image that are unrelated to the prompt".

by bulbar

4/17/2026 at 12:11:49 AM

Maybe the next time we suspect they're optimising for the test, switch the next test to drawing "the cure for cancer".

by Quarrelsome

4/16/2026 at 6:50:34 PM

I've been using Qwen3.5-35B-A3B for a bit via OpenCode and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and how well it handles the agentic workflow.

by comandillos

4/16/2026 at 7:11:58 PM

This is about the newly released Qwen3.6. Just wanted to make sure you caught that.

by iib

4/17/2026 at 1:01:06 PM

[dead]

by maltyxxx

4/16/2026 at 7:52:38 PM

I'm really curious about what competes with Claude Code to drive a local LLM like Qwen 3.6?

by aliljet

4/16/2026 at 10:29:01 PM

OpenCode or Pi are popular agent harnesses. Lots of IDEs integrate LLMs now. I believe there’s also a Qwen Code that exists, but I have yet to try it.

by chabes

4/16/2026 at 8:00:21 PM

OpenCode?

by smashed

4/16/2026 at 8:29:31 PM

I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews.

It's pretty good at finding bugs, but not so good at writing patches to fix them.

by jedisct1

4/17/2026 at 1:38:57 AM

This is a useless benchmark nowadays; every model provider trains their models on making good pelicans. Some have even trained on every combination of animal and mode of transportation.

by quux

4/17/2026 at 2:43:25 AM

Every model provider except OpenAI?

by henry2023

4/17/2026 at 1:47:48 AM

Wonder what would happen if we unleashed Karpathy’s autoresearch on the pelican bicycle test. And had it read back the image to judge it.

Oh maybe it might continue to iterate on the existing drawing?

by atonse

4/16/2026 at 8:09:09 PM

That Qwen flamingo on the unicycle is actually quite good. A work of art.

by lofaszvanitt

4/16/2026 at 9:39:48 PM

I liked both of Opus's better. It was very illuminating: in both cases I didn't see the errors Simon saw, and wondered why Simon skipped over the errors I saw.

Pelican: saturated!

by refulgentis

4/17/2026 at 12:50:52 PM

How much RAM on the MacBook?

God bless these open models. Claude can’t subsidize its users forever, and no one can afford $1,200 a month for LLM credits.

by 999900000999

4/17/2026 at 12:52:59 PM

> no one can afford $1,200 a month for LLM credits

you'd be surprised....

by bdangubic

4/17/2026 at 1:31:53 PM

A Blackwell Pro is only 10k.

Will Claude consistently be able to deliver more value than rolling your own?

I think the future is a bunch of just-good-enough models, which is what most people need, not top-of-the-line models that require millions in hardware to run.

by 999900000999

4/17/2026 at 2:19:55 PM

Not that I disagree with you in principle, but I see this the same way as "cloud": tens of thousands of companies could save a gazillion dollars by hosting their own infrastructure, and yet they continue to pay insane amounts of money to the AWSes and Azures and whatnot. While some companies' future may well be running local models, I would venture a guess that the vast majority will just eat the costs and pass on as much of them as they can to their customers.

by bdangubic

4/17/2026 at 4:52:55 PM

Hmm, can we agree open models place a sort of price ceiling on what most companies will pay?

Eventually another cloud provider can just spin up a few LLMs vs. paying whatever Claude demands.

by 999900000999

4/16/2026 at 8:59:03 PM

I really wish they spent some time training for computer use. This model is incapable of finding anywhere near the correct x,y coordinate of a simple object in a picture.

by bottlepalm

4/16/2026 at 8:48:43 PM

FYI, using a 128GB M5 MacBook Pro, sourced from another article by the author.

by JaggerFoo

4/16/2026 at 11:51:45 PM

Between the legs and the beak, I'd still rate the Opus pelican higher.

by Havoc

4/17/2026 at 7:07:18 AM

LLMs are really causing serious brainrot if HTML pelican drawings are a usage basis for your programming projects. Even all these shitty benchmarks don't say or mean anything if companies secretly tweak for them on the go.

by hopinhopout

4/17/2026 at 7:11:05 AM

Most of the 'coding benchmarks' are deeply flawed too. This one at least makes it explicit.

And so far, the ability to make SVGs of $animal on $vehicle seems to correlate surprisingly well with model 'intelligence'.

by wongarsu

4/19/2026 at 9:02:06 AM

Why is that flamingo in the Qwen one smoking?

by stevefan1999

4/16/2026 at 10:23:09 PM

All those models that were just at version 1.x in 2024.

That’s so wild.

by yieldcrv

4/16/2026 at 9:32:13 PM

I love this benchmark!

by justinbaker84

4/16/2026 at 11:04:03 PM

Looks like Opus has been nerfed from day 1.

by kburman

4/16/2026 at 9:47:38 PM

Good reminder that these tests have always been useless, even before they started training on it.

by nba456_

4/16/2026 at 7:40:44 PM

How about switching to MechaStalin on a tricycle? It gets kind of boring.

by 19qUq

4/16/2026 at 8:03:10 PM

Boring? The ways all the models fail at a simple task never get boring to me.

by mvanbaak

4/17/2026 at 12:52:59 AM

[dead]

by tmatsuzaki

4/17/2026 at 10:17:07 AM

[dead]

by aimadetools

4/16/2026 at 9:55:59 PM

[flagged]

by whywhywhywhy

4/16/2026 at 10:04:27 PM

If they're testing against it why do most of their attempts suck so much?

by simonw

4/16/2026 at 11:16:23 PM

[flagged]

by smcl

4/16/2026 at 9:11:03 PM

[flagged]

by simon_is_genius

4/16/2026 at 8:40:08 PM

I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.

by throwuxiytayq

4/16/2026 at 8:44:02 PM

It feels like the results stopped being interesting a little while ago, but the practice has become part of simonw's brand. It gives him something to post even when there's nothing interesting to say about another incremental improvement to a model, so I don't imagine he'll stop.

by sharkjacobs

4/16/2026 at 9:26:43 PM

I, for one, expected progress. Uneven, sometimes delayed, but ever increasing progress.

But that Opus pelican?

by stephbook

4/16/2026 at 10:22:29 PM

It’s not a waste of time. As the boundaries of AI are pushed we increasingly struggle to define what intelligence actually is. It becomes more useful to test what models cannot do instead of what they can. Random tasks like the pelican test can show how general the intelligence really is, putting aside the obvious flaw that the labs can optimise for such a simple public benchmark.

by cedws

4/17/2026 at 7:25:53 AM

The whole point of this benchmark is that it asks the model to work in a modality it is not trained in and does not understand well. The result is largely meaningless. This is just like the people who are endlessly surprised by the fact that a raw LLM does not work with numbers well, or miscounts letters. In short, this test benchmarks the intelligence of the person running it, not of the model.

by throwuxiytayq

4/19/2026 at 6:53:10 PM

The rasterised SVG is just a different representation of the same data. A sufficiently advanced LLM may not need to 'see' the rasterised image to be able to draw a good picture. A human could draw a very basic image through raw SVG just by mentally plotting points.
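As a toy illustration of that point (the shapes and coordinates below are entirely made up), an SVG scene can be assembled as plain text, with the "drawing" done purely by choosing coordinates:

```python
# Compose a trivial SVG as a string: no renderer involved, the picture
# exists only as mentally plotted coordinates.
def basic_svg(width=200, height=200):
    shapes = [
        # ground line across the bottom of the canvas
        '<line x1="0" y1="180" x2="200" y2="180" stroke="black"/>',
        # unicycle wheel: a circle resting on the ground line
        '<circle cx="100" cy="150" r="30" fill="none" stroke="black"/>',
        # rider's body: a vertical line rising from the wheel hub
        '<line x1="100" y1="150" x2="100" y2="80" stroke="pink" stroke-width="6"/>',
        # head: a filled circle on top of the body
        '<circle cx="100" cy="70" r="12" fill="pink"/>',
    ]
    body = "\n  ".join(shapes)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n  {body}\n</svg>')

svg = basic_svg()
print(svg)
```

Whether the result looks like a flamingo is exactly the part the model has to get right without ever seeing it.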

by cedws

4/16/2026 at 9:57:24 PM

Fun is so un-productive. Everyone doing things for "fun" is going to be sorry when they look back and realizes they were wasting time having a "good time" rather than optimizing their KPIs.

by recursive

4/17/2026 at 7:22:20 AM

Sarcasm aside, asking LLMs to draw pelicans is your idea of fun? I'm worried for you.

by throwuxiytayq

4/18/2026 at 3:46:33 AM

No. I've never done it. However the stuff I do is even weirder. Thanks for your concern.

by recursive

4/17/2026 at 7:15:05 PM

That's what happens in a monopoly environment. Literally everyone and every company becomes dancing monkeys for teracaps.

by casey2

4/16/2026 at 9:29:13 PM

I can't believe you're such a party pooper. It's exciting times, the silly things do matter!

by segmondy

4/17/2026 at 1:55:06 AM

I do wonder how much energy collectively has been burned on this useless "benchmark".

by bschwindHN

4/16/2026 at 11:44:24 PM

I also can't understand how this goes so viral every time on Hacker News, lol.

by Marciplan