alt.hn

3/6/2026 at 6:47:29 AM

Show HN: 1v1 coding game that LLMs struggle with

https://yare.io

by levmiseri

3/7/2026 at 2:48:29 AM

Cool!

From the prompt it looks like you don’t give the LLMs a harness to step through games or simulate - is that correct? If so, I’d suggest it’s not a level playing field vs. human-written bots - assuming the humans are allowed to watch some games, that is.

by vessenes

3/7/2026 at 3:02:52 AM

That’s true - I’m trying to figure out a better testing environment with a feedback loop.

I did try letting the models iterate on the bot code based on a summary of an end-of-game ‘report’, but that showed only marginal improvements vs. zero-shot.
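
Roughly, the loop looked like this (a simplified sketch - the report shape and both callbacks are stand-ins, not the actual yare.io API):

```typescript
// Simplified sketch of a report-driven iteration loop.
// GameReport's fields, runMatch, and callModel are hypothetical stand-ins.
interface GameReport {
  won: boolean;
  ticks: number;   // how long the match lasted
  summary: string; // natural-language recap fed back to the model
}

async function iterateBot(
  initialCode: string,
  rounds: number,
  runMatch: (code: string) => Promise<GameReport>, // plays one match
  callModel: (prompt: string) => Promise<string>,  // asks the LLM for revised code
): Promise<string> {
  let code = initialCode;
  for (let i = 0; i < rounds; i++) {
    const report = await runMatch(code);
    if (report.won) break; // stop iterating once the bot wins
    // Feed the end-of-game report back and ask for a revised bot.
    code = await callModel(
      `Your bot lost after ${report.ticks} ticks.\n` +
      `Match report: ${report.summary}\n` +
      `Revise this bot code and return the full source:\n${code}`
    );
  }
  return code;
}
```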

by levmiseri

3/7/2026 at 2:40:43 PM

In my mind, I’d give it the following:

Step(n) - up to n steps forward

RunTil(movement|death|??) - iterate until something happens

Board(n) - board at end of step n

BoardAscii(n) - ASCII rep of the same

Log(m,n) - log of what happened between step m and n

Probably all this could be accomplished with a state structure and a rendering helper - something like the sketch below.
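
Illustrative types only, not tied to yare.io’s actual internals:

```typescript
// A possible harness interface for stepping through games.
// All types and event names here are placeholders.
type StopEvent = "movement" | "death";

interface BoardState {
  tick: number;
  units: { id: string; x: number; y: number; hp: number }[];
}

interface Harness {
  step(n: number): void;               // advance up to n steps forward
  runTil(event: StopEvent): void;      // iterate until something happens
  board(n: number): BoardState;        // board state at end of step n
  boardAscii(n: number): string;       // ASCII rendering of the same
  log(m: number, n: number): string[]; // what happened between steps m and n
}
```

Exposed as tool calls, something like this would let a model actually watch what its bot does between matches instead of reasoning blind.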

Do you let humans review the opposing team’s code?

by vessenes

3/6/2026 at 10:18:21 AM

Cool project - this is my first time seeing a project like this use LLMs. It took me a while to understand what’s happening on the home page.

A question, though: why did such powerful models like Gemini 3.1 fail against the Clowder bot? Is it because of inefficient code, or did the LLMs not handle edge cases? Or are they just not as good as humans when it comes to strategy?

by javadhu

3/6/2026 at 10:50:36 AM

Honestly, I’m not sure. It could be some combination of the LLMs’ poor spatial reasoning and the lack of any training data for this specific challenge.

You can see replays for all of the matches if you hover over the cells in the table.

by levmiseri

3/8/2026 at 8:24:37 AM

You should check out codingame.com. It has similar battle-based objectives.

by neondude

3/7/2026 at 8:44:48 AM

LLMs need feedback on the outcomes, just like a human does.

by DeathArrow