alt.hn

3/19/2026 at 4:01:30 PM

Launch HN: Canary (YC W26) – AI QA that understands your code

by Visweshyc

3/19/2026 at 5:02:03 PM

I really want automated QA to work better! It's a great thing to work on.

Some feedback:

- I definitely don't want three long new messages on every PR. Max 1, ideally none? Codex does a great job just using emoji.

- The replay is cool. I don't make a website, so maybe I'm not the target market, but I'd like QA for our backend.

- Honestly, I'd rather just do one massive QA run every day and have any failures bisected, rather than per-PR.

- I am worried that there's not a lot of value beyond the intelligence of the foundation models here.

by blintz

3/20/2026 at 3:25:03 PM

This benchmark measures whether tests are relevant, coherent, and have good coverage. But there's a more subtle type of error: the AI creates tests that look specific to the PR but are actually generic patterns mapped from the training data: correct test structure, reasonable assertions, but not actually interacting with what this specific piece of code does.

How do you differentiate between "understood the code and generated a targeted test" and "recognized this looks like an auth flow and produced a standard auth test template"? The latter might still pass your coherence/relevance metrics while missing the actual exception.

by thienannguyencv

3/19/2026 at 5:37:47 PM

Agree on your last point, and it's going to be a very bitter lesson. In any case, you probably want to shift a lot of the code verification as far left as possible, so doing review at PR time isn't the right strategy imo. And Claude/Codex are well positioned to do the local review.

by Bnjoroge

3/20/2026 at 6:54:56 PM

Agree on the shift-left concept, but curious on your thoughts about a checker-maker loop. Running a PR review bot is different from running /review on local dev, right? And there have already been instances of Claude patching the test scripts instead of fixing the bugs to make the tests pass.

by ashgam

3/20/2026 at 1:27:46 AM

[flagged]

by arkheosrp26

3/20/2026 at 5:14:17 AM

Isn’t the last point the case with every AI startup? Nobody has a moat and it’s tough to build one because the playing field is so level.

by monkpit

3/20/2026 at 11:47:30 AM

I've been confused by this with many LLM products in general. Sometimes infrastructure is part of it so there's that, but often it seems like the product is a magic incantation of markdown files.

by _heimdall

3/20/2026 at 6:57:10 PM

Solving for infrastructure is a huge part of the problem too. Curious to hear what you think about it.

by ashgam

3/20/2026 at 7:15:54 PM

Here I'm mostly considering the seemingly countless services that are little more than some markdown files and their own API passing data to/from the LLM provider's API.

By no means is that every AI product today, and I wasn't saying the OP's QA service falls into that bucket.

More of a general comment related to the GP, maybe too off topic here though?

by _heimdall

3/19/2026 at 5:13:06 PM

Thanks for the feedback!

- Agreed that the form factor can be condensed, with a link to detailed information.

- With the codebase understanding, backend is where we are looking to expand and provide value.

- The intelligence of the models lays the foundation, but combining the strengths of these models unlocks a system of specialized agents that each reason about the codebase differently to catch the unknown unknowns.

by Visweshyc

3/20/2026 at 4:52:48 AM

The interesting question to me is not whether the system can generate a plausible PR-time test, but whether the useful ones survive after the PR is gone. If Canary catches a real regression, how often can that check be promoted into a stable long-lived regression test without turning into a flaky, environment-coupled browser script? That conversion rate feels closer to the real moat than the generation demo.

by pastescreenshot

3/20/2026 at 6:32:40 AM

Good point. To keep the regression tests reliable as the app evolves, we run a reliability cascade. First, we generate and execute deterministic Playwright scripts from the codebase. If execution fails, we fall back to the DOM and aria tree. If that still fails, we fall back to vision agents that verify what the user actually sees before flagging a drift in the application behavior.
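The cascade described above can be sketched as a simple ordered-fallback loop. Everything below is a toy illustration under assumed names (the tier functions stand in for a real Playwright run, a DOM/aria re-location pass, and a vision agent), not Canary's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TierResult:
    tier: str  # which execution strategy produced this result
    ok: bool   # whether the check passed at that tier

def run_cascade(tiers: List[Tuple[str, Callable[[], bool]]]) -> TierResult:
    """Try each execution tier in order and return the first success.
    If every tier fails, the final failure is what gets flagged as a
    drift in application behavior."""
    result = TierResult(tier="none", ok=False)
    for name, run in tiers:
        result = TierResult(tier=name, ok=run())
        if result.ok:
            return result
    return result  # all tiers failed: treat as behavioral drift

# Hypothetical run: the scripted Playwright tier fails (say, a selector
# changed), re-locating the target via the DOM/aria tree succeeds, and
# the vision tier is never reached.
outcome = run_cascade([
    ("playwright_script", lambda: False),
    ("dom_aria_tree", lambda: True),
    ("vision_agent", lambda: True),
])
print(outcome.tier, outcome.ok)  # dom_aria_tree True
```

The point of ordering the tiers this way is that each fallback trades determinism for tolerance: the scripted tier is cheapest and most precise, while the vision tier is slowest but closest to what a user actually sees.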

by Visweshyc

3/19/2026 at 7:39:41 PM

The market timing on this is perfect - it fills a major current gap I've seen emerging.

I've heard a few stories of QA departments being near-burnout due to the increased rate developers are shipping at these days. Even we're looking for any available QA resources we can pull in here.

No harm meant with the question - but what's the advantage over Claude Code + the GitHub integrations?

by recsv-heredoc

3/19/2026 at 8:54:55 PM

We evaluated test generation using Claude Code and our purpose-built harness, and measured the quality of the tests in catching the unknown unknowns. We noticed Claude Code misses the second-order effects that actually break applications. You also need infrastructure to execute the tests: browser fleets, ephemeral environments, and data seeding all need to be handled.

by Visweshyc

3/19/2026 at 4:59:16 PM

Good work. But what makes this different than just another feature in Gemini Code assist or Github copilot?

by warmcat

3/19/2026 at 6:10:34 PM

Thanks! To execute these tests reliably you would need custom browser fleets, ephemeral environments, data seeding, and device farms.

by Visweshyc

3/20/2026 at 5:06:05 AM

If that's what you guys are bringing, you should put that more up front; focus on making it clear you're providing ingredients that Claude et al will not be providing on their own without Real Actual Software to do it.

by mikestorrent

3/20/2026 at 7:21:23 AM

Fair feedback. Will make that clearer. Appreciate it

by Visweshyc

3/19/2026 at 5:06:01 PM

Not a direct competitor but another YC company I use and enjoy for PR reviews is cubic.dev. I like your focus on automated tests.

by solfox

3/19/2026 at 5:19:13 PM

Thanks! We believe executing the scenarios and showing what actually broke closes the loop

by Visweshyc

3/19/2026 at 5:36:14 PM

what kinds of tests does it generate and how's this different from the tens of code review startups out there?

by Bnjoroge

3/19/2026 at 6:23:21 PM

The system focuses on going beyond the happy path, generating edge-case tests that try to break the application. For example, a Grafana PR added visual drag feedback to query cards. The system came up with an edge case like: does drag feedback still work when there's only one card in the list, with nothing to reorder against?
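The single-card case above is a classic boundary value. As a toy sketch of that kind of boundary enumeration (the function and scenario wording are hypothetical, not Canary's actual generator):

```python
def edge_case_counts(typical: int) -> list:
    """Card counts worth exercising for a reorderable list: an empty
    list, a single card (nothing to reorder against), and a typically
    populated list."""
    return [0, 1, typical]

# Enumerate drag-feedback scenarios at each boundary.
scenarios = [
    f"drag a query card when the list has {n} card(s)"
    for n in edge_case_counts(5)
]
print(scenarios[1])  # drag a query card when the list has 1 card(s)
```

The hard part, of course, is not listing boundaries but knowing from the codebase which parameter of a change (here, the card count) is worth pushing to its extremes.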

by Visweshyc

3/19/2026 at 5:03:52 PM

Looks interesting! Looks like perhaps no support for Flutter apps yet?

by solfox

3/19/2026 at 5:31:05 PM

Yes, we currently support web apps but plan to extend the foundation to test mobile applications on device emulators.

by Visweshyc

3/19/2026 at 8:23:09 PM

[flagged]

by opensre

3/20/2026 at 5:59:36 AM

[dead]

by tgtracing

3/20/2026 at 3:42:24 AM

- there are at least ten dozen code review startups at this point, and I see a new one on YC every week

- what is your differentiator?

by vivzkestrel

3/20/2026 at 7:26:33 AM

We see this as different from review. The system generates tests to catch second-order effects and executes them against the live application to expose bugs.

by Visweshyc