3/27/2026 at 7:34:31 PM
I've been working on legislative data for 15 years now, both on open source scrapers with OpenStates and running a commercial product targeted at professionals (a competitor to those in the article). With OpenStates we tried for years to run a free legislative tracking product before eventually partnering with a commercial provider willing to contribute the resources to keep it alive and help out with the open source pieces (shout out to Plural, nice folks).
Believe me when I say that this space is a classic nerd tar pit. It looks like a relatively easy problem: a few hundred scrapers, search, and some basic CRM functionality, and you're off to the races.
The problem is that behind the scenes the data is very complicated, and the sources constantly change and break in goofy ways. You need to be running hundreds of scrapers constantly (many of them against Akamai or Cloudflare), and working around new source website bugs or procedural edge cases every week. It doesn't scale like product or web search, where you can just ignore broken pages; the penalty for missing things is too high. Tuning your workflow so people find what they need without getting buried is tough, because there are tens of thousands of bills a session about things people think they care about, like "AI" or "taxes". On top of that, the low- or zero-budget clientele is often that mix of high expectations and low domain knowledge that makes them a big support burden.
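That "can't ignore broken pages" constraint is the key operational difference from web search. A minimal sketch of what it implies (all names here are hypothetical, not OpenStates code): the runner has to collect and surface every scraper failure for a human, rather than silently skipping it the way a crawler would.

```python
from dataclasses import dataclass, field


@dataclass
class ScrapeReport:
    # Every failure must be triaged by a human; missing a bill is not acceptable.
    succeeded: list = field(default_factory=list)
    failed: list = field(default_factory=list)


def run_scrapers(scrapers):
    """Run every scraper; collect failures instead of dropping them."""
    report = ScrapeReport()
    for name, scrape in scrapers.items():
        try:
            report.succeeded.append((name, scrape()))
        except Exception as exc:  # a source broke in some new goofy way
            report.failed.append((name, str(exc)))
    return report


def broken_source():
    # Stand-in for a scraper hitting a redesigned or bot-blocked site.
    raise RuntimeError("page layout changed")


# Illustrative stand-ins for real per-state scrapers:
scrapers = {
    "ny_bills": lambda: ["S1234", "A5678"],
    "tx_bills": broken_source,
}

report = run_scrapers(scrapers)
# report.failed now holds ("tx_bills", ...) for a human to investigate,
# instead of tx_bills quietly vanishing from the results.
```

The point of the sketch is the alerting posture, not the scraping itself: with hundreds of sources, the default behavior on error has to be "page someone", not "move on".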
FiscalNote burned 750 million dollars in VC money on this and just went under this week, granted, with a series of spectacular own-goals.
I wish this author the best of luck, and if you want to team up on scrapers please give us a shout. But please be aware that you're promising the moon, and try to build a model that will be financially and effort-sustainable. Keeping this stuff going is a _slog_. I'm really hoping that someone can bring the professional level tools to normal people.
by showerst