4/2/2025 at 12:42:06 PM
This has to be one of the strangest targets to crawl, since they themselves make database dumps available for download (https://en.wikipedia.org/wiki/Wikipedia:Database_download) and, if that wasn't enough, there are third-party dumps as well (https://library.kiwix.org/#lang=eng&category=wikipedia) that you could use if the official ones aren't good enough for some reason.
Why would you crawl the web interface when the data is so readily available in an even better format?
by diggan
4/2/2025 at 12:56:56 PM
I've written some unfathomably bad web crawlers in the past. Indeed, web crawlers might be the most natural magnet for bad coding and eye-twitchingly questionable architectural practices I know of. While it likely isn't the major factor here, I can attest that there are coders who see pages-articles-multistream.xml.bz2 and then reach for a wget + HTML parser combo.
If you don't live and breathe Wikipedia, it is going to soak up a lot of time figuring out Wikipedia's XML format and markup language, not to mention re-learning how to parse XML. HTTP requests and bashing through the HTML are everyday web skills and familiar scripting, more reflexive and better understood. The right way would probably be much easier, but figuring it out feels like it would take too long.
Although that is all pre-ChatGPT logic. Now I'd start by asking it to solve my problem.
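For what it's worth, the XML wrapper is less work than it looks; the markup language is the genuinely fiddly part. A rough sketch of streaming pages out of the dump, assuming Python's standard library and the usual MediaWiki export layout (page/title and page/revision/text elements); exact tag names can vary between dump versions, so treat it as illustrative:
import bz2
import xml.etree.ElementTree as ET
# Stream (title, wikitext) pairs out of a pages-articles dump without
# loading the whole file into memory.
def iter_pages(path="enwiki-latest-pages-articles-multistream.xml.bz2"):
    with bz2.open(path, "rb") as f:
        title = None
        for _, elem in ET.iterparse(f, events=("end",)):
            tag = elem.tag.rsplit("}", 1)[-1]  # drop the export namespace prefix
            if tag == "title":
                title = elem.text
            elif tag == "text":
                yield title, elem.text or ""
            elif tag == "page":
                elem.clear()  # release the finished page
for title, wikitext in iter_pages():
    print(title, len(wikitext))
    break
The wikitext you get back still needs a wikitext parser if you want plain prose, which is where the real time sink is.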
by roenxi
4/2/2025 at 1:07:41 PM
You don't even need to deal with any XML formats or anything; they publish a complete dataset on Huggingface that's just a few lines to load in your Python training script.
by a2128
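For scale, "a few lines" really is a few lines. A hedged sketch assuming the Hugging Face datasets library and the Wikimedia-published dataset name and fields as they exist at the time of writing; check the hub for the current snapshot identifier:
from datasets import load_dataset
# One English snapshot, pre-cleaned to plain text (fields include title and text).
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
print(wiki[0]["title"])
print(wiki[0]["text"][:200])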
4/2/2025 at 1:13:14 PM
To be a "good" web crawler, you have to go beyond "not bad coding". If you just write the natural "fetch page, fetch next page, retry if it fails" loop, notably missing any sort of wait between fetches so that you fetch as quickly as possible, you are already a pest. You don't even need multiple threads or machines to be a pest; a single machine on a home connection fetching pages as quickly as it can is already a pest to a website with heavy backend computation or DB demands. Do an equally naive "run on a couple dozen threads" upgrade to your code and you expand the blast radius of your pestilence out to even more websites.
Being a truly good web crawler takes a lot of work, and being a polite web crawler takes yet more, different work.
And then, of course, you add the bad coding practices on top of it, ignoring robots.txt or using robots.txt as a list of URLs to scrape (which can be either deliberate or accidental), hammering the same pages over and over, preferentially "retrying" the very pages that are timing out because you found the page that locks the DB for 30 seconds in a hard query that even the website owners themselves didn't know was possible until you showed them by taking down the rest of their site in the process... it just goes downhill from there. Being "not bad" is already not good enough and there's plenty of "bad" out there.
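To make the bar concrete, here is a hypothetical sketch of the minimum politeness described above: one request at a time, a fixed delay between fetches, robots.txt honored, and failing URLs dropped rather than hammered. It assumes Python's requests library; the bot name and delay are illustrative.
import time
import urllib.robotparser
from urllib.parse import urljoin, urlparse
import requests
USER_AGENT = "ExampleBot/0.1 (contact@example.com)"  # identify yourself
DELAY_SECONDS = 5.0
def polite_fetch(urls):
    robots = {}  # cache one robots.txt parser per origin
    for url in urls:
        origin = "{0.scheme}://{0.netloc}".format(urlparse(url))
        rp = robots.get(origin)
        if rp is None:
            rp = urllib.robotparser.RobotFileParser(urljoin(origin, "/robots.txt"))
            try:
                rp.read()
            except OSError:
                pass  # unreadable robots.txt: the parser will default to "no"
            robots[origin] = rp
        if not rp.can_fetch(USER_AGENT, url):
            continue  # disallowed: skip it, don't "retry harder"
        try:
            resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
            if resp.ok:
                yield url, resp.text
        except requests.RequestException:
            pass  # give up on this URL rather than retry in a tight loop
        time.sleep(DELAY_SECONDS)  # the single most important line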
by jerf
4/2/2025 at 2:40:04 PM
I think most crawlers inevitably tend to turn into spaghetti code because of the number of weird corner cases you need to deal with.
Crawlers are also incredibly difficult to test in a comprehensive way. No matter what test scenarios you come up with, there are a hundred more weird cases in the wild. (e.g. there's a world of difference between a server taking a long time to respond to a request and a server sending headers quickly but taking a long time to send the body)
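As a concrete illustration of that last case: in Python's requests, the timeout parameter bounds individual socket reads, not the whole response, so a server that trickles the body out can hold a naive crawler open more or less indefinitely. A hedged sketch of enforcing a wall-clock deadline yourself:
import time
import requests
def fetch_with_deadline(url, deadline_seconds=30, max_bytes=5_000_000):
    start = time.monotonic()
    body = bytearray()
    # timeout=(connect, read) only limits each individual read, not the total
    with requests.get(url, stream=True, timeout=(5, 10)) as resp:
        for chunk in resp.iter_content(chunk_size=16_384):
            body.extend(chunk)
            if time.monotonic() - start > deadline_seconds:
                raise TimeoutError("body took too long to arrive")
            if len(body) > max_bytes:
                raise ValueError("response larger than expected")
    return bytes(body)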
by marginalia_nu
4/2/2025 at 8:28:50 PM
I thrive on these kinds of moving-target challenges. But nobody will hire me.
by joquarky
4/2/2025 at 1:05:50 PM
You'd probably ask ChatGPT to write you a crawler for Wikipedia, without thinking to ask whether there's a better way to get Wikipedia info. So that download would be missed, because how and what we ask AI remains very important. Actually, this is not new: googling skills were recognized as important before, and even philosophers recognized that asking good questions was crucial.
by soco
4/2/2025 at 8:26:49 PM
> Why would you crawl the web interface when the data is so readily available in an even better format?
Have you seen the lack of experience that is getting through the hiring process lately? It feels like 80% of the people onboarding are only able to code to pre-existing patterns, without an ability to think outside the box.
I'm just bitter because I have 25 years of experience and can't even get a damn interview no matter how low I go on salary expectations. I obviously have difficulty in the soft skills department, but companies who need real work to get done reliably used to value technical skills over social skills.
by joquarky
4/2/2025 at 12:57:20 PM
Because the scrapers they use aren't targeted; they just try to index the whole internet. It's easier that way.
by Cthulhu_
4/2/2025 at 1:40:18 PM
While the dump may be simpler to consume, building around it isn't simpler.
The generic web crawler works (more or less) everywhere. The Wikipedia dump solution works only on Wikipedia dumps.
Also keep in mind: this is tied in with search engines and other places where the AI bot follows links from search results, etc. Thus they'd need extra logic to detect a Wikipedia link, find the matching article in the dump, and then add the original link back as a reference for the source.
Also, in one article on this I read about traffic spikes around people's deaths, etc.; in that scenario they want the latest version of the article, not a day-old dump.
So yeah, I guess they used the simple, straightforward way and didn't care much about the consequences.
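For the link-to-dump mapping mentioned above, the "extra logic" is admittedly small. A hypothetical sketch; dump_lookup stands in for however you indexed the dump locally:
from urllib.parse import unquote, urlparse
# Turn a live Wikipedia URL back into the title you'd look up in a local dump.
def wikipedia_title_from_url(url):
    parsed = urlparse(url)
    if not parsed.netloc.endswith(".wikipedia.org"):
        return None
    if not parsed.path.startswith("/wiki/"):
        return None
    slug = parsed.path[len("/wiki/"):]
    return unquote(slug).replace("_", " ")
title = wikipedia_title_from_url("https://en.wikipedia.org/wiki/Web_crawler")
# article = dump_lookup(title)   # hypothetical: your local dump index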
by johannes1234321
4/2/2025 at 1:54:46 PM
I'm not sure this is what is currently affecting them the most; the article mentions this:
> Since AI crawlers tend to bulk read pages, they access obscure pages that have to be served from the core data center.
So it doesn't seem to be driven by "search the web for keywords, follow links, slurp content" but by reading a bulk of pages all together, then moving on to another bulk of pages, which suggests mass ingestion rather than just acting as a user agent for an actual user.
But maybe I'm reading too much into the specifics of the article; I don't have any particular internal insight into the problem they're facing, I'll confess.
by diggan
4/2/2025 at 12:44:56 PM
I think most of these crawlers just aren't very well implemented. It takes a lot of time and effort to get crawling to work well; it's very easy to accidentally DoS a website if you don't pay attention.
by marginalia_nu
4/2/2025 at 12:57:19 PM
This is what you get when an AI generates your code and your prompts are vague.
by is_true
4/2/2025 at 1:34:02 PM
Vibe-coded crawlers.
by cowsaymoo
4/2/2025 at 12:59:28 PM
With the way transclusion works in MediaWiki, dumps and the wiki APIs are often not very useful, unfortunately.
by iamacyborg
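To illustrate the transclusion problem: a dump hands you raw wikitext with templates unexpanded, so much of the visible content simply isn't there. The Action API's action=parse call does expand transclusions, but only one page per request. A hedged sketch with the standard parameters (verify against the live API docs); the bot name is illustrative:
import requests
API = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "ExampleBot/0.1 (contact@example.com)"}
# Fetch the rendered HTML for one article, with templates/transclusions expanded.
def rendered_html(title):
    params = {
        "action": "parse",
        "page": title,
        "prop": "text",
        "format": "json",
        "formatversion": "2",
    }
    resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["parse"]["text"]
# The dump, by contrast, might give you little more than
# "{{Infobox settlement|...}}" for the same article.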
4/2/2025 at 12:56:11 PM
There are crawlers that will recursively crawl source repository web interfaces (cgit et al., usually expensive to render) despite there being a readily available URL they could clone from. At this point I'm not far from assuming malice over sheer incompetence.
by mzajc
4/2/2025 at 12:59:13 PM
> Why would you crawl the web interface when the data is so readily available in an even better format?
It's entirely possible they don't know about this. I certainly didn't until just now.
by MiscIdeaMaker99
4/2/2025 at 1:23:07 PM
Because AI and everything about it is about being as lazy as possible and wasting money and compute in service of becoming even lazier. No one who is willing to burn the compute necessary to train the big models we see would think twice about the wasted resources involved in using the most wasteful, least efficient means of collecting the data possible.
by skywhopper
4/2/2025 at 12:44:35 PM
I think those crawlers are just very generic: they basically operate like wget scripts, without much logic for avoiding sites that already offer clean data dumps.
by wslh
4/2/2025 at 12:46:24 PM
That is not an excuse. Wikipedia isn't just any site.
by ldng
4/2/2025 at 12:47:13 PM
Not an excuse, a plausible explanation of what's actually happening.
by wslh
4/2/2025 at 1:14:02 PM
Also, plausibly they are trying to kill the site via a soft DDoS. Then they can sell a service based on all the data they scraped, plus unauditable censoring.
by franktankbank
4/2/2025 at 1:33:53 PM
I was thinking the same thing. It might be the case that the scrapers are a result of the web search feature in LLMs?
by shreyshnaccount
4/2/2025 at 12:52:29 PM
> Why would you crawl the web interface when the data is so readily available in an even better format?
Because grifters have no respect or care for other people, nor are they interested in learning how to be efficient. They only care about the least amount of effort for the largest amount of personal profit. Why special-case Wikipedia when they can just scratch their balls and turn their code loose? It's not their own money they're burning anyway; there are more chumps throwing money at them than they know what to do with, so it's imperative they look competitive and hard at work.
by latexr
4/2/2025 at 1:04:19 PM
The vast, vast majority of companies using AI are on the same level as the people distributing malware to mine crypto on other people's machines. They're exploiting resources that aren't theirs to get rich quick from stupid investors & market hype. We all suffer so they can get a couple bucks. Thanks, AI & braindead investors. This bubble can't pop soon enough, and I hope it takes a whole lot of terrible people down with it.
by coldpie
4/2/2025 at 1:42:01 PM
The crawler companies just do not give a shit. They're running these crawl jobs because they can. The methodology is worthless and the data will be worthless, but they have so much compute relative to developer resources that it costs them more to figure out that the crawl is worthless (and figure out what isn't worthless) than it does to just do the crawl, throw away the worthless data at the end, and then crawl again. Meanwhile they perform the internet's most widespread DDoS (which is against the CFAA, btw, so if they caused actual damages to you, try suing them). I don't personally take issue with web crawling as a concept (how else would search engines work? oh, they don't work any more anyway), but the implementation is obviously a failure.
---
I've noticed one crawling my copy of Gitea for the last few months - fetching every combination of https://server/commithash/filepath. My server isn't overloaded by this. It filled up the disk space by generating every possible snapshot, but I count that as a bug in Gitea, not an attack by the crawler. Still, the situation is very dumb, so I set my reverse proxy to feed it a variant of the Wikipedia home page on every AI crawler request for the last few days. The variation has several sections replaced with nonsense, both AI-generated and not. You can see it here: https://git.immibis.com/gptblock.html
I just checked, and they're still crawling, and they've gone 3 layers deep into the image tags of the page. Since every URL returns that page if you have the wrong user agent, the images do too, but they happen to be at a relative path, so I know how many layers deep they're looking.
Interestingly, if you ask ChatGPT to evaluate this page (GPT interactive page fetches are not blocked) it says it's a fake Wikipedia. You'd think they could use their own technology to evaluate pages.
---
nginx rules for your convenience; be prepared to adjust the filters according to the actual traffic you see in your logs:
# serve the decoy page itself
location = /gptblock.html {root /var/www/html;}
# rewrite any request from a matching AI crawler user agent to the decoy
if ($http_user_agent ~* "https://openai.com/gptbot") {rewrite ^.*$ /gptblock.html last; break;}
if ($http_user_agent ~* "claudebot@anthropic.com") {rewrite ^.*$ /gptblock.html last; break;}
if ($http_user_agent ~* "https://developer.amazon.com/support/amazonbot") {rewrite ^.*$ /gptblock.html last; break;}
if ($http_user_agent ~* "GoogleOther") {rewrite ^.*$ /gptblock.html last; break;}
by immibis
4/2/2025 at 12:56:26 PM
> Why would you crawl the web interface when the data is so readily available in an even better format?
To cause deliberate harm, as a DDoS attack. Perhaps a better question is: why would companies who hope to replace human-curated static online information with their own generative service not use the cloak of "scraping" to take down their competition?
by nonrandomstring
4/2/2025 at 1:35:17 PM
This is the most reasonable explanation. Wikipedia is openly opposed by the current US administration, and 'denial of service' is key to their strategy (e.g. tariffs, removal of rights/due process, breaking net neutrality, etc.).
In the worst case, Wikipedia will have to require user login, which achieves the partial goal of making information inaccessible to the general public.
by concerndc1tizen
4/2/2025 at 3:15:27 PM
In the worst case, Wikipedia will have to relocate to Europe and block US network ASNs wholesale. But if the United States is determined to commit digital and economic suicide, I don't see how reasonable people can stop that.
by nonrandomstring
4/2/2025 at 3:40:40 PM
It would be trivial to use botnets inside the EU, so I doubt that blocking ASNs would make any difference. And as I said, it achieves the goal of disrupting access to information, so that would nevertheless be a win for them. Your proposition does not address Wikipedia's goal of providing free access to information.
> digital and economic suicide
My view is that it's an economic coup which started decades ago (Bush-Halliburton, bank bailouts in 2008, etc.). Inflation and economic uncertainty are only for the poor. For the people who do algorithmic stock trading, it's an arbitrage opportunity on the scale of microseconds.
By the time the people are properly motivated to revolt against the government, it will be too late.
by concerndc1tizen