1/21/2026 at 6:34:10 PM
> Building a comparable one from scratch is like building a parallel national railroad.

Not to be pedantic here, but I do have a noob question or two:
1. One is building the index, which is a lot harder without Google offering its own API. If other tech companies really wanted to break this monopoly, why can't they just do it, like they did for LLM base-model training with the infamous "Pile" dataset? The upshot of offering this index as a public good would break not just Google's search monopoly but also adjacent ones like Android, which would introduce a breath of fresh air into a myriad of UX areas (mobile devices, browsers, maps, security). So why don't they just do this already?
2. The other question is about "control", for which the DoJ has provided guidance but not yet enforcement. IANAL, but why can't a state's attorney general enforce this?
by ghm2199
1/21/2026 at 7:59:00 PM
> 1. One is building the index, which is a lot harder without a google offering its own API to boot. If other tech companies really wanted to break this monopoly, why can't they just do it?

FTA:
> Context matters: Google built its index by crawling the open web before robots.txt was a widespread norm, often over publishers’ objections. Today, publishers “consent” to Google’s crawling because the alternative - being invisible on a platform with 90% market share - is economically unacceptable. Google now enforces ToS and robots.txt against others from a position of monopoly power it accumulated without those constraints. The rules Google enforces today are not the rules it played by when building its dominance.
by oh_fiddlesticks
1/21/2026 at 8:54:56 PM
robots.txt was being enforced in court before Google even existed, let alone before Google got so huge:

> The robots.txt played a role in the 1999 legal case of eBay v. Bidder's Edge,[12] where eBay attempted to block a bot that did not comply with robots.txt, and in May 2000 a court ordered the company operating the bot to stop crawling eBay's servers using any automatic means, by legal injunction on the basis of trespassing.[13][14][12] Bidder's Edge appealed the ruling, but agreed in March 2001 to drop the appeal, pay an undisclosed amount to eBay, and stop accessing eBay's auction information.[15][16]
by creato
1/21/2026 at 10:28:12 PM
Not only was eBay v. Bidder's Edge technically after Google existed, not before; more critically, the slippery-slope interpretation of California trespass to chattels law that the District Court relied on was considered and rejected by the California Supreme Court in Intel v. Hamidi (2003), and similar logic applied to other states' trespass to chattels laws has been rejected by other courts since. eBay v. Bidder's Edge was an early aberration in the application of the law, not something that established or reflected a lasting norm.
by dragonwriter
1/22/2026 at 1:05:07 AM
The point is, robots.txt was definitely a thing that people expected to be respected before and during Google's early existence. This Kagi claim seems to be at least partially false:

> Google built its index by crawling the open web before robots.txt was a widespread norm, often over publishers’ objections.
by creato
1/22/2026 at 5:41:47 AM
Perhaps it wasn't a widespread norm, though. But I don't really see why that matters much. Is the issue that sites with robots.txt today only allow Googlebot and not other search engines? Or is Google somehow benefiting from having two-decade-old content that is now blocked by robots.txt that the website operators don't want indexed?
by hattmall
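The Googlebot-only pattern asked about above is easy to demonstrate. Here is a minimal sketch using Python's stdlib `urllib.robotparser`; the robots.txt contents and the bot name `MyNewBot` are hypothetical, invented for illustration:

```python
from urllib import robotparser

# Hypothetical robots.txt: Googlebot may fetch everything,
# every other crawler is barred from /private/
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /private/
"""

def allowed(agent: str, url: str, robots_txt: str = ROBOTS_TXT) -> bool:
    """Return True if `agent` may fetch `url` under the given robots.txt."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

print(allowed("Googlebot", "https://example.com/private/page"))  # incumbent crawler
print(allowed("MyNewBot", "https://example.com/private/page"))   # newcomer, blocked
print(allowed("MyNewBot", "https://example.com/public/page"))    # newcomer, allowed
```

Sites publishing rules like this are, in effect, open to the incumbent and closed to any would-be competitor's index.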
1/23/2026 at 12:12:50 PM
Agree. It was not standard in the late 90s or early 00s. Most sites were custom built and relied on the _webmaster_ knowing and understanding how robots.txt worked. I'd heard plenty of examples where people had inadvertently blocked crawlers from their site by getting the syntax wrong. CMSes probably helped in the widespread adoption, e.g. WordPress.
by ricardo81
1/23/2026 at 10:55:05 AM
> robots.txt was definitely a thing that people expected to be respected before and during google's early existence

As someone who was a web developer at that time, robots.txt wasn't a "widespread norm" by a large margin, even if some individuals "expected it to be respected". Google's use of robots.txt plus Google's own growth made robots.txt a "widespread norm", but I don't think many people who were active in the web-dev space at that time would agree that it was a widespread norm before Google.
by embedding-shape
1/21/2026 at 9:55:06 PM
Nitpick: Google incorporated in 1998, so, before the Bidder's Edge case.
by throw-the-towel
1/21/2026 at 9:56:45 PM
[flagged]
by yuuxheu
1/21/2026 at 8:24:58 PM
A classic case of climbing the wall, and pulling the ladder up afterward. Others try to build their own ladder, and Google uses their deep pockets and political influence to knock the ladder over before it reaches the top.by baggachipz
1/21/2026 at 10:02:51 PM
Why does Google even need to know about your ladder? Build the bot, scale it up, save all the data, then release. You can now remove the ladder and obey robots.txt just like G. Just like G, once you have the data, you have the data.

Why would you tell G that you are doing something? Why tell a competitor your plans at all? Just launch your product when the product is ready. I know that's anathema to SV startup logic, but in this case it's good business.
by dylan604
1/22/2026 at 12:22:28 AM
Running the bot nowadays is hard, because a lot of sites will now block you - not just by asking nicely via robots.txt, but by checking your actual source IP. Once they see it's not Google, they send you a 403.
by Nextgrid
1/22/2026 at 12:01:02 PM
Cloudflare’s ubiquity makes bootstrapping a search index via crawler virtually impossible, but what about data sources like Common Crawl?
by eloisius
1/22/2026 at 12:32:30 AM
Cost, presumably. From the article:

> Microsoft spent roughly $100 billion over 20 years on Bing and still holds single-digit share. If Microsoft cannot close the gap, no startup can do it alone.
by monooso
1/22/2026 at 9:38:37 AM
Wouldn't it be nice if Microsoft opened the Bing index for all?
by kavalg
1/22/2026 at 11:49:28 AM
Don't they? DDG and Kagi use it. I would think you have to pay money, but it does seem like they're willing to get partners.

edit: this is wrong
by direwolf20
1/22/2026 at 11:58:09 AM
This is incorrect. Kagi does not use the Bing index, as detailed in the article:

> Bing: Their terms didn’t work for us from the start. Microsoft’s terms prohibited reordering results or merging them with other sources - restrictions incompatible with Kagi’s approach. In February 2023, they announced price increases of up to 10x on some API tiers. Then in May 2025, they retired the Bing Search APIs entirely, effective August 2025, directing customers toward AI-focused alternatives like Azure AI Agents.
by monooso
1/22/2026 at 2:14:56 PM
Now that you mention it... It's odd that Microsoft hasn't aggressively pushed for "openness". That's in the usual playbook for attacking a market leader.
(And then pull up the ladder once you've become king of the hill.)
Microsoft will probably never topple Google, absent anti-monopolistic enforcement. But they can certainly attack Google's profits.
by specialist
1/23/2026 at 12:16:25 PM
There's one great example of a company that did that and managed to go viral on their release: Cuil. They claimed to have a Google-sized search index. Unfortunately for them, their search results weren't good, and so that visibility quickly disappeared.

Going further back, AlltheWeb was actually pretty decent but was eventually bought by Overture and then Yahoo, and ended up in their graveyard.
For everyone else it's the longer grind trying to gain visibility.
by ricardo81
1/23/2026 at 4:06:19 PM
I forgot about Cuil! I really wanted to like it.
by baggachipz
1/21/2026 at 8:36:33 PM
True. But the thing is, if one says "We will make sure your site is in a worldwide, freely available index" which is kept fresh, Google's monopoly ship already begins to take on water. Here is an appropriate line from a completely different domain, rare earth metals, from The Economist on the Chinese government's weaponization of rare earths[1]:

> Reducing its share from 90% to 80% may not sound like much, but it would imply a doubling in size of alternative sources of supply, giving China’s customers far more room for manoeuvre.
by ghm2199
1/21/2026 at 9:16:29 PM
Building an index is easy. Building a fresh index is extremely hard.

Ranking an index is hard. It's not just BM25 or cosine similarity. How do you prioritize certain domains over others? How do you rank homepages that typically have no real content in them for navigational queries?
Changing the behavior of 90% of the non-Chinese internet is unraveling 25 years and billions of dollars spent on ensuring Google is the default and sometimes only option.
Historically, it takes significant technological counter-positioning or an anti-trust breakup for a behemoth like Google to lose its footing. Unfortunately for us, Google is currently competing well in the only true technological threat to its existence to appear in decades.
by jeromechoo
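The "it's not just BM25" point above is easy to see in code. Below is a minimal, self-contained BM25 scorer (standard k1/b defaults; the toy documents are invented for illustration). Note how a content-free homepage scores zero for any content query, which is exactly the navigational-query problem described:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with plain BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "cheap flights to paris".split(),
    "paris travel guide and tips for paris".split(),
    "welcome to our homepage".split(),   # navigational target, no matching content
]
print(bm25_scores(["paris"], docs))
```

Plain term matching gives the homepage nothing even when it is the page the user actually wants; a production engine has to layer domain priors, link signals, and query-intent classification on top.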
1/21/2026 at 11:48:55 PM
Good news! Google doesn't know how to rank pages either!
by AlienRobot
1/22/2026 at 10:15:06 AM
Yet ... it works "ok" most of the time.

Not to mention that people mostly need Wikipedia, the news, navigating the infuriating websites of big service providers (gov sites, or try to find anything in Microsoft's dark corner of the web), porn, and brainrot.

But it's awfully hard to gain traction with a business that provides this.
by pas
1/21/2026 at 7:32:45 PM
A huge amount of the web is only crawlable with a googlebot user-agent and specific source IPs.
by walls
1/21/2026 at 8:22:24 PM
> And given you-know-what, the battle to establish a new search crawler will be harder than ever. Crawlers are now presumed guilty of scraping for AI services until proven innocent.

I have always wondered: how does the Wayback Machine work? Is there no way that we can use the Wayback archive and then run an index on top of every Wayback snapshot somehow?
by Imustaskforhelp
1/21/2026 at 9:03:06 PM
You can read https://hackernoon.com/the-long-now-of-the-web-inside-the-in... - it was a nice look into their infrastructure. One could theoretically build it. A few things stand out:

1. IIUC it depends a lot on "Save Page Now" democratization, which could work, but it's not like a crawler.

2. In the absence of Alexa they depend quite heavily on Common Crawl, which is quite crazy because there literally is no other place to go. I don't think they can use Google's syndicated API, because it would then start injecting ads into their database - garbage that would strain their tiny storage budget.

3. Minor from a software engineering perspective but important for the survival of the company: since they are an artifact-of-record store, converting that into an index would need a good legal team to battle Google, arguing that the DoJ's recent ruling works in their favor.
by ghm2199
1/21/2026 at 8:31:33 PM
I do not know a lot about this subject, but couldn't you make a pretty decent index off of Common Crawl? It seems to me the bar is so low you wouldn't have to have everything, especially if your goal was not monetization with ads.
by deepsquirrelnet
1/21/2026 at 8:52:35 PM
I think someone commented on another thread about SerpAPI the other day that Common Crawl is quite small. It would be a start. I think the key to a good index people will use is freshness of the results. You need good recall for a search engine; precision tuning/re-ranking is not going to help otherwise.
by ghm2199
1/22/2026 at 5:50:34 AM
Are these websites not serving public content? If there are some legal concerns, just create a separate scraping LLC that fakes the user agent and uses residential IPs or a VPN or something. I can't imagine that the companies would follow through with some sort of lawsuit against a scraper that's trying to index their site to get them more visitors, if they allow Googlebot.
by hattmall
1/22/2026 at 10:44:37 AM
Isn't that what SerpAPI was doing?
by ddtaylor
1/21/2026 at 9:16:45 PM
If a crawler offered enough money they could be allowed too. It's not like Google has exclusive crawling rights.
by charcircuit
1/22/2026 at 12:30:13 AM
There is a logistics problem here - even if you had enough money to pay, how would you get in touch with every single site to even let them know you're happy to pay? It's not like site operators routinely scan their error logs to see your failed crawling attempts and your offer in the user-agent.

Even if they see it, it's a classic chicken-and-egg problem: it's not worth the site operator's time to engage with your offer until your search engine is popular enough to matter, but your search engine will never become popular enough to matter if it doesn't have a critical mass of sites to begin with.
by Nextgrid
1/22/2026 at 3:45:40 AM
Realistically you don't need every single site on board before your index becomes valuable. You can get in touch with sites via social media, email, Discord, or even visiting them face to face.
by charcircuit
1/22/2026 at 7:04:25 AM
You really do need every single site, as search is a long-tail problem. All the interesting stuff is in the fringes; if you only have a few big sites you'll have a search engine of spam.
by stavros
1/22/2026 at 8:36:02 AM
I think that is only needed for a small subset of queries. Seriously, think of the last time you did a search and went to a fringe site as opposed to a well known brand or social media. Ranking quality is much more important than coverage of the whole internet.
by charcircuit
1/22/2026 at 9:34:45 AM
> Seriously think of the last time you did a search and went to a fringe site as opposed to a well known brand or social media.

Oh, almost never. That's exactly why search sucks now.
by stavros
1/21/2026 at 7:47:44 PM
Scraping is hard. Very good scraping is even harder. And today, being a scraping business is veeery difficult; there are some "open"/public indices, but none of these other indices ever took off.
by KellyCriterion
1/21/2026 at 8:00:33 PM
Well sure, yes, I don't contend with the fact that it's hard. But if the top tech companies put their heads together - Meta, Apple, and MS, for example - I am sure they have enough talent between them to make an open source index, if only to reap the gains from the de-monopolization of it all.
by ghm2199
1/22/2026 at 4:03:14 PM
I learned on here that this has been happening to a degree with maps. Several big companies have been cooperating to improve OpenStreetMap data, a rare example of a beneficial commons. This is probably some unique accident of incentives and timing and history, but maybe it could happen in other domains.
by zanderz
1/22/2026 at 12:26:30 AM
All these companies have the exact same business model as Google (advertising) and have the same mismatched incentives: good search results are not something they want.

Google Search sucks not because Google is incapable of filtering out spam and SEO slop (though they very much love that people believe they can't), but because that spam/slop makes the ads on the SERP more enticing, and some of the spam itself includes Google Ads/analytics and benefits them there too.
There is no incentive for these companies to build a good search engine by themselves to begin with, let alone provide data to allow others to build one.
by Nextgrid
1/22/2026 at 5:57:27 AM
I was on the Goog forums for years (before they even fucking ruined the FORMAT of the forums, possibly to 'be more mobile friendly') and it was people absolutely (justifiably) screaming at the product people.

No, the customer isn't 'always' right, but these guys like to get big, and once big: fuck you, we don't have to listen to you, we're big; what are you going to do, leave?
by alex1138
1/23/2026 at 9:53:03 AM
They will prefer to band up with Google, and rip us off.
by t_mahmood
1/21/2026 at 8:21:40 PM
I mean, doesn't Microsoft have Bing?
by Imustaskforhelp
1/21/2026 at 8:44:33 PM
Yeah, but no one uses it. I am not even sure the people who are forced to use it like using it, because it was productized pretty poorly. After all, who wants another Google? They invested 100 billion dollars, which is a lot of wasted money TBH.

Search indexes are hard, surely, but if you were to strip it down to just a good index, made it free, and kept it fresh, it cannot cost 100 billion dollars to build. Then you use this DoJ decision and fight Google over denying a free index equal rights on Chrome, and you have a massive shot at a win for a LOT less money.
by ghm2199
1/21/2026 at 8:48:55 PM
> Yeah but no one uses it. I am not even sure people like using it because it was productized it pretty poorly. They invested 100 Billion dollars, which is a lot of wasted money TBH.

I mean... DuckDuckGo uses the Bing API IIRC, and I use DuckDuckGo, and many people use DuckDuckGo.
I also used Bing once because Bing used to cache websites which weren't available in the Wayback archive. I don't know how, but it was a pretty cool solution to a problem.
I hate Bing too, and I am kind of interested in Ecosia's/Qwant's future as well (yes, there's Kagi too, and good luck to Kagi as well! But I am currently still staying on DuckDuckGo).
by Imustaskforhelp
1/21/2026 at 9:10:58 PM
DuckDuckGo is really cool. I am almost fully rooting for them, and they are my default on mobile and desktop.

The small distributed team grinding it out against the goliath. They are awesome, and perhaps the right example of what a path like this would look like. Maybe someone from their team can chime in on the difficulties of building a search engine that works in the face of tremendous odds.
by ghm2199
1/22/2026 at 11:50:56 AM
DDG is mostly just an anonymizing proxy for Bing. Microsoft encourages it because it increases Bing's market share over Google.
by direwolf20
1/21/2026 at 10:10:24 PM
I would imagine the users of DDG are closer to a rounding error than an actual percentage of users. I'd imagine theGoog would love and hate to have 100%. They'd love it because of all the data, and hate it as it would prove the monopoly. At the end of the day, the % that is not going to them probably doesn't cause theGoog to lose much sleep.
by dylan604
1/21/2026 at 10:38:49 PM
It's just so wild how great DuckDuckGo is and how under-rated it is.

It's available in all major browsers. (Here in Zen Browser there isn't even a default: on the start page it asks you to choose between Google, DuckDuckGo, and Bing. Yes, if you press next it starts from Google, but Zen can even start from DDG, so it's not such a big deal.)

DuckDuckGo is super amazing. Their duck.ai actually provides concise answers, unlike Google's AI.

DDG is leaps ahead of Google in terms of everything. I found Kagi to be pleasant too, and with PPP pricing it might make sense in Europe and America, but privacy isn't (and shouldn't be) something only those who pay get. So DDG is great for me personally, and I can't recommend it enough for most cases.

Brave/Startpage is a second, but DDG is so good :)

It just works for most cases. The only use case I still use Google for is uploading an image to get more images like it, or using an image as a search query: I just do !gi and open images.google.com. But I use this function very rarely. (Bangs are an amazing DDG feature.)
by Imustaskforhelp
1/21/2026 at 11:00:20 PM
I use DDG myself. I just assumed that I'm not a very sophisticated user, as I've never had it not serve my needs, based on how other people here say it's not very good.
by dylan604
1/22/2026 at 12:07:51 PM
> I've never had it not serve my needs

Same here. It may be 'not very good' for highly specialized or complex technical questions ... but I do research across a broad range of (non-specialized) topics daily. I often need to find 2nd and 3rd points of view on a topic ... or detailed facts about singular events ... and I rarely need to go to the 2nd page. And all ad-free!
It's a remarkable education tool. A curious, explorative kid these days could easily sail WAY beyond their age group using DDG. I can only wish I'd had it.
Their recently added 'Search assistant' consistently provides a couple of CITATIONS to back up its (multi-leveled) responses (ask for more, get more). I've seen nothing like it elsewhere. It is even quite good at digging up useful ... and working ... example code for some languages. Also with citations.
by 8bitsrule
1/22/2026 at 11:51:28 AM
DDG is just an anonymizing front-end for Bing. Your DDG results are Bing results.
by direwolf20
1/22/2026 at 4:40:18 PM
Then the anonymization is a key component of their goodness. When I compare searches between Bing and DDG, I find the DDG ones superior every time.
by antiframe
1/21/2026 at 9:20:38 PM
Scraping is hard, and at the same time not that hard. There are many projects about scraping, so with a few lines you can implement a scraper using curl_cffi or Playwright.

People complain that the user-agent needs to be filled in. Boo-hoo, are we on Hacker News or what? Can't we just provide cookies and a user-agent? Not a big deal, right?

I myself have implemented a simple solution that is able to jump through many hoops and provide a JSON response. Simple and easy [0].

On the other hand, it has always been an arms race, and it always will be. Eventually all content will be protected via walled gardens; there is no way around it.
Search engines affect me less and less every day. I have my own small "index" / "bookmarks" with many domains, GitHub projects, and YouTube channels [1].

Since the database is so big, my most-used places are extracted into a simple and fast web page using an SQLite table [2]. Scraping done right is not a problem.
[0] https://github.com/rumca-js/crawler-buddy
by renegat0x0
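The "a few lines" claim above holds for the core parse-and-enqueue step of a crawler, which can be sketched with nothing but the stdlib. The HTML snippet and base URL here are made up, and fetching plus politeness (robots.txt, rate limits) are deliberately left out:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from <a href=...> tags in one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # resolve relative links against the page's URL
                    self.links.append(urljoin(self.base_url, value))

html = '<p><a href="/docs">Docs</a> <a href="https://other.example/x">x</a></p>'
parser = LinkExtractor("https://example.com/")
parser.feed(html)
print(parser.links)
```

A real crawler wraps this in a frontier queue, deduplication, and per-host rate limiting; that is where the hard part actually lives.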
1/21/2026 at 11:15:43 PM
+1 so much for this. I have been doing the same: an SQLite database of my "own personal internet" of the sites I actually need. I use it as a tiny supplementary index for a metasearch engine I built for myself - which I actually did to replace Kagi.

Building a metasearch engine is not hard to do (especially with AI now). It's so liberating when you control the ranking algorithm and can supplement what the big engines provide as results with your own index of sites and pages that are important to you. I admit my results and speed aren't as good as Kagi's, but still good enough that my personal search engine has been my sole search engine for a year now.
If a site doesn't want me to crawl them, that's fine. I probably don't need them. In practice it hasn't gotten in the way as much as I might have thought it would. But I do still rely on Brave / Mojeek / Marginalia to do much of the heavy lifting for me.
I especially appreciate Marginalia for publicly documenting as much about building a search engine as they have: https://www.marginalia.nu/log/
by SyneRyder
1/23/2026 at 9:22:50 AM
Do you have any documentation/blog post for this? I would love to do something similar for my own use.
by carte_blanche
1/31/2026 at 8:06:03 PM
Unfortunately I still haven't had a chance to make a blog post about this, which I really must do. But I can give some quick hints. Anyone reading this, feel free to reach out and I can try to answer questions; they might help my blog post too.

I started off with a meta-search calling out to Brave / Mojeek / Marginalia, and the basics of that are something that you can ask an AI to make for you as a one-file PHP script. I still think this is a good place to start, because you'll quickly find "okay, I can replace my everyday search engine with this". Once you're dogfooding your engine every day, you'll notice all the rough points you want to improve.
Once you've got an array of objects with Title, URL, Description, and splitting the URL into domain, TLD, subdomain, path, file extension... there's a lot of ranking you can apply just to those. Honestly, a lot of my "ranking" has just been applying increased rankings to domains that I visit most often. I have an array of about 600 domains that it applies ranking boosts to. You can try experimenting with your re-ranking there, before even starting to build your own index.
As for building your own (small, personal) index, the technical details are not as difficult as you'd think. An SQLite database file that your PHP file reads will take you a long way... especially if you enable FTS5 indexing. I only did that last week, and I should have done it at the beginning. Search times are 10ms, and not just on my personally curated index of 80,000 pages... I just added a 2nd database with 1.3 million entries from DMOZ (the old Mozilla Directory), and it's still only about 10ms. My search engine now feels super fast when it gets results from my database. And when it finds zero results, it automatically falls back to the metasearch.
At 1.3 Million entries, the two databases are only about 550MB total. It's running on a shared hosting account and apparently they're not worried - but it's only available to me, so I'm only hitting it maybe 50 times a day maximum. I'll move it onto a VPS eventually, but every time I think "this must be using up too many resources", I find I'm thinking too small by at least a factor of 10x.
For getting started with PHP & SQLite, I found this blog post helpful - but at this point, your AI can vibe code the entire thing for you:
https://davidejones.com/blog/simple-site-search-engine-php-s...
It's amazing how far you can get with just SQLite and FTS5 and a little PHP. Read the Marginalia blog too, there's so much good information in there.
Don't hold yourself back, don't think it's impossible.
by SyneRyder
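The SQLite-plus-FTS5 setup described above can be sketched in a few lines. This sketch uses Python's stdlib `sqlite3` rather than the PHP mentioned in the comment, and assumes the bundled SQLite was compiled with FTS5 (common, but not guaranteed); the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 virtual table: a full-text index over title and url
con.execute("CREATE VIRTUAL TABLE pages USING fts5(title, url)")
con.executemany(
    "INSERT INTO pages (title, url) VALUES (?, ?)",
    [
        ("Marginalia search engine design log", "https://www.marginalia.nu/log/"),
        ("SQLite FTS5 extension docs", "https://sqlite.org/fts5.html"),
        ("Simple site search with PHP and SQLite", "https://example.com/php-sqlite-search"),
    ],
)

def search(query, limit=10):
    # bm25() is FTS5's built-in ranking; smaller values rank higher
    rows = con.execute(
        "SELECT title FROM pages WHERE pages MATCH ? ORDER BY bm25(pages) LIMIT ?",
        (query, limit),
    )
    return [title for (title,) in rows]

print(search("sqlite"))
```

The re-ranking layer the comment describes (boosting favorite domains) would then be a post-processing pass over these rows before display.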
1/22/2026 at 4:47:28 AM
> Search engines affect me less, and less every day. I have my own small "index" / "bookmarks" with many domains, github projects, youtube channels

Exactly. Why can't we just hoard our bookmarks and a list of curated sources, say 1M or 10M small search stubs, and have an LLM direct the scraping operation?
The idea is to have starting points for a scraper, such as blogs, awesome lists, specialized search engines, news sites, docs, etc. On a given query the model only needs a few starting points to find fresh information. Hosting a few GB of compact search stubs could go a long way towards search independence.
This could mean replacing Google. You can even go fully local with local LLM + code sandbox + search stub index + scraper.
by visarga
1/22/2026 at 11:52:00 AM
Marginalia Search does something like this.
by direwolf20
1/21/2026 at 11:33:24 PM
When I saw the Internet-Places-Database I thought it was an index of some sort of PoI data, and I got curious. But the personal internet spiel is pretty cool. One good addition to this could be the Foursquare PoI dataset for places search: https://opensource.foursquare.com/os-places/
by ghm2199
1/21/2026 at 6:57:18 PM
I don’t think it’s comparable to today’s AI race.

Google has a monopoly, an entrenched customer base, and stable revenue from a proven business model. Anyone trying to compete would have to pour massive money into infrastructure and then fight Google for users. In that game, Google already won.
The current AI landscape is different. Multiple players are competing in an emerging field with an uncertain business model. We’re still in the phase of building better products, where companies started from more similar footing and aren’t primarily battling for customers yet. In that context, investing heavily in the core technology can still make financial sense. A better comparison might be the early days of car makers, or the web browser wars before the market settled.
by hsuduebc2
1/21/2026 at 8:12:37 PM
> ... stable revenue from a proven business mode... In that game, Google already won.

But if they were to pour that money strategically to capture market share, one of two things would happen if Google were replaced or lost share:
1. It would be the start of the commoditization of search, i.e. the search engine/index would become a commodity, offerings could become more specialized, and people could buy what they want and compete.

2. A new large tech company takes the reins, in which case it would be as bad as this time.
What I don't get is this: if other big tech companies actually broke apart the monopoly on search, several Google dominoes - mobile devices, browser tech, location capabilities - would fall. It would be a massive injection of new competition into the economy; lots of people would spend more dollars across the space (and on ad-driven buying too), and money would not accrue in an offshore tax haven in Ireland.

To play the devil's advocate, I think the only reason it's not happening is that Meta, Apple, and Microsoft have very different moats/business models to profit from. They have all been stung at one time or another, in small or big ways, for trying to build something that could compete but failed: MS with Bing, Meta with Facebook search, and Foursquare - not big tech, but still - with Marauder's Map.
by ghm2199
1/21/2026 at 7:07:46 PM
> If other tech companies really wanted to break this monopoly, why can't they just do it

Google is a verb; nobody can compete with that level of mindshare.
by hamdingers
1/21/2026 at 7:23:45 PM
A big part of it is the legal minefield if you presented any sort of real threat to Google. Nobody wants to wager billions in infrastructure and IP against Google or Apple or Microsoft, even if you could whip up a viable competing product in a weekend (for any given product).

Part of it is also the ecosystem - don't threaten adtech, because the wrong lawsuits, the wrong consumer trend, the wrong innovation that undercuts the entire adtech ecosystem means they lose their goose with the golden eggs.
Even if Kagi or some other company achieves legitimate mindshare in search, they still don't have the infrastructure and ancillary products and cash reserves of Google, etc. The second they become a real "threat" in Google's eyes, they'd start seeing lawsuits over IP and hostile and aggressive resource acquisitions to freeze out their expansion, arbitrary deranking in search results, possible heightened government audits and regulatory interactions, and so on. They have access to a shit ton of legal levers, not to mention the whole endless flood of dirty tricks money can buy (not that Google would ever do that.)
They're institutional at this point; they're only going away if/when government decides to break it up and make things sane again.
by observationist
1/21/2026 at 7:16:53 PM
Xerox is a verb, but most copy machines I see are made by their competition.
by wongarsu
1/21/2026 at 7:19:08 PM
Wonder why that could be?

https://www.nytimes.com/1975/07/31/archives/xerox-settlement...
by hamdingers
1/21/2026 at 7:18:07 PM
Kleenex isn't the only brand of tissues sold in stores.
by eikenberry
1/21/2026 at 11:13:36 PM
How’s that working out for Hoover in the UK?
by iamacyborg
1/21/2026 at 9:07:56 PM
Licensing their index doesn’t change that.
by cowsandmilk
1/21/2026 at 7:19:01 PM
So were AOL and Skype.
by Zyst
1/21/2026 at 10:05:41 PM
I don't ever recall anyone using AOL as a verb. How would you do that?
by dylan604
1/22/2026 at 7:47:46 AM
Let me AOL this for you.
by terespuwash
1/22/2026 at 5:04:22 PM
Said no one ever.
by dylan604
1/23/2026 at 11:02:09 AM
You clearly did not live in the world of watching two teens on computers in the same room hold two entirely different conversations out loud and over AIM.
by mgiampapa
1/22/2026 at 12:22:48 AM
> why can't they just do it

Money. Google controls 99% of the advertising market. That's why it's called a monopoly. No one else can compete because they can never make enough money to make it worth the cost of doing it themselves.
by citizenpaul
1/21/2026 at 7:32:54 PM
Apple had a chance to break Google's search monopoly, but they chose to take billions from Google instead.

Microsoft had a chance (well, another chance, after they gave up IE's lead) to break up Google's browser monopoly, but they decided to use Chromium for free instead.

Ultimately all these decisions come down to what's more profitable, not what's in the best interests of the public. We have learned this lesson x1000000. Stop relying on corporations to uphold freedoms (software or otherwise), because that simply isn't going to happen.
by paxys
1/21/2026 at 9:26:30 PM
> but they chose to take billions from them instead.

They chose to use Google with a revenue-sharing agreement. Google is very well monetized. It would be very difficult for Apple to monetize their own search as well as Google can.
>they decided to use Chromium
Windows ships with Microsoft Edge as the browser which Microsoft has full control over.
by charcircuit
1/22/2026 at 5:50:36 PM
Other comments mention difficulty, cost, conditions, etc.

Also, competitive agreements: of the big players like Apple, Microsoft, Facebook/Meta, Amazon, etc., only Google is in the ad business. But it has credible threats of digging into their businesses - GCP, Android (not to mention software licenses and competitive access to e.g. Samsung), etc. So they agree to cede the ad world to Google, to keep Google out of their businesses.
The injunctions cannot be effective. Google ads are essentially a tax at a fine scale that rational people chose when it didn't change site behavior. But then Google ads changed the nature of the web itself, converting every snippet of information into an opportunity to monetize. Neither would change with a public search.org, and injunctions to license ad-free indexes won't change site behavior or publishers' self-interest in selling access to their content to Google alone.
Google knows the injunctions are unworkable and ultimately ineffective. The only question is what price they have to pay to the Trump judiciary to counter them.
by w10-1
1/21/2026 at 7:24:11 PM
> If other tech companies really wanted to break this monopoly, why can't they just do it

Companies would rather sue than try to compete by investing their own money.
by xnx