1/12/2025 at 11:49:45 PM
We had a similar CDN problem when releasing major Warframe updates: our CDN partner would inadvertently DDoS our origin servers when we launched an update, because thousands of cold edges would call home simultaneously when all players relogged at the same time.

One CDN vendor didn't even offer a tiered distribution system, so every edge called home at the same time. Another vendor did have a tiered distribution system designed to avoid this problem, but it was overwhelmed by the absurd number of files we'd serve multiplied by the large user count, so we'd still end up with too much traffic hitting the origin.
The interesting thing was that no vendor we evaluated offered a robust preheating solution, if they offered one at all. One vendor even went so far as to say that they wouldn't allow it because it would let customers unfairly dominate the shared storage cache at the edge (which sort of felt like airlines overbooking seats on a flight to me).
These days we run an army of VMs that fetch all assets from every point of presence we can cover right before launching an update.
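For illustration, a minimal Go sketch of that kind of prewarm pass, assuming you already have a per-POP list of edge addresses and an asset manifest -- every IP, hostname, and URL below is a placeholder, not our real setup:

```go
// prewarm.go: best-effort cache warm-up by requesting each asset through
// specific CDN edge addresses. Edge IPs and asset URLs are placeholders.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

// clientFor returns an http.Client whose connections are dialed to a fixed
// edge IP, while the request URL keeps the real hostname so Host/SNI (and
// the edge's cache key) stay correct.
func clientFor(edgeIP string) *http.Client {
	transport := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			_, port, err := net.SplitHostPort(addr)
			if err != nil {
				return nil, err
			}
			d := net.Dialer{Timeout: 10 * time.Second}
			return d.DialContext(ctx, network, net.JoinHostPort(edgeIP, port))
		},
	}
	return &http.Client{Transport: transport, Timeout: 5 * time.Minute}
}

func main() {
	edges := []string{"203.0.113.10", "198.51.100.7"}           // per-POP edge addresses (placeholders)
	assets := []string{"https://cdn.example.com/patch/0001.bin"} // manifest of update files (placeholder)

	for _, edge := range edges {
		client := clientFor(edge)
		for _, url := range assets {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("warm failed:", edge, url, err)
				continue
			}
			io.Copy(io.Discard, resp.Body) // pull the full object so the edge actually caches it
			resp.Body.Close()
			fmt.Println("warmed:", edge, url, resp.Status)
		}
	}
}
```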
Another thing we've had to deal with that's mentioned in the article is overloading back-end nodes. Our solution is somewhat ham-fisted but works quite well for us: we cap the connection counts to the back end and return 503s when we saturate. The trick, however, is getting your load balancer to leave the client connection open when this happens -- by default, multiple LBs we've used would slam the connection closed, so when you're serving up 50K 503s a second the firewall would buckle under the runaway connection pool lingering in TIME_WAIT. Good times.
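Roughly the shape of the idea as a toy Go sketch -- not our actual LB config; the cap, port, and backend address are made up:

```go
// A reverse proxy that allows at most maxBackend in-flight requests to the
// origin and sheds the rest with a 503, deliberately leaving the client's
// keep-alive connection open so refused requests don't pile up sockets in
// TIME_WAIT.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

const maxBackend = 512 // illustrative cap on concurrent backend requests

func main() {
	origin, _ := url.Parse("http://127.0.0.1:9000") // placeholder backend
	proxy := httputil.NewSingleHostReverseProxy(origin)

	slots := make(chan struct{}, maxBackend) // counting semaphore

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case slots <- struct{}{}: // a backend slot is free
			defer func() { <-slots }()
			proxy.ServeHTTP(w, r)
		default: // saturated: shed load but keep the connection alive
			w.Header().Set("Retry-After", "2")
			// Note: we do NOT set "Connection: close" here; the server will
			// keep the client connection open for the next request.
			http.Error(w, "backend saturated, retry shortly", http.StatusServiceUnavailable)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```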
by shaggie76
1/13/2025 at 4:47:42 AM
Something I've been wondering for a while is if BitTorrent or other P2P protocols are ever a consideration for pushing game updates? Naively, it seems like an ideal fit, since a large swarm of leechers quickly turns into a large swarm of (partial) seeders mostly chattering amongst themselves. I recall Facebook and Twitter used to internally torrent their updates in the 2010s, and BT scales just fine to thousands of peers and tens of GB files at least, but I think I've only ever played one game whose updater was a torrent client, so I'm guessing it's a nonstarter for one reason or another. Are game publishers just allergic to it due to the piracy association? Are end-user upload speeds too slow to meaningfully make a difference? Are swarms of ~100k just too large to manage?

Edit: Silly me for posting while sleep deprived. It's not the update itself that you're saying is causing thundering herd issues, but the log-ins all being synced up afterwards, much like in TFA, duh. My curiosity wrt the apparent lack of P2P game updaters still stands though.
by snackbroken
1/13/2025 at 9:02:41 AM
See my related comment. It was a popular idea around 2005-10. As mentioned, Red Swoosh was primarily sold as a “p2p” CDN, was bought up by Akamai for a billionty dollars, and promptly disappeared. AWS S3 also implemented a torrent interface early on. AFAIK they keep it alive in name at least, but it's effectively dead code with $0 revenue as far back as I've ever known. A handful of private companies built p2p themselves, but eventually moved off. As an example, p2p is where Spotify started in this time range, and they then moved to a CDN (us) for better consistency and to avoid having to deal with it themselves.

The primary business problem is one of visibility and control. The customer UX would be entirely out of your control, and exceedingly variable, based on factors you (the provider) can't even see. At the same time, CDNs were pushing down to cents per GB delivered by 2010, and ~1¢/GB by 2015. At a penny per GB for distribution, with higher throughput, better visibility, and control, CDN distribution costs started to not matter compared to other costs and priorities.
Oh! Porn delivery companies are an interesting content distribution case. AFAIK commercial CDNs are still way too expensive to meet their business model needs. My recollection is that they all built their own in-house CDNs, like the GP's "run a bunch of VMs" approach, or used a peer's. This was accelerated as all of those companies consolidated, a la MindGeek, in the 2010s.
by donavanm
1/13/2025 at 4:13:44 PM
One reason for Spotify's move away from p2p was that it was absolutely a no-go on mobile platforms, which were rapidly becoming dominant at the time.
by dikei
1/13/2025 at 10:27:54 PM
Microsoft Store and Xbox games/updates are distributed with a proprietary P2P protocol, which also includes ISP appliances. AFAIK it's the largest P2P network in the world. https://learn.microsoft.com/en-us/windows/deployment/do/mcc-...

Steam recently introduced LAN-based P2P to complement their significant appliance/CDN infrastructure, but idk if anyone has pulled it apart yet. And I don't think it does tunnelling like the msft network.
by pl4nty
1/13/2025 at 6:43:03 AM
Blizzard used to have p2p support; they removed it around 2015. It’s not hard to think of a bunch of problematic cases which become absolute hell to diagnose because they’re client side.
by masklinn
1/13/2025 at 9:38:43 AM
Their downloaders for classic games still have the option to enable peer-to-peer. It failed to initialise though, and I'm not sure if that's because their tracker is down or because it demands UPnP. I recently did this with Diablo 2 and its expansion.
by AndrewDavis
1/13/2025 at 10:39:29 AM
Windows Update has the option to download signed updates from Microsoft and from any other computer that has downloaded them. And it says that 38% (247MB) of all Windows Update bytes have been downloaded from "PCs on the internet", and I have uploaded 340MB to "PCs on the Internet".
by UltraSane
1/13/2025 at 11:00:53 AM
Around 2010, we (Zynga at the time) used torrents to distribute the MafiaWars code/assets to all servers in a couple of data centers. It worked without much challenge.
by tupshin
1/13/2025 at 3:03:27 AM
As someone who worked on a major CDN, I have some perspective.

> thousands of cold edges would call home simultaneously when all players relogged at the same time.
Our more mature customers (esp. console gaming) would enable early background downloads, spaced out over a few hours, the day/hours before 'launch'. Otherwise ad hoc/JIT is definitely best effort, though we did a few things to help:
Conceptually each CDN POP is ~3 logical layers: 1) a client-request-terminating 'hot' cache distributed across all nodes in the POP, 2) a shared POP cache segmented by content/resource ID, and 3) a shared origin-request-facing egress layer. Every layer would attempt to perform request coalescing, with 90% efficacy or more. E.g., 10 client requests to the same layer 1 node _should_ only generate a single request to the segmented layer 2 cache. The same layer 2 node would be serving multiple requests to the layer 1 nodes while making a single request back towards the origin.
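A toy Go sketch of that coalescing step -- the singleflight pattern, not the CDN's actual code; names and the fake fetch are mine:

```go
// Many concurrent requests for the same object key collapse into a single
// upstream fetch.
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

var (
	group   singleflight.Group
	fetches int // counts how many times we actually hit the next layer
	mu      sync.Mutex
)

// fetchFromNextLayer stands in for a request to the layer 2 cache or origin.
func fetchFromNextLayer(key string) ([]byte, error) {
	mu.Lock()
	fetches++
	mu.Unlock()
	return []byte("bytes for " + key), nil
}

// get coalesces concurrent lookups of the same key into one fetch.
func get(key string) ([]byte, error) {
	v, err, _ := group.Do(key, func() (interface{}, error) {
		return fetchFromNextLayer(key)
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ { // 10 "client" requests for the same object
		wg.Add(1)
		go func() {
			defer wg.Done()
			get("patch/0001.bin")
		}()
	}
	wg.Wait()
	fmt.Println("upstream fetches:", fetches) // typically 1, not 10
}
```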
Some exceptional behavior occurred around, or was driven by, 'load' and trying to account for 1) head-of-line blocking and 2) tail latencies etc. from unequal load distribution. Based on the load for an object, or a node's current total load, we used forward signaling to distribute requests to peers. That is, a 'busy' layer 2 node would signal to the layer 1 nodes to use additional/alternate peers. This increased the number of copies of a popular object in the segmented cache, increasing the total throughput available to populate the 'hot' L1 cache nodes _or_ to serve objects that were not consistently popular enough to stay in that distributed L1 cache. And, relevant to your example, we had similar problems when going back to the origin. In the first case we want to minimize the number of new TCP/TLS connections, which have a large RTT setup penalty, by reusing active & idle 'layer 3' connections to the origin. This, however, introduces hotspots and head-of-line blocking for those active origin connections, which, again, based on 'load', would be forward signaled so that additional layer 3 nodes/processes would be used to fetch _additional_ origin content.
Normally this all means 1 origin request can serve a few orders of magnitude more concurrent client requests. For very large content, or exceedingly large client numbers, you'd see the CDN 'scale out' on concurrency in an effort to minimize blocking and maximize throughput in the system.
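And a rough Go sketch of the load-driven fan-out idea -- peer names, thresholds, and the hashing scheme here are illustrative assumptions, not the real implementation. As an object gets hotter, more layer 2 peers become eligible to hold a copy:

```go
// Keys normally map to one layer 2 peer via rendezvous hashing; as measured
// load rises, requests spread over the top-k peers, so more copies exist and
// throughput scales.
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
	"sort"
)

var peers = []string{"l2-a", "l2-b", "l2-c", "l2-d", "l2-e"} // layer 2 nodes (placeholders)

func score(key, peer string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key + "/" + peer))
	return h.Sum64()
}

// replicasFor ranks peers by rendezvous hash and keeps the top k,
// where k grows with how "hot" the object currently is.
func replicasFor(key string, requestsPerSec int) []string {
	k := 1 + requestsPerSec/1000 // e.g. one extra copy per 1000 req/s (illustrative)
	if k > len(peers) {
		k = len(peers)
	}
	ranked := append([]string(nil), peers...)
	sort.Slice(ranked, func(i, j int) bool {
		return score(key, ranked[i]) > score(key, ranked[j])
	})
	return ranked[:k]
}

// pickPeer chooses one of the allowed replicas for this request.
func pickPeer(key string, requestsPerSec int) string {
	replicas := replicasFor(key, requestsPerSec)
	return replicas[rand.Intn(len(replicas))]
}

func main() {
	fmt.Println("cold object:", replicasFor("patch/0001.bin", 50))   // one peer
	fmt.Println("hot object: ", replicasFor("patch/0001.bin", 3500)) // several peers
	fmt.Println("routed to:  ", pickPeer("patch/0001.bin", 3500))
}
```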
> One CDN vendor didn't even offer a tiered distribution system so every edge called home at the same time, another vendor did have a tiered distribution system designed to avoid this problem
See above on request coalescing. In the vast, vast majority of cases it was effective in reducing the problem by a few orders of magnitude; AFAIK every CDN does/did that. _In addition_ we did have a distributed hierarchical system for caching between edge POPs and origins, _but_ it was non-public/invite-only/managed by us for a long time. The reason being that the _vast_ majority of customers incurred additional latency (& cost to us) without meaningful benefit from this intermediate cache layer.
> The interesting thing was that no vendor we evaluated offered a robust preheating solution if they offered one at all.
This is interesting to me. AFAIK Akamai Netstorage was sold to solve the origin distribution angle, _and_ drove something like 50% of the revenue from large-object distribution customers. For us the customer use case of 'prefetch' was a perennial 'top 5' but never one that would drive revenue, and it conflicted with other system tenets.
> One vendor even went so far as to say that they wouldn't allow it because it would let customers unfairly dominate the shared storage cache at the edge
That could have been us. And yes, a huge problem is that you're fundamentally asking for control over a shared resource so that you can bias performance towards _your content_ at the expense of _all other customers_. Even without intentional 'prefetch' control, we still had some customers with pseudo-degenerate access patterns that might consume 25-50% of the shared cache space in a POP. We did build shared quotas and such, but (when I was there) we couldn't see a way to align the pricing & incentives to confidently expose that to customers. It also felt very, very bad to tell a customer 'pay us $$$ to care about your bits' when that was our entire job, and what we were doing to the best extent possible already.
> we cap the connection counts to the back end and return 503s when we saturate.
Depending on the CDN you may be able to use `max-age` or `s-maxage` to implement pseudo-backoff from the CDN. For us at least, those 'negative hits' would be cached with a short (seconds by default) TTL to act as a dampener in failure scenarios. Ensure that your client can handle/recover from the 503 as well; I'd expect the CDN to return those all the way through in the response.
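A sketch of both halves -- an origin that marks its 503s briefly cacheable so the CDN can absorb retries, and a client that backs off with jitter. Header values and the retry policy are illustrative, not any particular CDN's semantics:

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// originHandler sheds load with a briefly cacheable 503.
func originHandler(w http.ResponseWriter, r *http.Request) {
	if overloaded() {
		w.Header().Set("Cache-Control", "public, s-maxage=2") // let the CDN absorb retries for ~2s
		w.Header().Set("Retry-After", "2")
		http.Error(w, "try again shortly", http.StatusServiceUnavailable)
		return
	}
	fmt.Fprintln(w, "asset bytes")
}

func overloaded() bool { return false } // placeholder saturation check

// fetchWithBackoff retries 503s with jittered exponential backoff.
func fetchWithBackoff(url string) (*http.Response, error) {
	delay := 500 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusServiceUnavailable {
			return resp, nil
		}
		resp.Body.Close()
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jitter
		delay *= 2
	}
	return http.Get(url) // last try; caller handles whatever comes back
}

func main() {
	http.HandleFunc("/", originHandler)
	go http.ListenAndServe(":9000", nil)
	time.Sleep(100 * time.Millisecond) // give the demo server a moment to start

	resp, err := fetchWithBackoff("http://127.0.0.1:9000/patch/0001.bin")
	if err == nil {
		fmt.Println("got:", resp.Status)
	}
}
```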
by donavanm
1/13/2025 at 3:12:43 AM
> Otherwise ad hoc/JIT is definitely best effort, though we did a few things to help

I should also give a sense of scale here. Hundreds of GB/s to multi-TB/s of throughput for a single customer was pretty normal a decade ago. CDNs, classically, are also biased towards latency & throughput. Once you have millions of client requests per second and are pushing that kind of volume, it's kind of expected/implied that the origin is capable of meeting the demand necessary to maximize that throughput.
While cost-efficiency-maximizing CDNs _were_ a thing, they kind of died out with Red Swoosh AFAIK. We repeatedly investigated 'follow the moon' use cases to take advantage of the diurnal cycle. Outside of a handful of game companies there wasn't any real interest, and the price/revenue wasn't worth investing in compared to other priorities. The market wanted better performance, not minimal costs, in the 2000-10s.
by donavanm
1/13/2025 at 3:36:28 PM
I have always found it remarkable how well Warframe handles updates. I've seen other games do the "Update live now, everyone restart!" thing and then no one can get in due to the thundering herd.

But you close Warframe after the red text, the game updates pretty fast even if it's a massive update like 1999 was, and then you are back in the game (unless you say yes to Optimising download cache, which takes an absolute age for some reason, plsfix). Definitely a pretty amazing engineering achievement.
by gsck
1/13/2025 at 12:05:15 PM
I remember I liked the Fastly API because they seemed to offer preheating, but this was a long time ago, and it was perhaps not sufficient for your needs.
by robertlagrant
1/13/2025 at 12:18:55 AM
Really one of those “has anyone who built this tried using it for its intended purpose?” things. Not having a carefully considered cache warming solution* is like… if someone built a CDN based on a description someone gave them, instead of actually understanding the problem a CDN sets out to solve.

* EDIT: actually, any solution that at least attempts to mitigate a thundering herd. I am at least somewhat empathetic to the “indiscriminately allowing pre-warming destroys the shared cache” viewpoint. But there are still plenty of things that can be done!
by bolognafairy
1/13/2025 at 2:43:16 AM
The easiest solution to the pre-warming problem is to charge quite a bit for it. Then only those who really need it will pay (or you’ll collect more money to build out the cache).
by bombcar