3/8/2026 at 2:02:00 AM
I just ran some massive tests on our own CI. I use AMD Turin for this on GCP, which was noted as one of the fastest in the article.
The most insane part here is that the AMD EPYC 4565P can beat the Turin chips used by the cloud providers by as much as 2x in single-core.
Our tests took 2 minutes on GCP, 1 minute flat on the 4565P, with its boost to 5.1 GHz holding steady vs only 4.1 GHz on the GCP machines.
GCP charges $130 a month for 8 vCPUs. ALSO, this is for SPOT instances that can be killed at any moment.
My 4565P is a $500 CPU... 32 vCPUs... racked in a datacenter. The machine cost under $2k.
I am trying hard to convince more people to rack their own hardware, especially for CI actions. With the cloud provider charging $130/mo for 3x fewer vCPUs, you break even in a couple of months; it doesn't matter if the machine dies a few months later. On top of that, you're getting fully dedicated hardware and 2x the perf. Anyways... glad to see I chose the right CPU type for gcloud, even though nothing comes close to the cost/perf of self-racking.
by zackify
3/8/2026 at 6:54:01 AM
Hetzner charges between €10 and €48 for an 8 vCPU setup, depending on how many other users you're happy to share with.
For €104/mo you can get a 16-core Ryzen 9 7950X3D (basically identical to your 4565P) w/ 128GB DDR5 and 2x 2TB PCIe Gen4 SSDs.
That's not to say you're wrong about dedicated being much better value than a VPS on a performance-per-dollar basis, but the markup that the European companies charge is much, much lower than what they'd charge in the US.
In this instance you're looking at a ~17-month payback period even ignoring colo fees. Assuming the ~$100 colo fee that a sibling comment suggested, you're looking at closer to 8 years.
by AussieWog93
3/8/2026 at 7:31:46 AM
Great points. If we're going to talk about dedicated servers and long lock-in contracts, you have to look at the equivalent prices for hosted alternatives.
It's fun to start thinking about building your own server and putting it in a rack, but there's always a lot of tortured math comparing it to completely different cloud-hosted solutions.
One of the great things about cloud instances is that I can scale them up or down with the load without being locked into some hardware I purchased. For products I’ve worked on that have activity curves that follow day-night cycles or spike on holidays, this has been amazing. In some cases we could auto scale down at night and then auto scale back up during the day. As the user base grows we can easily switch to larger instances. We can also geographically distribute servers and provide lower latency.
There is a long list of benefits that are omitted when people make arguments based solely on monthly cost numbers. If we’re going to talk about long term dedicated server contracts we should at least price against similar options from companies like Hetzner.
by Aurornis
3/8/2026 at 9:26:24 AM
> One of the great things about cloud instances is that I can scale them up or down with the load without being locked into some hardware I purchased. For products I've worked on that have activity curves that follow day-night cycles or spike on holidays, this has been amazing. In some cases we could auto scale down at night and then auto scale back up during the day.
At work we have this day/night cycle. But for some reason we're married to AWS. If we provisioned a bunch of servers 24/7/365 at Hetzner or such to cover the peaks with some margin, it would still be cheaper by a notable margin. Sure, 90% of them would twiddle their thumbs from 10 PM to 10 AM. So what?
Sure, if your clients are completely unpredictable and you'll see 100x traffic without notice, the cloud is great.
But how many companies are actually in that kind of situation? Looking back over a year or two, we're quite reliably able to predict when we'll have more visitors and how many more compared to baseline. We could just adjust the headroom to be able to take in those spikes. And I suppose if you want to save the environment, you could just turn off the Hetzner servers while they sit unused.
by vladvasiliu
3/8/2026 at 12:07:22 PM
I've run multiplayer game servers that follow this pattern. A good rule of thumb is to cover 75% of your peak load with your cheaper, steady-state pre-allocated machines and burst for the last 25%. It really is that expensive to do.
by maccard
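The 75/25 rule of thumb above can be made concrete with a toy cost model. All rates, the core counts, and the day/night load profile below are made-up illustrative numbers, not figures from the thread:

```python
def monthly_cost(peak_cores, base_fraction, load_profile,
                 reserved_rate=0.05, burst_rate=0.20):
    """Cost of covering a load curve with reserved capacity plus on-demand burst.

    load_profile: per-hour utilisation as a fraction of peak.
    reserved_rate/burst_rate: hypothetical per-core-hour prices.
    """
    base = peak_cores * base_fraction
    cost = base * reserved_rate * len(load_profile)  # reserved capacity runs 24/7
    for u in load_profile:
        # Anything above the reserved baseline is served by pricier burst capacity.
        cost += max(0.0, u * peak_cores - base) * burst_rate
    return cost

# A day/night cycle: 12 hours at full load, 12 hours at 30%, over 30 days.
profile = ([1.0] * 12 + [0.3] * 12) * 30
print(monthly_cost(100, 0.00, profile))  # all burst: ~9360
print(monthly_cost(100, 0.75, profile))  # 75% reserved, 25% burst: ~4500
```

With a 4x price gap between burst and reserved capacity, pre-allocating 75% of peak roughly halves the bill for this profile, which matches the spirit of the rule of thumb.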
3/8/2026 at 5:52:44 PM
If you can reasonably estimate your usage and the peak total usage is less than ~5x the minimum, it still makes sense to just rent hardware at Hetzner.
You even have the option of managed racks, where you rent one or more racks but the servers are still provided by Hetzner, so you don't have to handle procurement, logistics, or replacements.
by ragall
3/8/2026 at 5:47:14 PM
I'd be terrified to run anything other than a classic web server on Hetzner; I've heard too many stories of them arbitrarily terminating workloads they didn't understand.
by rozenmd
3/8/2026 at 9:42:38 PM
really? like what? Maybe crypto mining?
by gizzlon
3/9/2026 at 4:18:59 AM
I've gotten notices from Hetzner for hosting an IPFS node; apparently it does some local network discovery by default, which looks like malware if you squint hard enough.
3/9/2026 at 7:32:40 AM
apparently too many outbound requests is enough to get on their radar
by rozenmd
3/8/2026 at 2:05:32 AM
> My 4565p is a $500 cpu... 32 vcpus... racked in a datacenter. The machine cost under 2k.
> The cloud provider charging $140 / mo for 3x less vcpus you break even in a couple months, it doesn't matter if it dies a few months later
How do you calculate break even in a couple months if the machine costs $2,000 and you still have to pay colo fees?
If your colo fees were $100/month, you wouldn't break even for over 4 years. You could try to find cheaper colocation, but even with free colocation your example doesn't break even for over a year.
by Aurornis
3/8/2026 at 2:07:15 AM
The $140/mo is for 3x fewer vCPUs, so it's $420/mo in savings if you use all those same cores. Sorry for the poor comparison wording there. In a few months you're already up to $1300+; by 6 months you've already paid for the machine.
Colo fees are cheap, even if you need more than just 1U. Even with a $50-100 fee you easily get way more performance and come out ahead within a year.
by zackify
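The two break-even framings being argued in this subthread can be put side by side in a quick sketch. The dollar figures are taken from the comments above; the `payback_months` helper is my own naming, not anything from the thread:

```python
import math

def payback_months(hw_cost, cloud_monthly, colo_monthly=0.0):
    """Months until cumulative cloud savings cover the hardware cost."""
    savings = cloud_monthly - colo_monthly
    if savings <= 0:
        return math.inf  # colo costs as much as the cloud: never pays back
    return math.ceil(hw_cost / savings)

# Aurornis's framing: compare against the 8-vCPU instance actually rented
# ($140/mo), with a $100/mo colo fee.
print(payback_months(2000, 140, 100))  # 50 months -- "over 4 years"

# zackify's framing: compare against 3x that instance (matching the 32 vCPUs),
# first ignoring colo, then adding the $100/mo fee back in.
print(payback_months(2000, 420, 0))    # 5 months
print(payback_months(2000, 420, 100))  # 7 months
```

The disagreement is almost entirely about which cloud bill the $2k machine is compared against, not about the arithmetic itself.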
3/8/2026 at 2:22:16 AM
> by 6 months already paid the machine.
You originally said "a couple months" but now it's 6 months, with an assumption of $0 colocation fees, which isn't realistic.
In my experience situations rarely call for precisely 32 cores for a fixed period of 3 years to support calculations like this anyway. We start with a small set of cloud servers and scale them up as traffic grows. Today’s tooling makes it easy to auto scale throughout the day, even.
When trying to rack a server everyone aims higher because it sucks to start running into limits unexpectedly and be stuck on a server that wasn’t big enough to handle the load. Then you have to start considering having at least two servers in case one starts failing.
Racking a single self-built server is great for hobby projects but it’s always more complicated for serving real business workloads.
by Aurornis
3/8/2026 at 2:38:14 AM
Don't nit-pick the "couple". It was used casually, to mean a not-terribly-long time. So the 2-6 spread, while technically big, is still just a trifle. And while I'm nit-picking: upthread is talking about a limited box for CI and you're talking about scaling up real business workloads. That's just like the difference between 2 and 6. Give it a rest.
Everyone: run your scenarios and expectations in a spreadsheet and then use real data to run your CBA. Your case will be unique(ish), so make the case for your situation.
by edoceo
3/8/2026 at 7:24:25 AM
> So the 2-6 spread, while technically big, is still just a trifle.
I think you're misreading. Even the 6-month figure was based on an invalid assumption of $0 colocation fees. Add in even cheap colocation fees and it's pushed out even further.
That's not really a nitpick when the claims were based on impossible math. It's more of a motte-and-bailey, where they come in with a "couple of months" claim that sounds awesome on the surface but fall back to a completely different number if anyone looks at the details.
by Aurornis
3/8/2026 at 10:02:40 AM
It's even dumber than that.
Let's not forget that if even three engineers work on this migration for only a week, your cost is now tens of thousands for this couple-hundred-euro cost saving.
(assuming avg all-in engineer costs in europe)
Mostly, it makes no sense to optimise infrastructure for cost; it does make sense to make it faster, since almost all your spend is on engineers.
Spending thousands to save hundreds is not a healthy business.
by cyberpunk
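The engineering-cost arithmetic above can be put in a back-of-the-envelope sketch. The per-week rate is an assumption of mine (roughly a European all-in rate), not a figure from the thread:

```python
# Migration labour cost vs infrastructure savings, back of the envelope.
engineers = 3
weeks = 1
all_in_cost_per_week = 4000  # assumed ~€200k/yr all-in => ~€4k/week

migration_cost = engineers * weeks * all_in_cost_per_week  # €12,000 up front
monthly_saving = 300  # "a couple hundred euros" of infra saved per month

months_to_recoup = migration_cost / monthly_saving
print(months_to_recoup)  # 40.0 months before the migration itself pays off
```

Under these assumptions the migration labour alone takes over three years to recoup, which is the "spending thousands to save hundreds" point in numbers.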
3/8/2026 at 3:26:14 AM
Yeah, thanks for that. I just meant a very fast return.
by zackify
3/8/2026 at 6:48:13 AM
You can take a hybrid approach and use the rack for base capacity, cloud for scaling.by jjmarr
3/8/2026 at 11:44:26 AM
Minor point, but I have seen colocation costs around $30-40 in some locations as well; $100 is usually reserved for, say, colocating within Hetzner (iirc).
Just as a rule of thumb: if your servers cost more than $1k per month, or maybe even $500, and you're okay with colocation and everything, I've found it breaks even (against even the cheapest options like Hetzner; for GCP or anything that charges significantly more, it may warrant a deeper analysis of whether colocation, dedicated servers, or, for short burstable units, maybe even a VPS is better).
by Imustaskforhelp
3/8/2026 at 2:55:52 AM
I used to run a site that compares prices[0]. Not only is the ecosystem pull to the cloud strong, but many developers today look at bare metal as downright daunting.
Not sure where that fear comes from. Cloud challenges can be as or more complex than bare-metal ones.
by oDot
3/8/2026 at 4:34:48 AM
> Cloud challenges can be as or more complex than bare metal ones.
Big +1 to this. For what I thought was a modest-sized project, it feels like an NP-hard problem coordinating with gcloud account reps to figure out which regions have both enough Hyperdisk capacity and compute capacity. A far cry from being able to just "download more RAM" with ease.
The cloud ain't magic folks, it's just someone else's servers.
(All that said... still way easier than if I needed to procure our own hardware and colocate it. The project is complete. Just delayed more than I expected.)
by hamandcheese
3/9/2026 at 5:32:28 AM
The cloud is magic. If it is down, nobody is in trouble. You just throw your hands in the air and say "oh, Azure / AWS / gcloud is down."
But if you are the admin of a physical machine, you are in deep trouble.
by whatever1
3/8/2026 at 6:46:00 PM
> The cloud ain't magic folks, it's just someone else's servers.
The cloud is where the entire responsibility for those servers lives elsewhere.
If you're just going to run a VM, sure. But when you're running a managed DB with some managed compute, the cost might be high in comparison; you've just offloaded the whole infra-management responsibility. That's their value add.
by hvb2
3/9/2026 at 12:46:55 AM
But any serious deployment of "cloud" infrastructure still needs management; you're just forcing the people doing it to use the small number of knobs the cloud provider makes available, rather than giving them full access to the software itself.
by stephenr
3/9/2026 at 11:39:35 AM
Not sure what you mean by a serious deployment, but a lot of companies will be perfectly fine with some compute, object storage, and a managed RDBMS.
Will that be more expensive than running it yourself? Absolutely. Does it allow teams to function and deliver independently? Yes. As an org, you can prioritize cost or something else.
by hvb2
3/9/2026 at 12:07:33 PM
> a lot of companies will be perfectly fine with, some compute, object storage and a managed rdbms.
Right, and who or what causes those services to be provisioned, to be configured, etc.?
by stephenr
3/9/2026 at 4:24:03 PM
Infrastructure-as-code tools like Terraform, etc.? That's trivial compared to configuring a database for production use, I would say.
You don't need to be DevOps to write that stuff; it's really simple.
by hvb2
3/9/2026 at 4:30:59 PM
And who is writing the Terraform configs?
by stephenr
3/8/2026 at 5:36:43 AM
> Not sure where that fear comes from.
Probably because most developers these days have not known a world without using cloud providers, with AWS being 20 years old now.
by satvikpendem
3/8/2026 at 5:53:04 AM
Racking your own hardware doesn't get you web UIs and APIs out of the box. At least it didn't 2 decades ago.
by jmathai
3/8/2026 at 5:58:18 AM
Sure, but now it does (via the many OSS PaaS options), so the calculus must also change.
by satvikpendem
3/8/2026 at 10:12:42 AM
Which OSS PaaS options are noteworthy? Or do you mean something like Kubernetes?
by spockz
3/8/2026 at 11:46:40 AM
Coolify is usually loved by the community.
Dokploy is another good one.
Kubero seems nice for more kubernetes oriented tasks.
But I feel like if someone has a single piece of hardware, as the OP does, Kubernetes might not be of much help, and Coolify/Dokploy are so much simpler in that regard.
by Imustaskforhelp
3/8/2026 at 6:49:36 PM
Thanks, I will look into those.
I suppose Kubernetes with the right operators installed and the right node labels applied could almost work as a self-service control plane. But then VMs have to run in KubeVirt. There is Crossplane, but that needs another IaaS to do its thing.
by spockz
3/8/2026 at 8:22:30 AM
Partitioning a server! Omg, lol.
It's funny, bc AWS did not start this line of business. What they did do is make it possible to pay by the hour. The ephemeral spare compute is what they started.
Yet almost nobody understood the ephemeral part.
You might even be better off running a Mac mini on home fiber, especially for backend processing.
by jbverschoor
3/8/2026 at 4:14:29 AM
The fragmentation and friction! Comparing prices usually requires 10 open browser tabs and a spreadsheet, which is what keeps people locked into their default cloud. I built a tool to solve this called BlueDot (i.e., Earth, where all the clouds are)[0]. It's a TUI that aggregates 58,000+ server configurations across 6 clouds (including Hetzner). It lets you view side-by-side price comparisons and deploy instantly from the terminal. It makes grabbing a cheap Hetzner box just as easy as spinning up something on AWS/GCP.
by keepamovin
3/8/2026 at 11:51:42 AM
I use ServerVerify, created by jbiloh from the LowEndTalk forum, which uses YABS (Yet-Another-Bench-Script) to give details about a lot more things than usually meet the eye.
That being said, I have found getdeploying.com to be a decent starting point as well if you aren't too well versed in the low-end providers, who are quite diverse, and that comes with both costs and benefits.
Btw legendary https://vpspricetracker.com (which was one of the first websites that I personally had opened to find vps prices when I was starting out or was curious) is also created by jbiloh.
So these few websites + casually scrolling LET are enough for me nowadays to find the winner, with infinitely more customizability. I understand the point of a TUI, but the whole hosting industry has always revolved around websites, even from the start, so they are less interested in making TUIs for such projects, generally speaking. At least, that's my opinion.
by Imustaskforhelp
3/8/2026 at 2:34:22 AM
Self-racking lets you rack a bunch of gear you'd never find in VM/dedicated rentals, like consumer parts or older, still very good parts. Overclocking options are available as well if you DIY.
If you need single-threaded performance, colo is really the only way to go anyway.
We have two full racks and we're super happy with them.
by icelancer
3/8/2026 at 4:48:16 AM
Or underclocking and undervolting for even better performance-to-price/power/longevity ratios.
by Melatonic
3/8/2026 at 5:27:33 AM
For a single rack, you really don't have too many choices for power. You choose a provisioning level and pay for it; I never had anyone check how much of it I used and give me money back. Maybe things have changed, though.
by sroussey
3/8/2026 at 5:02:56 AM
No doubt. Especially for GPU inference at scale. We overclock/overvolt for training and tune way down for inference.
by icelancer
3/8/2026 at 3:08:10 AM
You can go on OVH and get a dedicated server with 384 threads and a Turin CPU for $1147 a month. You have to pay $1147 for installation, and the default config has low RAM and network speeds, but even after upgrading those it's going to be 1/5 of what it would cost on public clouds.
by vmg12
3/9/2026 at 12:49:06 PM
> The most insane part here is that the AMD EPYC 4565p can beat the turin's used on the cloud providers, by as much as 2x in the single core.
That is... hard to believe for a CPU-bound task. Do you have any open benchmark which can reproduce that?
by BeeOnRope
3/8/2026 at 3:28:56 AM
This is basically the premise of https://www.blacksmith.sh/ as far as I know, though without the need to host the hardware yourself and the potential complexity that comes with that.
by tempay
3/8/2026 at 5:23:41 AM
I had some MySQL servers racked for over a decade, and I was afraid to restart the machines. And yes, as new versions of MySQL came out, I did have to compile them myself.
One of the db servers dying would have required a next day colo visit… so I never rebooted.
by sroussey
3/8/2026 at 3:18:59 AM
"vCPUs" are a bit of a scam in my experience. You usually don't get what the hardware (according to /proc/cpuinfo) is capable of.
by ahartmetz
3/8/2026 at 10:17:56 AM
Just want to say something in defence of cloud providers:
- sometimes you need to limit the list of available CPU features to allow live migration between different hypervisors
- even if you migrate the virtual machine to the latest state-of-the-art CPU, /proc/cpuinfo won't reflect it (Linux would go crazy if you tried to switch the CPU information on the fly); the frequency boost would be measurable, though, just not via /proc/cpuinfo
by tryauuum
3/8/2026 at 8:22:56 AM
> i am trying hard to convince more people to rack themselves especially for CI actions.
What do you think the typical duty cycle is for a CI machine?
Raw performance is kind of meaningless if you aren't actually using the hardware. It's a lot of up front capex just to prove a point on a single metric.
by bob1029
3/8/2026 at 10:17:31 AM
Raw performance, in the sense of single-core performance, is still one of the most important factors for us. Yes, we have parallelised tests, in different modules, etc. But there are still many single-threaded operations on the build server. Also, especially in the cloud, IO is a bottleneck you can almost only get around by provisioning a bigger CPU.
Our CI runs smaller PR checks during the day when devs make changes. In the "downtime" we run longer/more complex tests such as resilience and performance tests. So typically a machine is utilised 20-22 hours a day.
by spockz
3/8/2026 at 2:14:11 AM
A 16-core 4565P is of course faster in max single-thread speed than a 96-core part that GCP runs at an economically optimal base clock.
A year ago I gave a talk about optimizing cloud cost efficiency, and I did a comparison of colocation vs cloud over time. You might find it interesting; linking to the relevant part: https://youtu.be/UEjMr5aUbbM?si=4QFSXKTBFJa2WrRm&t=1236
TL;DR: colocation broke even in 6 to 18 months vs on-demand and 3-year reserved cloud, respectively. But spot instances can actually be quite a bit cheaper than colocation.
You generally don't go to the cloud for the price (except if we are talking hetzner etc).
by dkechag
3/8/2026 at 7:45:51 AM
Yeah, I expected this benchmark to include hosted "metal" hardware, with the "per-instruction cost" benchmark showing how providers like Hetzner fare against classic AWS VMs. It's a bit apples to oranges, I know, but I think that's what most people comparing pure performance cost are interested in nowadays. I'm not going to migrate from AWS VMs to GCP or Hetzner VMs, but I might be open to Hetzner hosted servers for a massive enough cost reduction.
by darkwater
3/8/2026 at 9:00:09 AM
> ... but I might be open to Hetzner hosted servers instead for a massive enough cost reduction.
Don't use Hetzner for anything actually important to you. :(
by justinclift
3/8/2026 at 6:31:51 PM
A good business would send you a warning a month before your credit card expires, not after the fact.by nine_k
3/9/2026 at 1:51:05 AM
For some reason parent is using the word "expired" when they really mean "cancelled by the issuing bank".by stephenr
3/8/2026 at 10:00:36 PM
To be honest, I find it hard to believe this is common. They have been around for ages and are quite beloved by many. Maybe something went wrong in this case?
Guess I will find out; I think my cc expires soon.
Also, you can pay by bank transfer, at least for dedicated.
by gizzlon
3/9/2026 at 12:37:52 AM
> To be honest, I find it hard to believe this is common.
I agree. But it still happened, with literally no warning (I actually checked), and their support staff refused to even call me to get updated card details when I was in the middle of an actual cyclone, i.e. phone service worked, internet didn't.
Directly impacting our customers, who were extremely unhappy (to say the least).
"Fuck Hetzner!" is not nearly strong enough to convey the sentiment.
by justinclift
3/9/2026 at 1:54:04 AM
I mean, the context here is that a company stopped providing services after a bank cancelled a credit card they had been charging.
For all they know, your legitimate charges were the fraudulent charges that triggered the cancellation.
I cannot fathom why you keep using the term "expired" when that is a very different scenario to "cancelled by the issuing bank".
by stephenr
3/9/2026 at 10:40:50 AM
> For all they know, your legitimate charges were the fraudulent charges that triggered the cancellation.
Literally years of paying the bills. ;)
> I cannot fathom why you keep using the term "expired" when that is a very different scenario to "cancelled by the issuing bank".
That seems like a you problem. No worries, hope your day is going ok.
by justinclift
3/9/2026 at 11:06:51 AM
> That seems like a you problem.
I dunno man, it wasn't me having a breakdown in public because I forgot to update a biller after I cancelled my card.
by stephenr
3/8/2026 at 6:12:04 AM
Both Datapacket & OVH have the 4565P.
This proc is a hidden gem.
For most workloads it’s not just the most performant, but also the best bang-for-buck.
by alberth
3/8/2026 at 6:18:04 AM
I don't see the 4565P at Datapacket or OVH. But that doesn't invalidate your comment.
by nerdsniper
3/9/2026 at 3:12:22 AM
They have the higher-cache variant (4585PX, same clock speed & core count).
by alberth
3/8/2026 at 3:31:28 AM
Big cloud is ludicrously expensive. It's truly amazing. Bandwidth is even worse. It's like a 10000X markup.
by api
3/8/2026 at 5:31:27 AM
It's wild that nobody knows just how cheap bandwidth really is. AWS pulled one over on people, and it's like the movie studios still demanding 10% off the top for VHS distribution. Today.
by sroussey
3/8/2026 at 8:26:52 AM
That's the same with every industry:
Make things look like a complicated black box. Make sure it feels scary to roll your own. Hide the core technical skills behind abstractions.
by jbverschoor
3/8/2026 at 3:09:52 PM
Cloud has done a truly, epically awesome job at this. People are now afraid to set stuff up.
by api