Discussion
Migrating from DigitalOcean to Hetzner: From $1,432 to $233/month With Zero Downtime
orsorna: I always appreciate savings posts, but is $14k USD a year really make or break for a Turkish business? I would not know.
xhkkffbf: It's tough to work with these publicly traded companies. They need to boost prices to show revenue growth. At some point, they become a bad deal. I've already migrated from DO. Not because of service or quality, but solely because of price.
nixpulvis: We need more competition across the board. These savings are insane and DO should be sweating, right?
mrweasel: It's a nice chunk of change, which you could use for other purposes. It might not make or break the company, but it could pay for something that actually generates business.
JSR_FDED: > Cloud providers are expensive for steady-state workloads.

Asking the obvious question: why not your own server in a colo?
vb-8448: I guess Hetzner is basically "your server in a colo".
pennomi: I saved about $1200 a year by moving from AWS to Hetzner. Can’t recommend it enough. AWS has kind of become a scam.
steve1977: Hetzner Cloud or their VPS offerings?
littlestymaar: I suspect with that money you could get a full-time customer support person for your business. Now think about it: what's creating more value for your customers, having your infra on DigitalOcean or having better customer support?
Doohickey-d: What are you doing for DB backups? Do you have a replica/standby? Or is it just hourly or something like that?

Because with a single-server setup like this, I'd imagine that hardware (e.g. SSD) failure brings down your app, and in the case of SSD failure, you then have hours or days of downtime while you set everything up again.
infocollector: Where did you migrate out to?
xhkkffbf: Hetzner. Vultr.
largbae: The migration sharing is admirable and useful teaching, thank you!

I see the DigitalOcean vs Hetzner comparison as a tradeoff that we make in different domains all day long, similar to opening your DoorDash or UberEats instead of making your own dinner (and the cost ratio is similar too).

I work in all 3 major clouds, on-prem, the works. I still head to the DigitalOcean console for bits-and-pieces type work or proof-of-concept testing. Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
jonahs197: I use OVH btw.
BonoboIO: When I hear OVH I immediately think about their burning datacenters …
traceroute66: > why not your own server in a colo?

Have you seen what the LLM crowd have done to server prices?
traceroute66: > Because with a single-server setup like this, I'd imagine that hardware ...

Yeah. This blog post reads like it was written by someone who didn't think things through and just focused on hyper-aggressive cost-cutting.

I bet their DigitalOcean VM did live migrations and supported snapshots. You can get that at Hetzner, but only in their cloud product.

You absolutely will not get that in Hetzner bare-metal. If your HD or other component dies, it dies. Hetzner will replace the HD, but it's up to you to restore from scratch. Hetzner are very clear about this in multiple places.
nixpulvis: That's not the point. You should be asking why DO is so much more expensive.

Not everyone likes wasting money.
dllrr: Guess I've been enjoying it for so long, I feel so stupid. Thanks for this.
kro: Hetzner normally advertises their hardware servers as 2x 1 TB SSD, because it's strongly recommended to run them in software RAID 1 for a net 1 TB.

Once the first SSD fails after some years, and your monitoring catches that, you can either migrate to a new box, find another intermediate solution/replica, or let them hot-swap it while the other drive takes the load.

Of course, going to physical servers loses the redundancy of the cloud, but that's something you need to price in when looking at the savings and deciding your risk model.

And yes, running this without at least daily snapshotting/backup to remote storage is insane - that applies to cloud as well, albeit easier to set up there.
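The monitoring step mentioned above can be as simple as a cron job that inspects `/proc/mdstat`, where Linux software RAID reports `[UU]` for a healthy two-disk mirror and `[U_]` once a member drops out. A minimal sketch (not tied to the OP's setup; the alerting side is left out):

```shell
#!/bin/sh
# Minimal degraded-array check for Linux software RAID (md).
# /proc/mdstat shows "[2/2] [UU]" when a mirror is healthy and
# "[2/1] [U_]" when one member has failed or been removed.
check_mdstat() {
    # Reads mdstat-formatted text on stdin; prints DEGRADED or OK.
    # The status brackets contain only U (up) and _ (missing) characters,
    # so any underscore inside such a bracket pair means a missing member.
    if grep -qE '\[[U_]*_[U_]*\]'; then
        echo DEGRADED
    else
        echo OK
    fi
}

# From cron you would run: check_mdstat < /proc/mdstat  (and alert on DEGRADED).
# Demo against a sample degraded mdstat snippet:
printf 'md0 : active raid1 sdb1[1] sda1[0]\n  [2/1] [U_]\n' | check_mdstat
# prints DEGRADED
```

mdadm also ships its own monitor mode (`mdadm --monitor`), which is the more idiomatic choice on a long-lived box; the sketch above just shows how little "monitoring" needs to mean for this failure mode.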
klodolph: “Your own server in a colo” means going to the colo to swap RAM or an SSD when something goes wrong. When you rent a server instead, the benefit is that the provider has spare parts on hand and staff to swap parts out.
bingo-bongo: The comparison is somewhat skewed, since they went from an (expensive) virtual server to a cheaper dedicated server (hardware).

One of the new risks is that if anything critical happens with the hardware, network, switch etc., then everything is down until someone at Hetzner goes and fixes it. With a virtual server it'll just get started on a different host straight away. Usually hypervisors also have 2 or more network connections etc.

And hopefully they also got some backups set up.

It's still a huge amount of savings and I'd probably do the same if I were in their shoes, but there are tradeoffs when going from virtual to dedicated hardware.
echelon: AWS has always been a scam.

It's worse than Oracle, and they don't even use lawyerly contracts. The technology itself is the tendrils.
perbu: You have to deal with a lot more stuff. You have to order/pay for a server (capex), mount it somewhere, wire up lights-out management and recovery, and do a few more tasks that the provider has already done.

Then, say the motherboard gives up: you have to do quite a bit of work to get it replaced, and you might be down for hours or maybe days.

For a single server I don't think it makes sense. For 8 servers, maybe. Depends on the opportunity cost.
izacus: You can get (a bit less than) a full employee for that. And that's a better ROI than just throwing it away.
OutOfHere: Didn't Hetzner prices increase 30-40% recently? See https://news.ycombinator.com/item?id=47120145

As such, I doubt the noted price reduction is reproducible.
apitman: I wish we had something like Hetzner dedicated near us-east-1.

They do offer VPS in the US and the value is great. I was seriously looking at moving our academic lab over from AWS, but server availability was bad enough to scare me off. They didn't have the instances we needed reliably. Really hoping that calms down.
layer8: Take a look at https://www.google.com/finance/quote/USD-TRY?window=MAX
subscribed: Scam? You mostly get what you pay for.

Sure, it cost me £6/mo to serve ONE lambda on AWS (and perhaps 500 requests per month). Sure it was awesome and "proper". But crazy expensive. I host it now (and 5 similar things) for free on Cloudflare.

But if you need what AWS provides, you'll get that. And that means sometimes it's not the most cost-effective place.
onetimeusename: AWS only requires a card from me. I tried registering at Hetzner and they wanted a picture of my passport.
Yeroc: Have you done this yourself? If you haven't I think you'd discover server hardware is actually shockingly reliable. You could go years without needing to physically touch anything on a single machine. I find that people who are used to cloud assume stuff is breaking all the time. That's true at scale, but when you have a handful of machines you can go a very long time between failures.
antirez: I moved two servers, one from Linode and the other from DO to Hetzner a few months ago, with similar savings. The best part was that the two servers had tens of different sites running, implemented in different languages, with obsolete libraries, MySQL and Redis instances. A total mess. Well: Claude Code migrated it all, sometimes rewriting parts when the libraries were no longer available. Today complex migrations are much simpler to perform, which, I believe, will increase mobility across providers a lot.
rustyhancock: Wow, a Claude add embedded into a Hetzner add.

How deep does this go?
antirez: "ad", with a single "d".
treesknees: For the price, they could buy an exact replica bare metal server and still save money.
traceroute66: > they could

They could, but they didn't and instead they wrote that blog post which, even being generous is still kinda hard to avoid describing as misleading.

I would not have written the post I did if they had presented a multi-node bare-metal cluster or whatever more realistic config.
locknitpicker: > They could, but they didn't and instead they wrote that blog post which, even being generous is still kinda hard to avoid describing as misleading.

What do you feel was misleading?
Someone1234: They could, but then that exchanges cost savings for complexity. You now need to keep them in sync, and it's double the cost.

I agree with the other poster: this is fine for a toy site or sites, but low-quality manual DR isn't good for production.
OliverGuy: What's the HA plan?

Sounds like from the requirement to live migrate you can't really afford planned downtime, so why are you risking unplanned downtime?
adamcharnock: This is something we've[0] done a number of times for customers coming from various cloud providers. In our case we move customers onto a multi-server (sometimes multi-AZ) deployment in Hetzner, using Kubernetes to distribute workloads across servers and provide HA. Kubernetes is likely a lot for a single-node deployment such as the OP's, but it makes a lot more sense as soon as multiple nodes are involved.

For backups we use both Velero and application-level backup for critical workloads (i.e. Postgres WAL backups for PITR). We also ensure all state is on at least two nodes for HA.

We also find bare metal to be a lot more performant in general. Compared to AWS we typically see service response times halve. It is not that virtualisation inherently has that much overhead, rather it is everything else. E.g. bare metal offers:

- Reduced disk latency (NVMe vs network block storage)

- Reduced network latency (we run dedicated fibre, so inter-AZ is about 1/10th the latency)

- Less cache contention, etc. [1]

Anyway, if you want to chat about this sometime just ping me an email: adam@ company domain.

[0] https://lithus.eu

[1] I wrote more on this 6 months ago: https://news.ycombinator.com/item?id=45615867
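The WAL-based PITR mentioned above comes down to PostgreSQL's continuous archiving. A minimal sketch of the relevant `postgresql.conf` settings, assuming a reachable backup host as placeholder; a real setup would normally use a tool like WAL-G or pgBackRest rather than a raw copy command, for compression, encryption, and retention:

```ini
# postgresql.conf - continuous WAL archiving for point-in-time recovery.
wal_level = replica      # WAL carries enough detail for PITR and replication
archive_mode = on
# Ship each completed WAL segment off-box; %p is the segment's path, %f its
# file name. Placeholder command - substitute wal-g / pgbackrest in practice.
archive_command = 'rsync -a %p backup-host:/pg-wal-archive/%f'
```

PITR then needs a periodic base backup to replay the archived WAL against (e.g. `pg_basebackup -D /backups/base -Ft -z`), which is why a WAL archive alone is not a complete backup strategy.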
aungpaing: 100
0123456789ABCDE: without looking at either the article or the pricing pages on any of the relevant providers, just what's in the title of this thread and your comment:

> $1,432 to $233

a difference of ~5/6 in price does not materially change the decision to move between providers, even with a 40% price increase
faangguyindia: It's not a scam, it's like a casino house: everything is designed to pull in your money and make you believe that you are benefiting from it.
andai: https://xkcd.com/2948/
faangguyindia: The easiest I've done is MongoDB - replication, sharding, failover, and all that is super easy.

Recently, I did it in PostgreSQL using pg_auto_failover. I have 1 monitor node, 1 primary, and 1 replica. Surprisingly, once you get the hang of PostgreSQL configuration and its gotchas, it's also very easy to replicate. I'm guessing MySQL is even easier than PostgreSQL for this.

I also achieved zero-downtime migration.
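For reference, the monitor/primary/replica layout described above maps onto pg_auto_failover's `pg_autoctl` CLI roughly as follows. This is a sketch, not a runnable recipe: hostnames and data directories are placeholders, `--auth trust` is only for illustration, and flags should be checked against the docs for your version:

```shell
# On the monitor node:
pg_autoctl create monitor --hostname monitor.internal \
    --pgdata /var/lib/pgaf/monitor --auth trust --ssl-self-signed

# On each data node. The first node to register becomes the primary;
# the second is automatically initialised as a streaming replica.
pg_autoctl create postgres --hostname node1.internal \
    --pgdata /var/lib/pgaf/data --auth trust --ssl-self-signed \
    --monitor 'postgres://autoctl_node@monitor.internal:5432/pg_auto_failover'
pg_autoctl run    # keeper process, usually supervised by systemd

# Inspect cluster state and exercise a controlled failover:
pg_autoctl show state --pgdata /var/lib/pgaf/data
pg_autoctl perform switchover --pgdata /var/lib/pgaf/monitor
```

The monitor is what removes the usual failover gotchas: it tracks node health and orchestrates promotion, so the data nodes never have to decide on their own who the primary is.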
acdha: Replication is not a backup. It helps for migrations or clean single node failures but not human error, corruption, or an attack.
faangguyindia: You can just run 3 dedicated servers and design your app so that it never fails.
missedthecue: I moved from Hetzner to DO because my Hetzner IPs kept getting spoofed and then Hetzner would shut down my servers for "abuse". This hasn't happened once on DO, and I'm happy to pay a little more.
spaniard89277: Scaleway, OVH, Exoscale, Clouding, Upcloud...
uxcolumbo: That's nuts. Why do they want a pic of your passport?

Absolutely no to this - reason enough to go with AWS or alternatives. And why are people willingly giving it to a hosting provider? You're unnecessarily exposing yourself to identity theft if they get compromised.
linsomniac: For over a decade I ran a small-scale dedicated and virtual hosting business (hundreds of machines) and the sort of setup you describe works very well. Software RAID across 2 devices, redundant power supplies, backups. We never had a significant data loss event that I recall (significant = beyond a user accidentally removing files).

For quite a while we ran single power supplies because they were pretty high quality, but then Supermicro went through a ~6 month period where basically every power supply in machines we got during that time failed within a year, and replacements were hard to come by (because of high demand, because of failures), and we switched to redundant. This was all cost-savings trade-offs. When running single power supplies, we had in-rack automatic transfer switches, so that the single power supplies could survive A- or B-side power failure.

But, and this is important, we were monitoring the systems for drive failures and replacing them within 24 hours. Ditto for power supplies. If you don't monitor your hardware for failure, redundancy doesn't mean anything.
m00dy: yeah, everything is about to be repriced.
ozgrakkurt: I don't think you can get someone decent in a tech related job for that much money anywhere.
phamilton: Given the premise that zero-day exploits are going to be frequent going forward, I feel like there is a new standard for secure deployment. Namely, all remote access (including serving HTTP) must be managed by a major player big enough to be part of private disclosure (e.g. Project Glasswing).

That doesn't mean we have to use AWS et al. for everything, but some sort of zero-trust solution actively maintained by one of them seems like the right path. For example, I've started running on Hetzner with Cloudflare Tunnels.

Anyone else doing something similar?
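The Hetzner-plus-Tunnels setup above needs only a small `cloudflared` config once a tunnel exists. A minimal sketch, assuming a tunnel named `app` was already created with `cloudflared tunnel create app`; the hostname, port, and credentials path are placeholders:

```yaml
# /etc/cloudflared/config.yml - expose a local service via a Cloudflare Tunnel
tunnel: app
credentials-file: /etc/cloudflared/app.json

ingress:
  # Public hostname routed over the outbound tunnel to the local app;
  # no inbound ports need to be open on the Hetzner box itself.
  - hostname: app.example.com
    service: http://localhost:8080
  # cloudflared requires a catch-all rule as the final ingress entry.
  - service: http_status:404
```

Because the tunnel is an outbound connection from the server to Cloudflare's edge, the origin can firewall off all inbound traffic, which is what makes this a zero-trust-style posture rather than just a reverse proxy.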
locknitpicker: > For example, I've started running on Hetzner with Cloudflare Tunnels.

How much latency does this add?
igtztorrero: Try OVH Canada, they have good prices and service.
rawoke083600: Super happy customer for about 5 years now. And I say it every time they come up: their cloud UX is brilliant and simple! Compared to the big ones out there.
acdha: That’s like saying Mercedes is a scam because you’re fine with a Honda Civic. It’s a totally legitimate preference but not being in the target market doesn’t make something a scam.
tmpz22: It's certainly a choice to accuse antirez, of all people.
faangguyindia: I've built my own Fly.io-style deploy, where I just use DigitalOcean's API to roll out my service.

I hardly ever visit their website; everything from the terminal.
dessimus: Depending upon the nature of the data, they may need to keep it within the US.
utopiah: Migrated from OVH to Hetzner last winter too. Zero downtime since, rolling backups running fine, and a lower bill too.
sph: I have just seen with my own eyes Claude astroturfing on a gamedev subreddit from a botting account that was picked up by Google, so I could see a few of their other comments. This account was going around development subs complaining about how good Claude's latest model is and how awful it is being afraid of losing one's job to AI.

I know your comment is tongue-in-cheek and the poster here is kinda known, but this kind of astroturfing is a new low and it's everywhere on forums such as these.
subscribed: It's also expensive (redundant server hardware, xconnect, power, firewall(s), PSU access, smart hands, sysadmin).But it's indeed cheaper with high, sustained workloads.
acdha: They have to operate within the laws of the countries they’re physically located in. Those countries want to know that they’re not hosting illegal content, providing services to crime rings, Russia or North Korea, etc.If Hetzner allows you to host something and you use it for illegal acts, they aren’t going to jail to shield you for €10/month.
raphinou: Am I missing something? I'm genuinely surprised it was not deployed from the start on a dedicated server. Don't you do a cost analysis before deploying? And if the cost analysis was OK at initial deploy, why wait for such a difference in cost before migrating? How much money goes wasted in such situations?
tannhaeuser: Not every fscking story has to be about AI.
nine_k: There are two interesting parts in the post.

One is about all the steps of zero-downtime migration. It's widely applicable.

The other is the decision to replace a cloud instance with bare metal. It saves a lot in costs, but the loss of fast failover and data backups has to be priced in.

If I were doing this, I would run a hot spare for an extra $200, and switch the primary every few days to guarantee that both copies work well and that the switchover is easy. It would be a relatively low price for a massive reduction of the risk of a catastrophic failure.
ianberdin: When you find gold, why tell everyone where it is? Silent happiness keeps the benefits :)
therealmarv: That's a trend which is more and more common nowadays. I wish the industry would adopt more zero-knowledge methods in this regard. They exist and are mathematically proven, but it seems there is no real adoption.

- OpenAI wants my passport when topping up 100 USD

- Bolt recently wanted my passport number to use their service

- Anthropic seems to want passports for new users too

- Soon, age restrictions in the OS or on websites

I wish there were a law (in Europe and/or the US) to minimize or forbid this kind of identity verification. I want to support companies in not allowing misuse of their platforms; at the same time, my full passport photo is not their concern, especially in B2B business, in my opinion.
pmdr: It used to be "innocent until proven/suspected guilty." Now it's more like "let's see that ID, you know, just in case..."
hnthrow0287345: It's possible no one will care much if it's down even for that long. I couldn't care less if my HOA mobile app was down even for a week for example. We don't need constant uptime for everything.
acdha: Don’t forget that integrity matters as much as availability in many applications. You might not mind if your HOA takes time to bring a server back up but you’d care a lot more if they lost the financial records or weren’t able to recover from a ransomware attack.
rdevilla: The whole internet is like this now, and it's only just getting started. Makes me sick tbh, and I am still questioning if this is the kind of industry I want to work in.
MikeNotThePope: For those who remember Digg: they recently relaunched a new version and shut it down almost immediately. They were getting hammered with AI bots when it was realized that Digg apparently still has good SEO. They explain it right on the homepage.

https://digg.com/
OneMorePerson: Yeah just how it is even outside of the cloud. At some point nearly all companies eventually try to take advantage of inertia and vendor lock in, if you are willing to undertake the pain of switching it's almost always a savings.
brianwawok: You forgot that this entire forum is a VC/incubator ad. It's ads all the way down.
richwater: Your thesis is that everyone who uses AWS is being duped...?
rolymath: Probably most are overpaying.

Cloud used to be marketed for scalability. "Netflix can scale up when people are watching, and scale down at night." Then the blogosphere and astroturfing got everyone else on board. How can $5 on Amazon get you less than what you got from almost any VPS (VDS) provider 10 years ago?
mariopt: Every time I see this kind of article, no one really bothers about db/server redundancy, load balancers, etc. Are we OK with just 1 big server that may fail and bring several services down?

You saved a lot of money, but you'll spend a lot of time on maintenance and future headaches.
daneel_w: They may be making this decision based on a long history of, in fact, never really having run into "a lot of time in maintenance and future headaches".
OneMorePerson: I'm not a legal expert/lawyer, but I do think a lot of this is not the company just randomly wanting to do it, but lawyer-driven development. No company wants to introduce more friction for no reason, unless somehow there's precedent or risk involved in not doing it. Curious to know what legal precedents or laws have changed recently.

The only possible non-legally-driven reason I can think of would be if they think the tradeoff of extra friction (and lost customers) is more than offset by fraud-protection efforts. This seems unlikely, cause I don't see how that math could have changed in the last few years.
cyanydeez: Now imagine you can do that with a local model. You're basically breaking lockin on _Every_ end. Simply beautiful. A digital guillotine for the digital elite!
grey-area: It depends on the service and how critical that website is.

Sometimes it's completely acceptable that a server will run for 10 years with say 1 week or 1 month of downtime spread over those 10 years, yes. That's the sort of uptime you can see with single servers that are rarely changed and over-provisioned, as many on Hetzner are. Some examples:

Small businesses where the website is not core to operations and is more of a shop-front or brochure for their business.

Hobby websites don't really matter if they go down for short periods of time occasionally.

Many forums and blogs just aren't very important, and downtime is no big deal.

There are a lot of these websites, and they are at the lower end of the market for obvious reasons, but probably the majority of websites in fact, the long tail of low-traffic websites.

Not everything has to be high availability, and if you do want that, these providers usually provide load balancers etc. too. I think people forget here sometimes that there is a huge range in hosting, from Squarespace to cheap shared hosting to more expensive self-hosted and provisioned clouds like AWS.