Discussion
Healthchecks.io Now Uses Self-hosted Object Storage
_joel: I'm sure it's a lot better now, but every time I see btrfs I get PTSD.
poly2it: Care to elaborate?
metadat: Years of serious corruption bugs.
tobilg: I don't get it. If it's running on the same ("local") machine, why does it even need the S3 API? It could just be plain IO on the local drive(s).
zipy124: Separate machine, I think, given the quoted point at the end:

> The costs have increased: renting an additional dedicated server costs more than storing ~100GB at a managed object storage service. But the improved performance and reliability are worth it.
uroni: I'd worry about file create, write, then fsync performance with btrfs, but not about reliability or data loss. But a quick grep across versitygw tells me they don't use Sync()/fsync, so that's not a problem... Any data loss occurring from that is obviously not btrfs's fault.
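For context, the create/write/fsync pattern the comment refers to looks roughly like this sketch (Python stdlib, Linux semantics; this is an illustration of the general durability pattern, not versitygw's actual code, and the helper name and temp-file suffix are assumptions):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write-then-fsync pattern: without the fsync calls, a crash
    can lose the data or the new directory entry on any filesystem,
    btrfs or otherwise."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # flush file contents to stable storage
    os.rename(tmp, path)          # atomically replace the target
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)          # persist the directory entry itself
    finally:
        os.close(dir_fd)
```

Skipping the `os.fsync` calls is faster (and avoids the btrfs fsync-performance worry), at the cost of a window where acknowledged writes can be lost on power failure.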
VHRanger: The S3 API doesn't work like normal filesystem APIs. Part of it is that it follows the object storage model, and part of it is just to lock people into AWS once they start working with it.
tobilg: I'm 100% aware of how S3 works. I was questioning why the S3 API is needed when the service is using local storage.
jen20: > part of it is just to lock people into AWS once they start working with it.

This is some next-level conspiracy theory stuff. What exactly would the alternative have been in 2006? S3 is one of the most commonly implemented object storage APIs around, so if the goal is lock-in, they're really bad at it.
esafak: So you don't need to refactor your code?
ryanjshaw: And when/if you decide to head back to a 3rd party it requires no refactoring again.
zdw: Sometimes API compatibility is an important detail. I've worked at a few places where single-node K8s "clusters" were frequently used just because they wanted the same API everywhere.
lsb: Self-hosted object storage looks neat! For this project, where you have 120GB of customer data and thirty requests a second for ~8k objects (0.25MB/s of object reads), you'd seem to be able to 100x the throughput by vertically scaling on one machine with a file system and an SSD, and never think about object storage. Would love to see why the complexity is needed.
daveguy: > What exactly would the alternative have been in 2006?

Well, WebDAV (Web Distributed Authoring and Versioning) had been around for 8 years when AWS decided they needed a custom API. And what service provider wasn't trying to lock you into a service by providing a custom API (especially pre-GPT) when one existed already? Assuming they made the choice for a business benefit doesn't require anything close to a conspiracy theory.
dundercoder: Gluster was that for me
_joel: Ah, another one! Yep, also same, before ceph days at least (although I've had my own, albeit self-inflicted, nightmare there too).
sigio: Yup, still get nightmares about glusterfs.... still have one customer running on it.
dundercoder: I heard it got better, but we ran into the BOTF (billions of tiny files) issue around 2016. (For a genealogy startup this was a serious issue)
orev: If the app was written using the S3 API, it would be much faster/cheaper to migrate to a local system that provides the same API. Switching to local IO would (probably) mean rewriting a lot of code.
smjburton: > In March 2026, I migrated to self-hosted object storage powered by Versity S3 Gateway.

Thanks for sharing this, I wasn't even aware of Versity S3 from my searches and discussions here. I recently migrated my projects from MinIO to Garage, but this seems like another viable option to consider.
PunchyHamster: WebDAV is ass tho. I don't remember a single positive experience with anything using it. And you'd still need a redundant backend serving it as an API.
cuu508: (Author here) There are multiple web servers for redundancy (3 currently), and each needs access to all objects.
PunchyHamster: With an average object size of 8.5kB I'd honestly consider storing them as blobs in a cloud DB, with maybe a small per-server cache in front.
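The blobs-in-a-database idea could look like this sketch, with SQLite standing in for the cloud DB and a dict standing in for the per-server cache (table name, key scheme, and cache policy are all illustrative assumptions, not from the post):

```python
import sqlite3

# SQLite stands in for the cloud DB; a plain dict stands in for the
# small per-server cache suggested in the comment.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE objects (key TEXT PRIMARY KEY, body BLOB)")
cache: dict[str, bytes] = {}

def put(key: str, body: bytes) -> None:
    db.execute("INSERT OR REPLACE INTO objects VALUES (?, ?)", (key, body))
    db.commit()
    cache[key] = body

def get(key: str) -> bytes:
    if key not in cache:  # cache miss: fall through to the DB
        row = db.execute(
            "SELECT body FROM objects WHERE key = ?", (key,)
        ).fetchone()
        cache[key] = row[0]
    return cache[key]

put("ping/abc123", b"x" * 8500)  # ~8.5 kB, the stated average object size
```

At these object sizes a single BLOB column avoids the per-object overhead of a filesystem or object store, though the redundancy requirement would then shift to the database layer.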
rconti: Or a simple SAN
ethan_smith: The app was already built against the S3 API when it used cloud storage. Keeping that interface means the code doesn't change - you just point it at a local S3-compatible gateway instead of AWS/DO. Makes it trivial to switch back or move providers if needed.
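The "only the endpoint changes" point can be made concrete with a toy sketch. The class below is a stand-in for a real S3 client (in-memory instead of HTTP; all names, the bucket, and the gateway port are assumptions), showing that the application code is identical whether it talks to a managed service or a self-hosted gateway:

```python
class ObjectStore:
    """Toy stand-in for an S3 client: put/get by bucket + key."""
    def __init__(self, endpoint_url: str):
        self.endpoint_url = endpoint_url  # the only provider-specific bit
        self._data: dict[tuple[str, str], bytes] = {}  # in lieu of HTTP calls

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._data[(bucket, key)] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._data[(bucket, key)]

# Application code stays the same for either "provider":
def save_ping(store: ObjectStore, check_id: str, body: bytes) -> None:
    store.put_object("pings", f"{check_id}/latest", body)

managed = ObjectStore("https://s3.amazonaws.com")         # managed S3
selfhosted = ObjectStore("http://127.0.0.1:7070")         # local gateway (assumed port)
for store in (managed, selfhosted):
    save_ping(store, "abc123", b"OK")
```

In a real S3 SDK the swap is similarly a single configuration change (the endpoint URL plus credentials), which is why keeping the S3 interface makes moving between providers cheap.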