Discussion
ai_slop_hater: The takeaway is that you can just buy a single beefy server instead of using Kubernetes or whatever.
vyrotek: "Always has been"
Thaxll: And your beefy server goes down, what do you do? Where do you think those object storage live exactly?

Kubernetes is not just for scaling, it's a way to standardize all ops.
senko: > And your beefy server goes down what do you do?

Boot it up again. You'll still have higher availability than AWS, GitHub, OpenAI, Anthropic, and many others.

> Where do you think those object storage live exactly?

On a RAID5 array with hot-swappable disks, of course.
itsthecourier: I have tried to use TigerBeetle in production; haven't been successful yet.

Nice stuff: multi-master replication, super small user API. Doubts about how to do streaming backup.

After studying the API and doing some spike architectures, I came to this conclusion (I may be wrong): TigerBeetle is awesome for keeping the account balance. That's it. Because you pretty much get the transactions affecting an account, and IIRC there was not a lot you could do about how to query them or use them otherwise.

Also, I was thinking it would be nice to have something like an account grouping other accounts, to answer something like: how much money do our user accounts have in this microsecond?

I think that was more or less it. They have some special u128 fields to store ids linking each transfer to the transaction it represents in your actual system, and IIRC they handle multi-currency in different books.

My conclusion was: I think I don't get it yet. I think I'm missing something. I had to write a Ruby client for it and build a UI to play with the API, do some transactions, and see how it behaved. Yet that was my conclusion.

Would be great to have an official UI client.
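For readers unfamiliar with the model being described: TigerBeetle stores immutable transfers between accounts, and a balance is just the running sum of posted debits and credits. Here is a minimal Python sketch of that data model as the comment describes it — this is not the real client API, and the field names are approximations:

```python
from dataclasses import dataclass

@dataclass
class Account:
    id: int            # u128 in TigerBeetle
    ledger: int        # accounts only transact within one ledger (e.g. one currency)
    debits_posted: int = 0
    credits_posted: int = 0

@dataclass
class Transfer:
    id: int
    debit_account_id: int
    credit_account_id: int
    amount: int
    ledger: int
    user_data: int = 0  # u128 slot to link back to an id in your own system

class Book:
    def __init__(self) -> None:
        self.accounts: dict[int, Account] = {}
        self.transfers: list[Transfer] = []

    def create_transfer(self, t: Transfer) -> None:
        debit = self.accounts[t.debit_account_id]
        credit = self.accounts[t.credit_account_id]
        assert debit.ledger == credit.ledger == t.ledger, "no cross-ledger transfers"
        debit.debits_posted += t.amount
        credit.credits_posted += t.amount
        self.transfers.append(t)

    def balance(self, account_id: int) -> int:
        a = self.accounts[account_id]
        return a.credits_posted - a.debits_posted

def transfers_for(book: Book, account_id: int) -> list[Transfer]:
    # "All transfers affecting an account" is roughly the query surface
    # the parent comment says it was limited to.
    return [t for t in book.transfers
            if account_id in (t.debit_account_id, t.credit_account_id)]
```

Seen this way, the "account grouping" wish amounts to a roll-up query over many account balances, which this model does not give you for free.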
jzelinskie: I think I need a deeper dive into the "diagonal scaling" presented. From my understanding, this is actually no different from the "industry decoupling" he disparages earlier in the presentation. There are even off-the-shelf libraries for LSMs backed by object storage, like SlateDB.
adityaathalye: I feel the Expression Problem neatly frames the "diagonal scaling" proposition: what system design choices will allow the architecture to scale vertically in what fashion, while also being able to scale what horizontally, without losing strict serialisability?

If we add a "vertical" capability, it cannot come at the cost of any existing "horizontal" capability, nor should doing so forfend any future "horizontal" capability. And vice versa: adding horizontal capability should not mess with vertical ones. The point at which one will break the other is the theoretical design limit of the system.
convolvatron: In general these aren't in conflict. In particular, once I have a system which can distribute work among faulty nodes and maintain serializability, exploiting parallelism _within_ a fault domain just falls out.
nickmonad: On the streaming side, are you looking for Change Data Capture?

https://docs.tigerbeetle.com/operating/cdc/
itsthecourier: Sorry, I meant something like an external continuous backup: just in case the system gets compromised, a constant off-site, non-operational backup.
hiyer: Amazing. Easily the most learning I've had in 18 minutes (I watched at 1.2x speed) in my life.
juancn: Nice. The only weird thing was the assumptions about OLAP (and I had to speed it up to ~1.4x).

Like that it uses strings (OLAP works way better over integral data; it sucks at strings), or that it's easy to scale. It is easy-ish under fixed queries (classic MOLAP, for example) but not over arbitrary queries with frequent updates; then it degenerates into a problem much worse than OLTP.
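The point about strings is usually handled with dictionary encoding: columnar engines map each distinct string to a small integer once, then filter and aggregate over the integer column. A rough Python sketch of the idea — illustrative only, not any particular engine's implementation:

```python
def dictionary_encode(values: list[str]) -> tuple[dict[str, int], list[int]]:
    """Map each distinct string to an integer code; store only the codes."""
    codes: dict[str, int] = {}
    encoded: list[int] = []
    for v in values:
        if v not in codes:
            codes[v] = len(codes)  # assign codes in order of first appearance
        encoded.append(codes[v])
    return codes, encoded

def count_where(encoded: list[int], codes: dict[str, int], value: str) -> int:
    """Predicate evaluation becomes a cheap integer comparison per row."""
    code = codes.get(value)  # None if the value never occurs
    return sum(1 for c in encoded if c == code)
```

This is why "uses strings" is less damning than it sounds for fixed-schema OLAP, but it does nothing to rescue arbitrary queries over frequently updated data, which is the harder half of the objection.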
adityaathalye: To a first approximation, yes. But, why? And for up to how many hundred terabytes of data can you get away with the single beefy server? Provided you make what design choices?

Which leads to the real takeaway, which is "Tiger Style" (https://tigerstyle.dev/), which I am partial to, along with Rich Hickey's "Hammock Driven Development" (https://www.youtube.com/watch?v=f84n5oFoZBc).

"Tiger on Hammock" will absolutely smoke the competition.

(edit: add links)
ai_slop_hater: > But, why?

To keep things simple. My current company is running multiple instances of back-end services for absolutely no fucking reason, and I had to fix numerous race-condition bugs for them. I had an interview with a startup where, after I asked why they were using distributed DynamoDB locks in a monolith app with only a single instance running, the person said "it works for us" and got defensive. Later they told me I wasn't experienced enough. I am so frustrated that there appears to be zero basic engineering rigor anywhere I can find nowadays.

> And for up to how many hundred terabytes of data can you get away with the single beefy server?

Do you even need to store many hundred terabytes of data? I have never encountered a scenario in my career (admittedly not very long so far) where there was a need to store even one terabyte of data. But in the case of TigerBeetle, from skimming through the video, it appears they offload the main bulk of data to "remote storage."
sophacles: If your opinion was worth listening to, you'd be working somewhere that had multiple back-end instances with a very clear reason, designed with rigor.You'd still be fixing race conditions though.
ai_slop_hater: If you think that my opinion is not worth listening to, i.e. that I am wrong, would you mind elaborating why? There is a real opportunity to sway my opinion here, because I am not sure. I could just be crazy. I don't know anymore. But, generally, I don't think that you necessarily have to have multiple back-end instances, or that if you have multiple back-end instances you will necessarily have race conditions. Am I wrong in this?
sophacles: Well, your opinion doesn't consider many real-world factors. (It's also worth noting that you toned down your response to me with hedging language... "generally", "necessarily".)

No matter how good a particular server is, it isn't immune to power outages, fiber cuts, fires, basic hardware failure, or even just downtime for basic updates (OS, application deployment, etc).

As soon as there is any sort of cost to downtime (direct or indirect monetary cost, or reputational cost), basic engineering rigor requires that you use redundancy to handle such failures, and that means spending money (in the form of vendors or engineering) on it. If money and time are being spent to create an application, and there is a reasonable assumption that downtime for that app will have costs, one way to amortize the cost of retrofitting redundancy into an app is to start with it.

Having multiple instances of the backend also allows for other cost-saving measures:

* N instances of a smaller-sized server may be cheaper than 1 instance of a really good server.

* Multiple instances of the backend allow for update deployments while the app is live, indirectly driving other cost savings (no overtime or pager pay; happier employees... "they don't give us snacks but we never have to work late"; lower costs from downtime due to botched deployments).

* It's cheaper to hire engineers who follow this de facto standard pattern than to sit down and pave new ground with other tools, and using that pattern they will achieve the desired result in terms of reliability and uptime.

* It allows for scaling if your app's traffic is seasonal, meaning you don't need to spend as much on resources as you scale (note this is the first time scaling has been brought up, and it's as a minor point).

Does every app have these concerns? Of course not. Do a very large number of apps have these concerns, or have an expected-value calculation around them that says the smart money is on planning for them? Yes.

In the context of a discussion of hardware being so good that a single box can handle any load you're likely to throw at it, declaring multiple instances to be pointless is an analysis that hasn't considered any of the factors I brought up in terms of a real cost analysis — particularly if the complaint doesn't even address whether uptime is relevant to the app.

So, generally, is it "necessary" to have multiple instances of the backend? I don't know; "generally necessary" is a very narrow scope. Is it bad to dismiss multiple backends as probably not needed? Yes, and it's equally foolish to immediately require them. Proper engineering rigor requires considering not just the technical but also the financial realities of an application.

About race conditions... will multiple backends necessarily end up with them? No, in the same way that a lottery ticket won't necessarily be a loser. But multiple backends aren't required for race conditions: a single backend will presumably have concurrency within each instance too. And where there is concurrency, there almost certainly will be race conditions; I've rarely seen software that doesn't have them at some point or another. Maybe TigerBeetle doesn't and never will, but that is a unicorn team working on a very narrowly defined bit of software that is merely a component of other systems, under conditions that are extremely expensive to reproduce for most engineering projects. The general case is that you will write, deploy, and be frustrated by race conditions, since the cost-benefit analysis doesn't call for absolute perfection. I know I have; odds are you've read about the consequences of one on this site at some point.

The point of all of this is that engineering rigor goes beyond technical rigor. It includes understanding the tradeoffs in terms of budget, uptime, technical decision-making, speed of iteration, and so on.
ai_slop_hater: I did consider some factors, including the ones you mentioned, when drafting my original message. I just didn't include my considerations in my message. I prefer addressing factors on demand rather than attempting to immediately address every possible factor, especially for topics such as this one, where the number of possible factors is virtually unlimited.

My point is that, by default ("generally"), there is no need to run multiple instances. But I admit that sometimes it can be justified, such as if you (truly) need zero-downtime deployment or N+1 redundancy (which is questionable, because, as senko pointed out, services usually go down for reasons that aren't a power outage or a fire). My complaint wasn't really about this anyway; sorry for not being clear. I mentioned that my company runs multiple services "for absolutely no fucking reason," but I did not state their intention, which is not zero-downtime deployment or redundancy but rather "scalability," and I find this pointless because a single $5 VPS could easily handle many times the amount of traffic their IO-bound app receives. The end result is that they now must always think about the implications of running multiple instances, and failure to do that properly creates race conditions and other kinds of obscure bugs.

On the other hand, if you only run a single instance, reasoning about the system becomes much easier, and even though that does not eliminate the possibility of a race condition, it becomes much harder to create one. I also like the architectural benefits of a single-instance system, such as being able to keep transient state in memory worry-free.
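The bug class being argued about here is the classic lost update: two instances each read a value, compute, and write back, and one write clobbers the other. A deterministic Python sketch of the interleaving (simulated step by step rather than with real threads, so the outcome is reproducible):

```python
class Store:
    """Stand-in for a shared database row with non-atomic read/write."""
    def __init__(self, value: int = 0) -> None:
        self.value = value

def lost_update(store: Store) -> None:
    # Two app instances both handle "increment the counter" concurrently.
    # Each does read-modify-write; this interleaving loses one update.
    a = store.value        # instance A reads 0
    b = store.value        # instance B reads 0, before A writes
    store.value = a + 1    # A writes 1
    store.value = b + 1    # B overwrites with 1 -- A's increment is lost

def serialized_updates(store: Store) -> None:
    # A single instance (or a DB-side atomic increment, or a lock)
    # serializes the read-modify-write, so both increments land.
    store.value = store.value + 1
    store.value = store.value + 1
```

Note this is exactly why the distributed-locks-in-a-monolith anecdote cuts both ways: with a single instance the serialization can often come from the process itself, rather than from DynamoDB.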
adityaathalye: > To keep things simple.

Oh, I am so with this. The "why" I posed was in the context of TigerBeetle's design choices to solve for high-contention OLTP. Sorry, I framed my question loosely; too much implicit context.

Me personally, I'm learning from the lesson of TigerBeetle and others, and just using SQLite for my multi-ten-gigabyte ambitions :D
Nican: This looks like yet another basic key-value store.

Benchmarking is a complicated problem, but FoundationDB claims 300,000 reads per second on a single core. TigerBeetle claims 100k-500k TPS on... some kind of hardware?

https://apple.github.io/foundationdb/benchmarking.html
mping: The design with hot/cold storage makes it much more interesting than FDB for some use cases. FDB is an excellent DB with very strong operational guarantees; TigerBeetle seems to be specialized for financial data and to optimize the perf/cost ratio.

Both are great.
gfat: Wow the slides synced with the narration were smooth and ofc the visualization at the end. Stunning work.