Discussion
sllm
peter_d_sherman: What a brilliant idea! Split an "it needs to run in a datacenter because its hardware requirements are so large" AI/LLM across multiple people who each want shared access to that particular model.

Sort of like the real-estate equivalent of subletting, or splitting a larger space into smaller spaces and subletting each one... Or like the web-host equivalent of splitting a single server into multiple virtual machines for shared hosting, or what-have-you...

I could definitely see marketplaces similar to this popping up in the future! Anyway, it's a brilliant idea!
spuz: It seems crazy to me that the "Join" button does not have a price on it, and yet clicking it simply forwards you to a Stripe page, which again has no price information on it. How am I supposed to know how much I'm about to be charged?
esafak: Like vast.ai and TensorDock, and presumably others.
freedomben: This is an excellent idea, but I worry about fairness during resource contention. I don't run queries often, but when I do they're often big and long-running. I wouldn't want to eat up the whole cluster, but I would also want the cluster available when I need it. How do you address a case like this?
spuz: Is this not a more restricted version of OpenRouter? With OpenRouter you pay for credits that can be used to run any commercial or open-source model and you only pay for what you use.
jrandolf: OpenRouter is a little different. We are trying to experiment with maximizing a single GPU cluster.
vova_hn2: 1. Is the given tok/s estimate the total node throughput, or is it what you can realistically expect to get? Or is it the worst-case throughput if everyone starts using it simultaneously?

2. What if I try to hog all the resources of a node by running some large data-processing job and making multiple queries in parallel? What if I try to resell the access by charging per token?

Edit: sorry if this comment sounds overly critical. I think that pooling money with other developers to collectively rent a server for LLM inference is a really cool idea. I also thought about it, but I haven't found a satisfactory answer to my question number 2, so I decided it was infeasible in practice.
jrandolf: 1. It's an average. 2. We have a sophisticated rate limiter.
singpolyma3: 25 t/s is barely usable. Maybe for a background runner
kaoD: How is the time sharing handled? I assume that if I submit a unit of work it will load into VRAM and then run (sharing time? how many work units can run in parallel?).

How large is a full context window in MiB, and how long does it take to load the buffer? I.e., in the worst case, how many seconds should I expect to wait until I get my first token?
ninjha: > how many work units can run in parallel

Not the original author, but batching is one very important trick for making inference efficient: you can reasonably run tens to low hundreds of requests in parallel (depending on model size and GPU size) with very little performance overhead.
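To make the batching point concrete, here is a toy PyTorch sketch (illustrative sizes, not anyone's production numbers, and not vLLM internals). It shows why a batch of 64 "requests" costs far less than 64x a single request: reading the weight matrix from memory dominates either way.

```python
import time
import torch

# Toy demonstration of why batching is nearly free on a GPU: one matmul
# over 64 stacked inputs costs far less than 64 separate matmuls.
device = "cuda" if torch.cuda.is_available() else "cpu"
W = torch.randn(4096, 4096, device=device)

def avg_ms(x: torch.Tensor, iters: int = 50) -> float:
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        _ = x @ W
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters * 1e3

print(f"batch=1:  {avg_ms(torch.randn(1, 4096, device=device)):.3f} ms")
print(f"batch=64: {avg_ms(torch.randn(64, 4096, device=device)):.3f} ms")
```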
lelanthran: > 25 t/s is barely usable. Maybe for a background runner

That's over 1,000 words per minute if you were typing it out. If 1,000 words per minute is too slow for your use case, then perhaps $5/m is just not for you.

I kinda like the idea of paying $5/m for unlimited usage at the specified speed. It beats a 10x higher speed that hits daily restrictions in about 2 hours and weekly restrictions in 3 days.
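A quick sanity check on that figure, using the common rule of thumb that one token is roughly 0.75 English words (an assumption; the real ratio varies by tokenizer and text):

```python
# 25 tok/s converted to words per minute under the 0.75 words/token
# rule of thumb (assumption, not a measured value for any tokenizer).
tok_per_sec = 25
words_per_token = 0.75
print(f"{tok_per_sec * words_per_token * 60:.0f} words/min")  # ~1125
```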
Lalabadie: This is the most "Prompted ourselves a Shadcn UI" page I've seen in a while lol

I dig the idea! I'm curious where the costs will land with actual use.
jrandolf: Thanks lol. I actually like Shadcn's style. It's sad that people view it as AI now.
mogili1: Can you show a cost comparison if we went with per-token pricing?
QuantumNomad_: > How does billing work?

> When you join a cohort, your card is saved but not charged until the cohort fills. Stripe holds your card information — we never store it. Once the cohort fills, you are charged and receive an API key for the duration of the cohort.

Have any cohorts filled yet?

I’m interested in joining one, but only if it’s reasonable to assume that the cohort will be full within the next 7 days or so.

I’d be pretty annoyed if I join a cohort and then it takes like 3 months before the cohort has filled and I can begin to use it. By then I will probably have forgotten all about it and not have time to make use of the API key I am paying for.
poly2it: Does it take user time zones into account?
jrandolf: Yes
p_m_c: Do you own the GPUs or are you multiplexing on a 3rd party GPU cloud?
singpolyma3: Sure, if it was just a matter of typing. But in practice it means sitting for minutes, staring at a "thinking" indicator with nothing happening, until something finally appears.

I mean, my local 122B is only 20 t/s, so it can be used for background stuff. But not for anything interactive, IME.
RIMR: I read the FAQ, and I can't imagine this is going to work the way you want it to. It fundamentally doesn't make sense as a business model.

I can sign up for a cohort today, but there's not even a hint of how long it will take the cohort to fill up. The most subscribed cohort is only at 42% (and dropping), so maybe days to weeks? That's a long time to wait if you have a use case to satisfy. And then the cohort expires, and I have to sign up for another one and play the waiting game again? Nobody wants that level of unreliability.

Also, don't say "15-25 tok/s". That is a min-max figure, but your FAQ says that this is actually a maximum. It makes no sense to state a maximum as a range, and you state no minimum, so I can only assume that it is 0 tok/s. If all users in the cohort use it simultaneously, the best they're getting is something like 1.5 tok/s (probably less), which is abysmal.

You mention "optimization", but I have no idea what that means. It certainly doesn't mean imposing token limits, because your FAQ says that won't happen. If more than 25 users are using the cohort simultaneously, it is a physical impossibility to improve performance to the advertised levels without sacrificing something else: switching to a smaller model, which would essentially be fraud, or adding more GPUs, which will bankrupt you at these margins. With 465 users per cohort, a large chunk of whom will be using tools like OpenClaw, nobody will ever see the performance you are offering.

The issue here is that you are trying to offer affordable AI GPU nodes without operating at a loss. The entire AI industry is operating at a loss right now because of how expensive this all is. This strategy literally won't work right now unless you start courting VCs to invest tens to hundreds of millions of dollars so you can get this off the ground by operating at a loss until you hopefully turn a profit at some point in the future, but by then developers will probably be able to run these models at home without your help.
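A hedged back-of-envelope version of that contention argument. The aggregate ceiling below is an illustrative assumption, not a published spec, but it shows how per-user speed collapses as concurrency rises:

```python
# Assumptions (illustrative, not from sllm's listing): the advertised
# 25 tok/s is per-stream at low load, and the node has a fixed aggregate
# decode ceiling that is divided among concurrent streams.
PER_STREAM_LOW_LOAD = 25    # tok/s, from the listing
AGGREGATE_CEILING = 700     # tok/s, assumed GPU-wide limit

for concurrent in (1, 10, 28, 100, 465):
    per_user = min(PER_STREAM_LOW_LOAD, AGGREGATE_CEILING / concurrent)
    print(f"{concurrent:>4} concurrent users -> {per_user:5.1f} tok/s each")
# At 465 simultaneous users this yields ~1.5 tok/s per user, roughly
# the figure the comment above arrives at.
```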
tensor-fusion: Interesting direction. One adjacent pattern we've been working on is a bit less about partitioning a shared node for more tokens, and more about letting developers keep a local workflow while attaching to an existing remote GPU via a share link / CLI / VS Code path. In labs and small teams we've found the pain is often not just allocation, but getting access into the everyday workflow without moving code + environment into a full remote VM flow. Curious whether your users mostly want higher GPU utilization, or whether they also want workflow portability from laptops and homelabs. I'm involved with GPUGo / TensorFusion, so that's the lens I'm looking through.
jrandolf: We implement rate limiting and queuing to ensure fairness, but if a massive number of people submit huge, long queries, then there will be waits. The question is whether people will actually do this; more often than not, users are idle.
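For readers unfamiliar with how this kind of fairness is typically enforced, here is a minimal per-user token-bucket sketch. This is an illustration of the general technique, not sllm's actual limiter:

```python
import time

# Each user gets a refill rate and a burst allowance; requests that
# exceed the bucket are queued rather than rejected outright.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec
        self.capacity = burst
        self.level = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.level = min(self.capacity,
                         self.level + (now - self.last) * self.rate)
        self.last = now
        if self.level >= cost:
            self.level -= cost
            return True
        return False  # caller should enqueue and retry after refill

bucket = TokenBucket(rate_per_sec=5, burst=20)
print(bucket.allow(cost=10))  # True: within the burst allowance
print(bucket.allow(cost=15))  # False: bucket drained, request must queue
```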
freedomben: Is there any way to buy into a pool of people with similar usage patterns? Maybe I'm overthinking it, but just wondering
IanCal: Can you explain the benefits over something like openrouter?
jrandolf: vLLM handles GPU scheduling, not sllm. The model weights stay resident in VRAM permanently, so there's no loading/unloading per request. vLLM uses continuous batching: incoming requests are dynamically added to the running batch at every decode step, and the GPU is always working on multiple requests simultaneously. There is no "load to VRAM and run" per request; it's more like joining an already-running batch.

TTFT is under 2 seconds on average. Worst case is 10-30s.
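For anyone curious what that looks like in code, a minimal vLLM sketch (the model name is a placeholder, not necessarily what sllm deploys). All 32 prompts are handed to one engine, and continuous batching schedules them together rather than one at a time:

```python
from vllm import LLM, SamplingParams

# Placeholder checkpoint; any vLLM-supported model works the same way.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(max_tokens=64)

# The engine adds and retires sequences from the running batch at every
# decode step, so a new request never waits for a whole batch to finish.
prompts = [f"Summarize item {i} in one sentence." for i in range(32)]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text[:60])
```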
kaoD: > The model weights stay resident in VRAM permanently so there's no loading/unloading per request.

Yes, I was thinking about context buffers, which I assume are not small in large models. Those have to be loaded into VRAM, right?

If I keep sending large context buffers, will that hog the batches?
kgeist: > If I keep sending large context buffers, will that hog the batches?

Technically it should: if your large context (= KV cache) fills the entirety of VRAM, I don't see how vLLM would be able to process other requests without more VRAM.
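For a sense of scale, a rough KV-cache estimate under assumed Llama-70B-like dimensions with grouped-query attention. All numbers here are illustrative, since the actual model config isn't stated in this thread:

```python
# Per-request KV-cache size under assumed 70B-class dimensions.
layers = 80
kv_heads = 8          # grouped-query attention: fewer KV heads than Q heads
head_dim = 128
bytes_per_elem = 2    # fp16/bf16
seq_len = 128_000     # one user's very long context

# 2x for keys and values, per layer, per KV head, per position.
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem
print(f"{kv_bytes / 2**30:.1f} GiB")  # ~39 GiB for a single request
```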
jrandolf: 24/7 LLM for $10/month.
jrandolf: No cohorts have been filled yet. We're still early. We are seeing reservations pick up quickly, but I'd be able to give you a more concrete estimate of fill velocity after about a week.That said, we're planning to add a 7-day window: if a cohort doesn't fill within 7 days of your reservation, it cancels automatically and your card is released. We don't want anyone's payment method sitting in limbo indefinitely.
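The "save the card now, charge only when the cohort fills" flow described above maps onto Stripe's documented SetupIntent/PaymentIntent pattern. A minimal sketch with placeholder IDs and an assumed $10 cohort price (this is the standard Stripe pattern, not sllm's actual code):

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder key

# At reservation time: save the card without charging it. The SetupIntent
# is confirmed client-side with Stripe.js; "cus_123" is a hypothetical ID.
stripe.SetupIntent.create(
    customer="cus_123",
    payment_method_types=["card"],
)

# Later, when the cohort fills. If it doesn't fill within the 7-day
# window, simply never create this PaymentIntent and the card is released.
stripe.PaymentIntent.create(
    amount=1000,                 # $10.00 cohort price, assumed
    currency="usd",
    customer="cus_123",
    payment_method="pm_123",     # card saved by the SetupIntent
    off_session=True,            # customer is not present at charge time
    confirm=True,
)
```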