Discussion
fast-servers
lmz: Seems similar to the SEDA architecture https://en.wikipedia.org/wiki/Staged_event-driven_architectu...
ratrocket: discussed in 2016: https://news.ycombinator.com/item?id=10872209 (53 comments)
kogus: Slightly tangential, but why is the first diagram duplicated at .1 opacity?
bee_rider: > One thread per core, pinned (affinity) to separate CPUs, each with their own epoll/kqueue fd

> Each major state transition (accept, reader) is handled by a separate thread, and transitioning one client from one state to another involves passing the file descriptor to the epoll/kqueue fd of the other thread.

So this seems like a little pipeline that all of the requests go through, right? For somebody who doesn’t do server stuff, is there a general idea of how many stages a typical server might be able to implement? And does it create a load-balancing problem? I’d expect some stages to be quite cheap…
tecleandor: That plus the ellipsis makes me think that it means the additional threads that would be opened for subsequent connections...
fao_: This is more or less, in some ways, what Erlang does, and part of why Erlang is so easy to scale.
luizfelberti: A bit dated in the sense that for Linux you'd probably use io_uring nowadays, but otherwise it's a timeless design.

Still, I'm conflicted on whether separating stages per thread (accept on one thread and the client loop on another) is a good idea. It sounds like the gains would be minimal or non-existent even in ideal circumstances, and on some workloads where there aren't a lot of clients or much connection churn it would waste an entire core on handling a low-volume event.

I'm open to contrarian opinions on this though; maybe I'm not seeing something...
raggi: It’s not a good idea, and that’s where I’d really start with the dated commentary here, rather than focusing on the polling mechanism. It depends on the application, but if the buffers are large (>=64kb), as in a common TCP workload, then uring won’t necessarily help that much. You’ll gain a lot of scalability regardless of polling mechanism by making sure you can utilize RSS and XPS optimizations.
jfindley: io_uring is in a curious place. Yes, it does offer significant performance advantages, but it continues to be such a consistent source of bugs - many with serious security implications - that it's questionable whether it's really worth using.

I do agree that it's a bit dated and today you'd do other things (notably SO_REUSEPORT); I just feel that io_uring is a questionable example.
ciconia: > continues to be such a consistent source of bugs - many with serious security implications... just feel that io_uring is a questionable example.

Are you saying this as someone with experience, or is it just a feeling? Please give examples of recent bugs in io_uring that have security implications.
dspillett: Not OP, and I'm no expert in the area at all, but I _do_ have a feeling that there have been quite a few such issues posted here and elsewhere that I've read in the last year.

https://www.cve.org/CVERecord/SearchResults?query=io_uring seems to back that up. Only one relevant CVE listed there for 2026 so far, versus more than two per month on average in 2025. Caveat: I've not looked into the severity and ease of exploit for any of the issues listed.
epicprogrammer: It’s an interesting throwback to SEDA, but physically passing file descriptors between different cores as a connection changes state is usually a performance killer on modern hardware. While it sounds elegant on a whiteboard to have a dedicated 'accept' core and a 'read' core, you end up trading a slightly simpler state machine for massive L1/L2 cache thrashing. Every time you hand off that connection, you immediately invalidate the buffers and TCP state you just built up. There’s a reason the industry largely settled on shared-nothing architectures like NGINX's: having a single pinned thread handle the entire lifecycle of a request keeps all that data strictly local to the CPU cache. When you're trying to scale, respecting data locality almost always beats pipeline cleanliness.
toast0: You could presumably have an acceptor thread per core, which passes the fds to the core-aligned next thread, etc. That would get you the code-simplicity benefits the article suggests, while keeping the socket bound to a single core, which is definitely needed.

Depending on whether you actually need to share anything, you could do process per core, thread per loop, and you'd have no core-to-core communication from the usual workings of the process (I/O may cross, though).
password4321: Always interesting to review the latest techempower web framework benchmarks, though it's been a year:https://www.techempower.com/benchmarks/#section=data-r23&tes...
wild_egg: It's been a while but why is uring not helpful for larger buffers? I'd think the zero-copy I/O capabilities would make it more helpful for larger payloads, not less
Veserv: uring supports zero-copy, but it is not a copy-reduction mechanism; it is a syscall-reduction mechanism. Large buffers mean fewer syscalls to start with, so less benefit.
pocksuppet: Did you read the CVEs? Half of these aren't vulnerabilities. One allows the root user to create a kernel thread and then block its shutdown for several minutes. One is that if you do something that's obviously stupid, you don't get an event notification for it.

Remember the Linux kernel's policy of assigning a CVE to every single bug, in protest of the stupid way CVEs were being assigned before that.
dspillett: > Did you read the CVEs?

You obviously didn't read to the end of my little post, yet feel righteous enough to throw that out…

> One allows the root user to create a kernel thread and then block its shutdown for several minutes.

Which, as part of a compromise chain, could cause a DoS issue that might be able to bypass common protections like cgroup-imposed limits.
kev009: Well, kernels have grown some support for steering accept() to a worker thread directly. For instance, SO_REUSEPORT (Linux) / SO_REUSEPORT_LB (FreeBSD).
scottlamb: I don't think the author intended "code simplicity" as an end unto itself but as a way to reduce cache pressure. He popped into the 2016 discussion [1] to say:

> Another benefit of this design overlooked is that individual cores may not ever need to read memory -- the entire task can run in L1 or L2. If a single worker becomes too complicated this benefit is lost, and memory is much much slower than cache.

I think this is wrong, or at least overstated: if you're passing off fds and their associated (kernel- and/or user-side) buffers between cores, you can't run entirely in L1 or L2. And in general, I'd expect data to be responsible for much more cache pressure than code, so I'm skeptical of localizing the code at the expense of the data.

But anyway, if the goal is to organize which cores are doing the work, splitting a single core's work from a single thread (pinned to it) into several threads (still pinned to it) doesn't help. It just introduces more context switching.

[1] https://news.ycombinator.com/item?id=10874616
jauntywundrkind: In Node.js I've seen time and time again some slow task that happens only every now and then, but which causes significant latency spikes. Having the one single event loop, with tasks big and small from all stages of the processing pipeline mixed in, feels so crude. I really want a more sophisticated architecture where different stages of the execution can be managed independently.

I also want to mention that very very very few programs do, but io_uring does let you run multiple io_urings!! Your program can pick which completion queue it wants to read from, and can put high-priority tasks in a specific iou.
raggi: Exactly this. The kernel-alloc’d buffers can help, but if that were a primary concern you’re in driver territory. For anything still in the userspace optimization domain, the portion of time spent in syscalls for large buffers in a buffered flow is heavily amortized and not overly relevant.
toast0: > But anyway, if the goal is to organize which cores are doing the work, splitting a single core's work from a single thread (pinned to it) to several threads (still pinned to it) doesn't help. It just introduces more context switching.

(Mostly agreeing with you, I think.) Looking at the overall system and saying (handwave numbers) 25% of system time is spent on accept and 75% on request handling, so let's set 25% of cores to accept and 75% to handle requests, is unfortunately the wrong way to split the work too. Each core would have a small userland loop, but communication between processes is expensive. And you have (more than necessary) kernel-side communication between processors too, because the TCP state will be touched by the processor handling the NIC queue it arrives on, as well as the processor handling the listen queue in userland and then the processor handling the request in userland. Setting up your system to have high interprocessor communication limits the number of cores you can effectively use.