Discussion
PF queues break the 4 Gbps barrier
bell-cot: "Values up to 999G are supported, more than enough for interfaces today and the future." - Article

"When we set the upper limit of PC-DOS at 640K, we thought nobody would ever need that much memory." - Bill Gates
WhyNotHugo: Honestly, I'm really curious about this number. 10 bits gives 1024, so why 999G specifically?
elevation: Looks like an arbitrary validation cap. By the time we're maxing out the 64-bit underlying representation we probably won't be using Ethernet any more.
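A quick back-of-the-envelope check of elevation's point: even at the 999G cap, a 64-bit bits-per-second value has enormous headroom (the numbers below are just the arithmetic, not anything from PF's source):

```python
# How far is a 999 Gbit/s validation cap from exhausting a 64-bit value?
cap_bps = 999 * 10**9      # 999G expressed in bits per second
u64_max = 2**64 - 1        # largest value a 64-bit unsigned counter can hold

headroom = u64_max // cap_bps
print(headroom)            # roughly 18 million times the current cap
```

So the 999G limit is a parser/validation choice, not a limit of the underlying representation.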
bell-cot: https://en.wikipedia.org/wiki/Ethernet#History (& following sections)

Calling something "Ethernet" amounts to a promise that:

- From far enough up the OSI sandwich*, you can pretend that it's a magically-faster version of old-fashioned Ethernet
- It sticks to broadly accepted standards, so you won't get bitten by cutting-edge or proprietary surprises

* https://en.wikipedia.org/wiki/OSI_model
ralferoo: In the days when even cheap consumer hardware ships with 2.5G ports, this number seems weirdly low. Does this mean that basically nobody is currently using OpenBSD in the datacentre, or anywhere that might be expected to handle 10G or higher per port, or is it just filtering that's the issue?

I'm not surprised that the issue exists, as even 10 years ago these speeds were uncommon outside the datacentre; I'm just surprised that nobody has felt a pressing enough need to fix it in the years since.
citrin_ru: AFAIK performance is not a priority for the OpenBSD project - security is (along with related qualities like code that is easy to understand and maintain). FreeBSD (at least when I followed it, several years ago) had better performance both for ipfw and for its own PF fork (which is not fully compatible with the OpenBSD one).
IcePic: One thing could also be that by the time you have 10GE uplinks, shaping is not as important. When we had 512kbit links, prioritizing VoIP was a thing, and for asymmetric links like 128/512kbit it was prudent to prioritize small packets (ssh) and TCP ACKs on the outgoing link, or downloads would suffer. But when you have 5/10/25GE, not being able to stick an ACK packet at the front of the queue is perhaps not the main issue.
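For context, the kind of slow-uplink shaping IcePic describes looks roughly like this in OpenBSD's current pf.conf queue syntax (interface name, queue names, and bandwidth figures below are made up for illustration):

```
# Hypothetical shaping for a 512Kbit uplink on em0
queue main on em0 bandwidth 512K max 512K
queue std  parent main bandwidth 384K default
queue ack  parent main bandwidth 128K min 64K

# With two queues in the assignment, PF places lowdelay/ACK
# packets in the second queue, keeping interactive traffic snappy
match out on em0 proto tcp to port ssh set queue (std, ack)
```

At multi-gigabit speeds this careful carving of a tiny uplink largely stops mattering, which is IcePic's point.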