Discussion
Node.js worker threads are problematic, but they work great for us
Jcampuzano2: I get it's a constraint of the language, but the ubiquity of bundlers and differing toolchains in the JS world has always made me regret trying to use worker primitives, whether web workers, worker threads, or otherwise. Not to mention that shipping them to users via a library is a nightmare, as mentioned in the article.

Almost none of these tools treat workers consistently (if they consider them at all), and all require you to work around them in strange ways.

It feels like there is a lot workers could help with in the web world, especially in complex UIs and in moving computation off the main thread, but they are just so clunky to use that almost nobody tries to work around it.

The ironic part is that if bundlers, transpilers, compilers, etc. weren't used at all, workers would probably see much more widespread use.
ptrwis: I'm currently writing simulations of trading algorithms for my own use. I'm using worker_threads + SharedArrayBuffer and running them in Bun. I also tried porting the code to C# and Go, but the execution time ended up being very similar to the Bun version. NodeJS was slower. Only C gave a clear, noticeable performance advantage — but since I haven't written C in a long time, the code became significantly harder to maintain.
baublet: Reading the article, I didn’t see this answered: why not scale to more nodes if your workload is CPU bound? Spin up a container with 1 CPU and a few GB of RAM and scale that as wide as you need.

E.g., a new thread certainly helps when the event loop is blocked, but so could FFI calls to another language for the CPU-bound work. I’d only reach for a new Node thread if these didn’t pan out, because there’s usually a LOT that goes into spinning up a new Node process in a container (isolating the data, making sure any bundlers and transpilers are working, making sure the worker doesn’t pull in all the app code, etc.).

Sidecar processes aren’t free, either. Now your processes are contending for the same pool of resources and can’t share anything, which IME means more likelihood of memory issues, especially if there isn’t anything limiting the workers your app can spawn.

Still, good article! Love seeing the ways people tackle CPU-bound workloads in an otherwise I/O-bound Node app.
n_e: > but so could FFI calls to another language for the CPU bound work

Worker threads can be more convenient than FFI, as you don't need to compile anything, you can reuse the main application's functions, etc.
vilequeef: It’s not weird that you can’t share state between totally different processes except by passing in args.

And you can make it thread-like if you prefer by creating a “load balancer” setup to begin with to keep them CPU bound: check require('os').cpus().length, spawn a process for each CPU, bind the data you need, and it can feel like multithreading from your perspective.

More here: https://github.com/bennyschmidt/simple-node-multiprocess
socketcluster: I love the simplicity of Node.js: each process or child process can have its own CPU core with essentially no context switching (assuming you have enough CPU cores).

Most other approaches just hide the context-switching costs and complicate monitoring, IMO.
chrisweekly: Related tangent: Platformatic's "Watt" server^1 takes a pretty interesting approach to Node, leveraging worker threads on all available cores for maximum efficiency.1. https://docs.platformatic.dev/docs/overview/architecture-ove...
groundzeros2015:

- you should be using multiple node processes

- you should be spawning tools to do heavy computation