Discussion
Kent C. Dodds
thadk: ffmpeg.wasm is really good and might manage all these steps before even uploading.
DanielHB: Not really related to the topic, but I recently set up a baby-cam with ffmpeg by just telling it to stream to the broadcast address on my home network, and now I can open the stream in VLC on any device in the household. A very heavy-handed solution, but super simple. A single one-liner. Just thought I'd share a weird trick I found.
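The one-liner itself wasn't shared, but a sketch of the kind of command that could do this (the capture device, broadcast address, and port are assumed values for your own network; ffmpeg's UDP output takes a `broadcast=1` option to allow sending to a broadcast address):

```shell
# Stream a V4L2 webcam as MPEG-TS to the LAN broadcast address (sketch;
# /dev/video0 and 192.168.1.255 are placeholders for your setup).
ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -f mpegts "udp://192.168.1.255:1234?broadcast=1"
```

Any device on the LAN can then open `udp://@:1234` in VLC.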
Doohickey-d: You can do this all in fly.io, no Cloudflare container needed. The whole selling point of Fly is lightweight, fast VMs that can be "off" when not needed and start on request. For this, I would:

Set up a "performance" instance, with auto-start on and auto-restart-on-exit _off_, which runs a simple web service that accepts an incoming request, does the processing and upload, and then exits. All you need is the fly config, Dockerfile, and service code (e.g. Python). A simple API app like that, which only exists to ffmpeg-process something, can start very fast (ms). Something that needs to load a bigger model, such as Whisper, will also still work, just a bit slower. Fly takes care of automatically starting stopped instances on an incoming request for you.

(In my use case: an app where people upload audio to have it transcribed with Whisper. I would send a ping from the frontend to the "whisper" service even before the file finished uploading, saying "hey, wake up, there's audio coming soon", and it was started by the time the audio was actually available. Worked great.)
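A minimal sketch of that process-and-exit worker, using only the Python standard library (the port and the commented-out ffmpeg/upload step are assumptions; the real service would do actual work in `do_POST`). Because the process exits after handling one job, a Fly Machine with restart-on-exit disabled stops with it and costs nothing until the next request wakes it:

```python
# One-shot job worker: accept a single HTTP request, process, then exit.
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        job = self.rfile.read(length)  # e.g. an object key or source URL
        # Real work would go here, e.g.:
        # subprocess.run(["ffmpeg", "-i", src, ...], check=True); upload(dst)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"done")

    def log_message(self, *args):  # keep the one-shot server quiet
        pass

def main():
    server = HTTPServer(("0.0.0.0", 8080), JobHandler)
    server.handle_request()  # serve exactly one job...
    server.server_close()    # ...then fall through and exit the process

if __name__ == "__main__":
    main()
    sys.exit(0)
```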
michaelbuckbee: That's a good trick (the "get ready" ping). It reminds me of how early Instagram was considered fast because they did the photo upload in the background while you were typing your caption so that by the time you hit "upload" it was already there and appeared instantly.
pocksuppet: It could be a little more efficient to use a multicast address. Even if you don't have any special multicast routing set up, all the receiving machines should be able to discard the traffic a bit earlier in the pipeline.
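The same sketch with a multicast group instead of the broadcast address (the group `239.255.0.1` and port are arbitrary choices from the administratively scoped range; `ttl=1` keeps the traffic on the local segment):

```shell
# Multicast variant: NICs not subscribed to the group can drop the
# packets in hardware instead of interrupting the host for each one.
ffmpeg -f v4l2 -i /dev/video0 \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -f mpegts "udp://239.255.0.1:1234?ttl=1"
```

Receivers open `udp://@239.255.0.1:1234` in VLC, which joins the group.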
x0x0: It may be even easier not to leave a stopped VM around at all. Using either the fly command or their API, you can kick off a one-off machine that runs an arbitrary script on boot and dies when that script ends. Yanked from my script:

```ruby
cmd = [
  "fly", "machine", "run", latest_image,
  "--app", APP_NAME,
  "--region", options[:region],
  "--vm-size", "performance-1x",
  "--memory", options[:memory] || "2048m",
  "--entrypoint", "/rails/bin/docker-entrypoint bundle exec rake #{rake_task}",
  "--rm",
]
system(*cmd)
```

or a 1:1 transliteration to their API. You can of course run many of these at once.