This is pretty neat. I was experimenting with something similar in my ffmpeg frontend: connecting to the local (and remote) machine to run arbitrary encoding jobs, offloading the encode tasks to another machine while still keeping a local queuing mechanism.

The project is https://ffmpeg-commander.com for generating ffmpeg commands, but with an experimental backend to offload the tasks.

Do you support chunked encoding across multiple servers? It would be a great feature to support larger video files.

reply
Since we're on the homepage, please forgive my shameless plug: https://github.com/steelbrain/LemurCam

I built this macOS app that allows you to use any off-the-shelf Wi-Fi camera as your webcam with Zoom, Microsoft Teams, etc. It has lower latency than OBS, VLC, etc. based on my testing; it's Swift-native and pretty lightweight.

It was built mainly for my own team so they don't have to run long USB camera cables or pay a lot of money for a "wireless webcam". I hope you find it useful!

reply
nice, problem is that, with Hikvision and Dahua getting banned these days, the majority of IP cameras on the market don't support ONVIF or RTSP. what a shame.
reply
Get a TP-Link Tapo! They're like 20-30 bucks and come with ONVIF.

EZVIZ is another ban-evading arm of Hikvision; it's easily available in Europe and has RTSP (confirmed), with alleged support for ONVIF as well.

reply
Why are you using github as your personal proprietary app depot?
reply
I am using it as my everything-depot. Beyond this proprietary app, you'll find more than a hundred of my other open-source projects there as well.

Including the building blocks of said proprietary app: https://github.com/steelbrain/XMLKit & https://github.com/steelbrain/IPCamKit

reply
Nice to see you here!

Was really impressed by your work on Pundle. (It was an amazingly fast HMR dev environment - much like Vite today.) Felt like I was the only one using it, but it was hard to walk away from instant updates.

reply
Thanks for putting a smile on my face! I am glad you liked it! :)
reply
ffmpeg has great HTTP input and output support. I've been using this quite a bit recently, wrapping ffmpeg with Node.js and using the built-in HTTP server and client to interact with it.

It's even reduced load considerably, because most of the time the disk doesn't need to be touched at all.

reply
Maybe you can submit a patch to ffmpeg.org.
reply
I've considered it, thanks for the nudge. Since the patches are quite specific to my use case:

- https://github.com/steelbrain/ffmpeg-over-ip/blob/main/fio/f...
- https://github.com/steelbrain/ffmpeg-over-ip/tree/main/patch...

I am not sure they'll be accepted for upstreaming, but in exploring the options, I noticed ffmpeg has sftp:// transport support and there were some bugs surrounding that. I do intend to publish some patches for those.

reply
cool idea. can you elaborate on IO and how the ffmpeg-server reads blocks from the client? that would seem to be a big blocker
reply
> cool idea. can you elaborate on IO and how the ffmpeg-server reads blocks from the client? that would seem to be a big blocker

ffmpeg-server runs a patched version of ffmpeg locally. When ffmpeg asks to read some chunks (i.e. "give me video.mp4") through our patched filesystem layer (https://github.com/steelbrain/ffmpeg-over-ip/blob/main/fio/f...), the request gets sent over the same socket that the client established. The client receives the request, does the file operations locally, and sends the results back over the socket to the server, which then hands them to ffmpeg.

ffmpeg has no idea it's not interacting with a local file system
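To make the flow concrete, here is a rough sketch of what such a remote read could look like on the wire. The actual ffmpeg-over-ip protocol may differ entirely; the field names and the length-prefixed JSON framing are my invention for illustration:

```javascript
// Hypothetical framing for server<->client file operations:
// 4-byte big-endian length prefix followed by a JSON payload.

function encodeFrame(message) {
  const body = Buffer.from(JSON.stringify(message), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  return Buffer.concat([header, body]);
}

// Decode one frame from a buffer; returns null if the frame is incomplete,
// so the reader can wait for more bytes from the socket.
function decodeFrame(buffer) {
  if (buffer.length < 4) return null;
  const length = buffer.readUInt32BE(0);
  if (buffer.length < 4 + length) return null;
  return JSON.parse(buffer.subarray(4, 4 + length).toString('utf8'));
}

// Server side: ffmpeg's patched filesystem wants bytes, so ask the client.
const readRequest = { op: 'read', path: 'video.mp4', offset: 0, length: 65536 };
const decoded = decodeFrame(encodeFrame(readRequest));
```

The client would answer with a matching frame carrying the requested bytes, which the server then feeds back to ffmpeg's read call.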

reply
Is video that CPU/GPU-bound that streaming it over the interwebs isn't the issue?

Maybe my use cases for ffmpeg are quite narrow, but I always get a speedup from moving the files off my external hard-drive, suggesting that is my current bottleneck.

reply
> streaming it over the interwebs isn't the issue

The hope is that you stream over the LAN, not the interwebs!

> I always get a speedup from moving the files off my external hard-drive

Based on your description, it does seem like your ffmpeg may be IO-limited

reply
very clever and thanks for explaining. for gpu-bound processes, which are common ffmpeg use cases, this is a great approach
reply
What's the point of this?

A single CPU core on a 9500T or a Ryzen V1500B is fast enough to re-encode 60 Mbps 4K H.264 to 1080p 5 Mbps H.264 in real time. For a core use case - transcoding for the web with Jellyfin over cellular, for example - you haven't needed hardware video engines on PCs for 9 YEARS.
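A claim like this is easy to smoke-test with stock ffmpeg flags (file names are placeholders; `-threads 1` caps the encoder's worker threads, approximating the single-core scenario):

```shell
# Software-only 4K H.264 -> 1080p H.264 at ~5 Mbps, encoder capped to one thread.
ffmpeg -i input-4k.mp4 \
  -vf scale=-2:1080 \
  -c:v libx264 -b:v 5M -preset veryfast \
  -threads 1 \
  -c:a copy \
  output-1080p.mp4
# If wall-clock encode time is under the clip's playback duration,
# the transcode is real-time capable on that core.
```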

I have no idea why people are so hung up on hardware video encoding. It's completely wrong. The quality is worse. The efficiency is a red herring - you will still use every CPU core for IO threads in ffmpeg, if you don't configure that away, which you do not. And it requires really annoying setup and premium features on stuff like Plex. It just makes no sense!

If latency is important to you, well then hardware engines make sense. But you are throwing away the latency sending it over the network. The only use case (basically) is video game streaming, and in that case, you'll have a local GPU.

I have never seen one of these ffmpeg network hardware-encode projects include an actual benchmark comparison against single-threaded software transcoding.

I know you mean well but really. It makes NO sense.

reply
Thank you for sharing your experience. Seems like this is not relevant to your setup & use case.

People who need this know who they are. Not everything is for everybody.

reply
What is the use case?

I'd argue this is for nobody haha

Nobody using Jellyfin, Plex, or whatever needs it: they should just use software transcoding; it's better in pretty much every way.

reply
I've traveled around a lot in the past couple years so my situation (read: homelab equipment) has been changing and my usecase has been changing with it. It started out as:

- I don't want to unplug the GPU from my gaming PC and plug it into my Linux server

- Then: I don't want to figure out PCIe passthrough, I'll just open a port and NFS-mount into the containers/VMs (ffmpeg-over-ip v4 needed a shared filesystem)

- Now: I have a homelab of 4 mini PCs, and one of them has an RTX 3090 over OCuLink. I need it for local LLMs but also for video encoding, and I don't want to do both on the same machine.

But you've asked a more fundamental question: why would people need hardware-accelerated video transcoding in the first place? I need it because my TV doesn't support all the codecs, and I still want to watch my movies at 4K without stuttering.

reply
You can transcode in realtime in software to your TV. You don't need the GPU at all. Even on ancient USFF PCs.
reply
I'll tell my TV you said that and I'll see if it stops buffering during playback :)
reply
> The efficiency is a red herring - you will still use every CPU core for IO threads in ffmpeg, if you don't configure that away, which you do not. And it requires really annoying setup and premium features on stuff like Plex. It just makes no sense!

I would love to learn more about this! What can I do to fully optimize ffmpeg hardware encoding?

My use case is transcoding a massive media library to AV1 for the space gains. I am aware this comes with a slight drop in quality (which I would also be keen to learn about how to minimize), but so far, in my testing, GPU encoding has been the fastest/most efficient, especially with Nvidia cards.

reply
You would use your full system, saturating both the CPU and GPU - including unlocking the simultaneous video-session limit on consumer NVIDIA GPUs. That said, software AV1 looks a lot better than hardware AV1 per bit.
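One way to see the per-bit difference yourself: encode the same clip both ways at a matched quality target and compare. The flags below are standard libsvtav1/av1_nvenc options; file names are placeholders, `av1_nvenc` needs an AV1-capable NVIDIA card, and the VMAF step needs an ffmpeg build with libvmaf:

```shell
# Software AV1 (SVT-AV1) vs hardware AV1 (NVENC) at a matched quality target.
ffmpeg -i clip.mp4 -c:v libsvtav1 -preset 6 -crf 32 -c:a copy clip-svt.mkv
ffmpeg -i clip.mp4 -c:v av1_nvenc -cq 32 -c:a copy clip-nvenc.mkv

# Compare each encode against the source, e.g. with VMAF:
# ffmpeg -i clip-svt.mkv -i clip.mp4 -lavfi libvmaf -f null -
```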
reply
As a rule, strong feelings about issues do not emerge from deep understanding.
reply