(MilkDrop3, projectm-visualizer/presets-cream-of-the-crop, westurner/vizscan for photosensitive epilepsy)
mapmapteam/mapmap is an open-source multi-projector mapping tool. How could something like mapmap be integrated?
BespokeSynth is a C++/JUCE software modular synth with a node-based patch-bay UI and VST3, LV2, and Audio Unit plugin support. How to feed BespokeSynth audio, and possibly someday video? PipeWire, routed with e.g. Helvum?
The reason to create image sequences is not that you need to send them to other apps; it's that you preserve quality and safeguard against crashes.
A crash midway through writing out a video file can corrupt a lengthy render. With image sequences you only lose the frame currently being written.
People aren't going to stop using image sequences even if the frames never left the same app.
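The crash-safety argument above also explains why resuming is cheap: you can just scan for frames that already exist and re-render the gaps. A minimal sketch of that resume logic, assuming a hypothetical `frame_00042.png`-style naming scheme (the function name and pattern are illustrative, not from any particular renderer):

```python
import os

def frames_to_render(out_dir, total_frames, pattern="frame_{:05d}.png"):
    """Return the 1-based frame numbers not yet present in out_dir.

    After a crash, only the missing (or last, possibly truncated) frames
    need re-rendering; every completed frame on disk is kept as-is.
    """
    return [n for n in range(1, total_frames + 1)
            if not os.path.exists(os.path.join(out_dir, pattern.format(n)))]
```

The same check covers partial re-renders: delete the frames you want redone and the gap list tells the renderer exactly what to produce.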
And I'm not sure the claim that "this goes beyond" what Apple has holds up, because Apple does have hardware support for decoding several compressed codecs (and I'll note that ProRes is itself compressed). Other than streaming, when would you need that kind of encode performance? And what other codecs do you expect will suddenly pop up once ASICs are no longer required?
Also, how does this remove degradation when moving between apps? Are you envisioning that this lets Blender stream to an NLE without first writing a file to disk?
You wouldn't wrap FFV1 in MP4 anyway; MP4 is about the only container fragile enough for that kind of corruption, since its moov index is typically written last, so an interrupted write can leave the whole file unreadable.
Apple has an interest in discouraging codecs it collects no fees from, and Apple doesn't ship a lossless video codec of its own, so it doesn't offer hardware acceleration for lossless compressed video.
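For reference, wrapping lossless FFV1 in Matroska instead of MP4 is a one-line ffmpeg invocation. A sketch that builds the command (the `-c:v ffv1`, `-level`, and `-slicecrc` flags are real ffmpeg options; the file names are placeholders):

```python
def ffv1_mkv_cmd(src, dst):
    """Build an ffmpeg command wrapping FFV1 video in Matroska.

    -c:v ffv1     use FFmpeg's lossless FFV1 encoder
    -level 3      FFV1 version 3 (sliced frames, multithreaded)
    -slicecrc 1   per-slice CRCs, so any damage is detectable and localized
    -c:a copy     pass audio through untouched
    """
    return ["ffmpeg", "-i", src,
            "-c:v", "ffv1", "-level", "3", "-slicecrc", "1",
            "-c:a", "copy", dst]
```

Run the returned list with `subprocess.run(ffv1_mkv_cmd("render.mov", "render.mkv"))`; the per-slice CRCs are part of why FFV1-in-MKV tolerates interruption far better than MP4 does.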
The idea is that when you're working as part of a team and get handed a CG render, you can avoid either shipping a huge .tar or .zip full of TIFFs that then has to be unpacked, or shipping ProRes, which loses quality, particularly in a linear colorspace like ACEScg.
Another reason to use image sequences is that it's easy to re-render just a portion of the sequence. Granted, this can be done with video too, but with higher overhead.
But even then, why does GPU encoding change the fact that you'd send it to another NLE? I just feel like there are a lot of jumps in the thought process here.
But even so, everybody is making their own proxies all the time. There's a lot of passing around of ProRes Proxy or another intermediate-quality format, and you still make even lighter proxies locally, so NLEs and workstation apps would still benefit from this.