upvote
From my understanding after reading, he's suggesting that Unreal's and Unity's post-processing just apply effects to a camera/rendered frame, whereas what he wanted was to simulate the CRT itself throughout the renderer, all the way to the frame that hits the swapchain.
reply
But that's nonsensical. The CRT doesn't see the graphics pipeline of, say, an SNES; it just sees an analog signal. The graphics processing happens in the digital realm, not the analog one. If you want to simulate a CRT, all you need is a physical model and a digital image to display, which can come from Unreal, Unity, or any other engine or program. It makes literally zero sense to write an entire engine to implement a CRT simulation.
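To make the point concrete: a CRT look really is just a transform applied to a finished frame, regardless of what produced those pixels. A toy sketch (function and parameter names are mine, purely illustrative, not from any engine):

```python
import numpy as np

def crt_filter(frame: np.ndarray, scanline_strength: float = 0.4) -> np.ndarray:
    """Apply a toy CRT effect to an RGB frame (H x W x 3, floats in [0, 1]).

    The input is just a digital image; whether it came from Unreal, Unity,
    or an emulator's framebuffer is irrelevant to the simulation.
    """
    out = frame.copy()
    # Darken every other row to fake scanlines.
    out[1::2, :, :] *= 1.0 - scanline_strength
    # Crude aperture-grille mask: each pixel column favors one of R, G, B.
    _, w, _ = out.shape
    mask = np.full((w, 3), 0.75)
    mask[0::3, 0] = 1.0  # columns 0, 3, 6, ... keep full red
    mask[1::3, 1] = 1.0  # columns 1, 4, 7, ... keep full green
    mask[2::3, 2] = 1.0  # columns 2, 5, 8, ... keep full blue
    out *= mask[np.newaxis, :, :]
    return np.clip(out, 0.0, 1.0)

# Any source of pixels works; here, a flat gray test frame stands in.
frame = np.full((4, 6, 3), 0.8)
filtered = crt_filter(frame)
```

Real CRT shaders (curvature, bloom, phosphor masks) are fancier, but they have the same shape: image in, image out.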
reply
Yeah, I'm with you. I hate to assume such things, but with how much AI spam is out there on programmer blogs these days, I tend to just give up on a blog post once something becomes confusing. Most of the time there's no insight to be gained by digging deeper.

This one also has a lot of "It's not X, it's Y" type phrasing.

reply
Mmm, while this person's articles are clearly AI-written, they do make some sense. Their renderer samples the previous frame to achieve the effect, which is of course totally possible to do in Unreal or Unity, but they also seem to have their own lighting and PBR models, which might be a bit harder to achieve.
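Sampling the previous frame like that is a standard temporal-feedback trick for phosphor persistence. A minimal sketch of the idea as I read it (my own names and decay constant, not anything from the blog):

```python
import numpy as np

def phosphor_blend(current: np.ndarray, previous: np.ndarray,
                   decay: float = 0.6) -> np.ndarray:
    """Combine the new frame with a decayed copy of the last output.

    Taking the per-channel max (rather than a lerp) lets bright pixels
    leave a fading trail without dimming the live image.
    """
    return np.maximum(current, previous * decay)

# Feedback loop: each output is fed back in as the next "previous" frame.
prev = np.zeros((2, 2, 3))
flash = np.ones((2, 2, 3))   # one bright frame...
dark = np.zeros((2, 2, 3))   # ...followed by black frames
prev = phosphor_blend(flash, prev)
trail1 = phosphor_blend(dark, prev)    # afterglow, one frame later
trail2 = phosphor_blend(dark, trail1)  # dimmer still, two frames later
```

In an engine you'd do this in a shader with a ping-pong pair of render targets, but the data flow is the same.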

>Lighting systems are designed to remain readable under CRT-style color quantization. Sprite and mesh pipelines emphasize bold shapes and strong contrast. Even debugging tools in the engine, like the grid overlays and scene visualization systems, exist partly to help developers maintain spatial clarity and composition.

This is AI nonsense, but it could be a summarisation of something real.

reply
[dead]
reply