Also the first section I jumped to :) To Intel's credit, it seems they're slowly improving; the section starts with:

> Over the last year or two, Intel has worked to deliver serious optimizations for and compatibility with Blender GPU rendering on its Arc GPUs. Although NVIDIA has long held an advantage in the application, our last time looking at Intel’s cards indicated ongoing improvements. This round of testing is no different. We found that the Arc Pro B70 provided more than twice the performance of the B50, also beating the R9700 by 9%.

reply
Yeah, I checked an Intel GPU some years ago and I think it was scoring near 1000 or below in Blender's Open Data benchmark. Glad it's slowly improving, though I'd have to check whether the price is also increasing; I suspect it's still cheaper than the other options.
reply
Is this because Blender is in fact using CUDA?
reply
Blender supports CUDA, HIP, oneAPI, and Metal, so those Intel GPUs were performing poorly even through their native API.
reply
The key feature on Intel platforms is hardware denoising acceleration (NVIDIA OptiX also works well). Note that AMD's OpenCL (now HIP) backend works quite well for some renders, but Blender's Flamenco render farm prefers consistent cluster hardware.

For 8K HDR10 media or 3+ screens, the RTX 5090 32 GB model is the minimum card people should buy. Just because you see 4 DP ports doesn't mean the card can push the bit rates needed to drive an HDR10 display above 60 Hz.
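To put a number on that, here's a back-of-envelope sketch of the uncompressed bandwidth an 8K 10-bit display needs versus what a DisplayPort link can carry. The payload figures are from the DP specs; blanking overhead is ignored, so real requirements are slightly higher:

```python
# Rough check: can one DisplayPort 1.4 link drive 8K HDR10 at 60 Hz
# without Display Stream Compression? (Blanking overhead ignored.)

def video_bandwidth_gbps(width, height, bits_per_channel, refresh_hz, channels=3):
    """Uncompressed RGB video payload in Gbit/s."""
    return width * height * bits_per_channel * channels * refresh_hz / 1e9

needed = video_bandwidth_gbps(7680, 4320, 10, 60)  # 8K, 10-bit RGB, 60 Hz
dp14_payload = 25.92   # DP 1.4 HBR3 x4 lanes, after 8b/10b encoding, Gbit/s
dp21_uhbr20 = 77.37    # DP 2.1 UHBR20 payload, Gbit/s

print(f"8K60 HDR10 needs ~{needed:.1f} Gbit/s")           # ~59.7 Gbit/s
print(f"DSC required on DP 1.4: {needed > dp14_payload}")  # True
print(f"Fits uncompressed on DP 2.1 UHBR20: {needed < dp21_uhbr20}")  # True
```

So even 8K60 already exceeds a DP 1.4 link without compression; anything above 60 Hz only widens the gap.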

A Mac Studio with >512 GB of unified RAM/VRAM is a better LLM lab solution (Apple recently nerfed it to 256 GB). Who cares if a task completes a bit slower; it doesn't matter given the lower error rates... and it doesn't cost $14k like an RTX 6000. =3

Great tutorial on getting Blender to behave on mid-grade PCs, laptops, etc.:

https://www.youtube.com/watch?v=a0GW8Na5CIE

reply