Hello,
I had two surprises. First, for a 4-minute, 20-second project, rendering time dropped from 50 minutes with h265 on the CPU (Ryzen 7, 32 GB of RAM) to 30 minutes with h265-nvenc on an Nvidia GTX 1050 Ti (with the VDPAU setting enabled in preferences).
But when I did the exact same render with h265-nvenc and an Nvidia GTX 1080 Ti, which is at least 200% more powerful, the rendering time remained exactly the same at 30 minutes!
Is this normal?
I'm not sure why there is no difference between the 1050 Ti and the 1080 Ti. Quoted GPU performance mostly reflects 3D/compute throughput, and the 1080 Ti has many more CUDA cores, hence its much higher performance there. Hardware video encoding, however, does not run on the CUDA cores: it runs on NVENC, a dedicated fixed-function encoder block on the chip. Both cards belong to the same Pascal generation, so their NVENC engines are essentially the same, which would explain why the extra 3D power makes no difference to encoding times. But I'm not sure.
Regarding CPU vs. GPU rendering: CPU (software) encoding is generally more thorough in its analysis and tends to give better quality, but it lacks the massive parallelism of the GPU, which makes GPU encoding much faster (for the supported codecs, of course). A rough comparison can be made by looking at the size of the files produced at the same settings; CPU rendering will probably produce larger files, though file size is only an imperfect proxy for quality.
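To make that file-size comparison meaningful, it helps to convert each output's size into an average bitrate over the project's 4-minute-20-second duration. A small sketch, with purely hypothetical file sizes (not measurements from this project):

```python
# Convert an output file's size into an average bitrate so two renders
# of the same 4 min 20 s project can be compared on equal terms.
# The file sizes below are hypothetical examples, not measurements.

def avg_bitrate_mbps(size_bytes: float, duration_s: float) -> float:
    """Average bitrate in megabits per second."""
    return size_bytes * 8 / duration_s / 1_000_000

duration = 4 * 60 + 20  # 260 seconds

cpu_render = 450 * 1024 * 1024  # e.g. a 450 MiB file from the CPU encoder
gpu_render = 320 * 1024 * 1024  # e.g. a 320 MiB file from NVENC

print(f"CPU: {avg_bitrate_mbps(cpu_render, duration):.1f} Mbit/s")
print(f"GPU: {avg_bitrate_mbps(gpu_render, duration):.1f} Mbit/s")
```

With these example numbers the CPU render averages about 14.5 Mbit/s and the GPU render about 10.3 Mbit/s; comparing bitrates like this is fairer than raw sizes if the clips differ slightly in length.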
However, the GPU presets can be tuned for better quality. Encoding then becomes slower (and the files larger), but it is still much faster than CPU rendering.
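As an illustration, Cinelerra-GG's ffmpeg render presets are plain-text option files (a muxer/codec line followed by key=value encoder options), so quality-oriented NVENC settings might look something like the sketch below. The option names are ffmpeg's hevc_nvenc encoder options; the values are illustrative, and the exact preset-file syntax may differ in your Cinelerra-GG version:

```
mp4 hevc_nvenc
preset=slow
rc=vbr
cq=19
```

Lowering `cq` (constant-quality target, 0-51) raises quality and file size; `preset=slow` trades encoding speed for compression efficiency.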
To speed up CPU rendering by spreading the work across several machines, you can use a “render farm”; see:
https://download.cinelerra-gg.org/files/CinelerraGG_Manual/Render_Farm_Usage.html
NB: the VDPAU setting in Preferences affects decoding (timeline playback), not the final encoding. The encoder can only be chosen in the rendering presets.