CINELERRA-GG can encode using the hardware acceleration of your graphics board's GPU, but this capability is of limited availability: it works only with a specific set of graphics boards, certain graphics driver versions, and certain ffmpeg formats. Encoding is done either via vaapi (with libva installed), which is known to work with Intel HD Graphics boards and some others, or via nvenc, developed by Nvidia for Nvidia graphics boards.
To use hardware acceleration for rendering (that is, encoding), you do not have to set a preference or an environment variable, as was required for decoding. Instead, you use an ffmpeg render options file that specifies a vaapi codec, such as h264_vaapi. You must include the following line in that options file to trigger the hardware probe: cin_hw_dev=vaapi
There are currently 4 options files already set up for you in the Render menu; you will see them when you select the Video wrench and use the down arrow on the first line of the menu. One of them, for example, is:

format | codec      | options
mp4    | h264_vaapi | cin_hw_dev=vaapi profile=high
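As an illustration, a render options file of this kind is a plain-text list of ffmpeg key=value options. A minimal sketch for the h264_vaapi case might look like the following; the exact first-line layout and any parameters beyond the cin_hw_dev and profile lines shown above are assumptions to be checked against the presets shipped with your installation:

```
h264_vaapi
cin_hw_dev=vaapi
profile=high
```

The cin_hw_dev=vaapi line is the part that triggers the hardware probe; without it, rendering falls back to software encoding.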
According to an online wiki, hardware encoders usually produce output of lower quality than software encoders such as x264, but they are much faster and use less CPU. Keep this in mind, as you may want to set a higher bitrate to get output of similar visual quality.
Results of a particular test case, performed on an Intel 4-core computer with Broadwell graphics using an mp4 input video/audio file (tutorial.mp4) with dimensions of 1440x1080 at 29.97 fps, are shown next. This may very well be a best-case scenario, but clearly, at least on this computer with only 4 cores, hardware acceleration is quite advantageous. A comparison of the 2 output files using ydiff, as described in the Appendix (C.1), shows no obvious defects.
Acceleration | CPU usage | Render Time | File Size (bytes) | File
none         | 388%      | 100 secs    | 36,862,542        | h264.mp4
vaapi        | 150%      | 19 secs     | 74,522,736        | h264_vaapi.mp4
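To make the trade-off concrete, here is a quick back-of-the-envelope calculation on the numbers in the table above (plain Python, nothing Cinelerra-specific):

```python
# Figures taken from the vaapi test table above.
render_time_secs = {"none": 100, "vaapi": 19}
file_size_bytes = {"none": 36_862_542, "vaapi": 74_522_736}

# How much faster the hardware encode finished, and how much
# larger its output file turned out to be.
speedup = render_time_secs["none"] / render_time_secs["vaapi"]
size_ratio = file_size_bytes["vaapi"] / file_size_bytes["none"]

print(f"vaapi rendered {speedup:.1f}x faster")      # about 5.3x faster
print(f"but the file is {size_ratio:.1f}x larger")  # about 2.0x larger
```

The roughly 2x larger file is consistent with the earlier note that hardware encoders may need different bitrate settings to reach comparable quality.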
To use nvenc hardware acceleration for rendering (that is, encoding), you likewise do not have to set a preference or an environment variable. Instead, you use an ffmpeg render options file that specifies the nvenc codec, either h264_nvenc.mp4 or nvenc.mp4. There are several requirements for this to work on your computer, as listed here:
If you try to render using the h264/h265_nvenc.mp4 formats and you do not have an Nvidia graphics card, or this feature was not built in, you will see the following error message in the window from which you started CINELERRA-GG: Cannot load libcuda.so.1
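Since that error is raised when the CUDA driver library cannot be loaded, you can check in advance whether libcuda.so.1 is present on your system. A small sketch using only the Python standard library (the function name is just for illustration):

```python
import ctypes

def have_libcuda() -> bool:
    """Return True if the Nvidia CUDA driver library can be loaded."""
    try:
        ctypes.CDLL("libcuda.so.1")
        return True
    except OSError:
        return False

if have_libcuda():
    print("libcuda.so.1 found: nvenc rendering may be available")
else:
    print("Cannot load libcuda.so.1: nvenc rendering will not work")
```

This only checks that the driver library is loadable; the remaining requirements (a supported Nvidia card and an nvenc-enabled build) still apply.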
A small test, using 2 minutes from the 4K version of Big Buck Bunny, shows that using nvenc can be about 4 times faster. The test was done on a 4-core Intel laptop with an Nvidia 950M graphics board.
Acceleration | CPU usage | Render Time     | File Size (bytes) | File
none         | 388%      | 20 mins 18 secs | 156,517,069       | h264.mp4
nvenc        | 252%      | 5 mins 44 secs  | 42,052,920        | h264_nvenc.mp4
Of note in this test: 388% CPU usage on a machine with only 4 cores means the CPU was essentially saturated, so the software encode was probably being slowed down by the lack of remaining CPU power. Using the GPU hardware acceleration with nvenc therefore provides a significant speed-up. Also note the larger file size without the GPU; this probably indicates a big difference in the bitrate or quality parameter settings used in the options files, and this should be taken into consideration when comparing the results.
There is one last potentially significant graphics speed-up when using the X11-OpenGL driver, for users with Nvidia graphics boards who see a frames/sec rate lower than what the video format is set to. You may want to disable Sync to VBlank (an OpenGL option) in NVIDIA X Server Settings for the proprietary drivers; this can increase your frames per second on playback.
The CINELERRA-GG Community, 2021