Hello,
Things are moving along with my project, which now includes my required 16-bit TIFF image sequences (created with lists), AVCHD 1080p60 MTS video, some assorted smartphone video, and a few still images. I need an alpha channel, so I am editing in RGBA mode. I used Cinelerra a long time ago with the same video source and made DNxHD intermediates back then (using a recommended ffmpeg script); I believe that has since been updated to DNxHR 422. In the meantime I worked a bit in Kdenlive, which allowed me to use mixed media with no intermediates, although it worked slower...
If I select convert to proxies scaled 1:1, which according to the manual is best for images (I also think I might need effects that benefit from keeping it 1:1, e.g. making the subject in some images get smaller and smaller while I keep a composited background at the same scale...), should I still convert my MTS files to an intermediate codec (transcoding done outside of Cinelerra), or is one of the proxy settings suggested in the manual sufficient and useful for the varied sources?
I did test one of the proxy settings and saved it to project b: ffmpeg, mpeg format, mpeg compression, yuv420p, 2000000 bitrate. But I did not know how to exclude the few JPEG still images, which once converted had an mpeg extension. Do they really require a proxy? Is there a way to select which media requires a proxy and which can be excluded in the same project?
I should mention that I just got a more recent NVIDIA graphics card with 6 GB of memory (GeForce RTX 2060 OC), while my system is quite old (Phenom 965 Black Edition 4-core CPU, 3.5 GHz, 12 GB DDR3 memory). I'm hoping that some of the transcoding, effects, etc. can make use of GPU processing; I have set the video driver to X11-OpenGL and am using the proprietary driver.
Is there a proxy setting that takes advantage of this? I'm a bit confused; I'm not sure which proxy setting can make both video and image sequences more efficient for editing and still allow an alpha channel...
"Is there a way to select which media requires a proxy and which can be excluded in the same project?" The goal was to allow proxies for only some media in your project, but I cannot seem to get this to work. I will see if this might be an introduced bug.
Proxies are used by Igor quite frequently so hopefully he will have some insight.
@andreapaz helped me in a previous post, and I had not realised there were transcode options in Cinelerra, in preferences, aside from the proxy options. I still mix up the terms transcode, intermediates, encode and so on; still fumbling along! I just wanted to confirm whether the DNxHD or DNxHR options are available among the options; I don't seem to find them. So for my MTS video I will transcode in the terminal using the correct variation of:
ffmpeg -i original.mts -b:v 185M -s 1920x1080 -pix_fmt yuv422p -c:v dnxhd -c:a pcm_s16le intermediate.mov
ffmpeg -i "inputvideo.mp4" -c:v dnxhd -b:v 110M -pix_fmt yuv422p -c:a pcm_s16le "outputvideo.mxf"
I am still getting confused, unfortunately. Should I also be transcoding the TIFF sequences with DNxHR? I tried EXR sequences as a test, but Cinelerra crashes. I also tried a few more transcode or proxy settings but cannot select/exclude among the various media. I will try to transcode the image sequences in Cinelerra before loading the other media (MTS video or JPEG) so the other media is not transcoded again.
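Since the TIFF frames are numbered, an external transcode of the sequence can be sketched like this (the `img_%04d.tif` pattern, the 24 fps rate, and the output name are hypothetical; note that ffmpeg's dnxhd encoder has no alpha-capable pixel format, so the alpha channel would be lost in a DNxHR intermediate):

```shell
# Sketch: TIFF sequence -> DNxHR_HQ intermediate (alpha does not survive).
# Adjust -framerate and the filename pattern to match your sequence.
ffmpeg -framerate 24 -i img_%04d.tif \
       -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
       intermediate.mov
```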
https://www.cinelerra-gg.org/forum/postid/2287/
@andreapaz: You have to decide whether to do a high-quality workflow or reduce everything to AVCHD quality (which should be YUV 8-bit with 4:2:0 chroma subsampling). In the first case you have to transcode everything to a high-quality intermediate (for example DNxHR 444) or use TIFF image sequences. Otherwise, PNG and sRGB are fine. Just be sure to keep the color spaces of the various media homogeneous, using the "ColorSpace" plugin if there is a need to conform them.
"I'm hoping that some of the transcoding, effects etc can make use the GPU processing- I have set the video driver to x11-openGL"
Although the X11-OpenGL video driver is the best for video playback because it uses PBuffers and shaders to do video rendering, there are some plugins and transitions that cannot use OpenGL and will use software instead of hardware, which slows down playback. Camera and projector operations use OpenGL. You can go to Settings->Preferences, Playback A tab, Video Out section and uncheck "Play every frame", but for me, skipping a few frames during playback just confuses me.
I just wanted to confirm if the DNxHD or DNxHR options are available amongst the options, I don't seem to find them.
Set the file format to "ffmpeg" and the file format type to "mov", and then by clicking on the "Video wrench" you can use the down arrow to see the options: dnxhr_hq, dnxhr_lb, dnxhr_444, dnxhr_hqx, and dnxhr_sq.
cannot select/exclude amongst the various media
I have not found a way to do this either. But I am wondering if a change was introduced at some point that invalidates this, because my memory and the manual's statement that "Checking the Creation of proxy on media loading results in any additional media loads to be automatically proxy scaled." seem to indicate that if you do not check "Creation of proxy on media loading", it does not scale them. In reviewing past release notes, I see that the proxy code went through a lot of revisions as requested improvements were implemented, so maybe this feature was dropped along the way.
Because "single frame media such as PNG or JPEG stills, can not be scaled to stream media.", I am wondering what the purpose is in transcoding them - bitrate maybe?
Because my Laptop is old I have to use Proxy. Usually my Proxy is set to: Scale factor=1/4, "Rescaled to project size"= UNCHECKED, FFMPEG | mpeg or mov.
For the alpha channel, you can see my Proxy settings at 5m17s in the "Animated Split Screens" tutorial: https://www.youtube.com/watch?v=YCqJnHLmj6s
The project format is 1920x1080 @ 30fps; the dimensions of the original videos are 1920x1080.
My Settings->Preferences:
- Playback A tab, Video Out:
  - Play every frame = UNCHECKED
  - Video driver = X11
  - Use direct X11 render if possible = CHECKED
- Performance tab:
  - Use HW Device = none
  - Project SMP cpus = 4
My laptop works well enough in that configuration.
If you don't change the type of extension (or the Scale Factor) for the proxy but change the type of codec (Compression), you have to delete or move your proxy files to another folder. For example, if you have chosen FFMPEG | qt, Compression = png.qt, and then you want to change from png.qt to mjpeg.qt, Cinelerra thinks that the proxies have already been created. Cinelerra-GG doesn't know the difference between png.qt and mjpeg.qt.
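Since stale proxies have to be cleared by hand, a quick way to see what is lying around next to the originals is a find over the media folder (the `.proxy` infix in the example filenames is an assumption about the naming scheme; check one of your own generated proxies and adjust the pattern to match):

```shell
# Sketch: list files that look like Cinelerra-GG proxies so stale ones
# can be moved or deleted before changing the proxy codec.
# A throwaway directory stands in for the real media folder here.
tmp=$(mktemp -d)
touch "$tmp/clip.mp4" "$tmp/clip.proxy4.mpeg" "$tmp/still.jpg"
find "$tmp" -name '*proxy*' -type f   # lists clip.proxy4.mpeg
rm -r "$tmp"
```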
@igorbeg thanks! I took a look at the linked video & will look at more...
Because I have image sequences and it is recommended to keep them at 1:1, I will see if I can remove the background in Natron and then render the images as a video file, so I can import and proxy all video scaled to 1:4. BTW, can Cinelerra mask with some sort of edge detection, by any chance? I have not used a green screen behind my subject, and the background and subject have some overlapping hues and exposure, so chroma key is not a good option. Otherwise I would have to change the mask on almost every frame...
Hello, yes, thanks @phylsmith2004. I finally realised that by trying the .mov extension, the DNxHR options were revealed; I'm so rusty on everything! I tried to transcode the TIFF sequence with DNxHR_HQ as well, but it stutters and is not fluid in the compositor, while the various videos are fine. Not sure if it's because I'm still trying to figure out what to use for them; I guess a PNG proxy... I think that once I add effects my old computer will struggle, so I need to proxy the DNxHR_HQ video to something else as well. I keep re-reading the manual and some sites online, trying to figure out the right combination of transcoding and proxy settings, but it is still confusing me since I need to stick to 1:1 and have an alpha channel.
So far, converting to DNxHR_HQ seems to be the best option for all my media; it drops to 11-12 frames per second from 24 with a few effects attached. Not sure if this can be improved. I tried compression magicyuv.qt, but it lowered the framerate even more, and it was very low with effects. Applying a combination of transcode then magicyuv.qt proxy gave the worst result. I wonder if there is a way to put all the proxy and transcode files into a folder that Cinelerra will be able to open? Right now they are all mixed up with other files in my /home/videos, where I keep the cin AppImage.
I wonder if there is a way to put all the proxy & transcode files into a folder that cinelerra will be able to open?, right now they are all mixed up with other files in my /home/videos
Currently these files MUST be in the same directory as the original files. The original design goal was to make it as fast as possible to switch from viewing the proxy file back to viewing the original file (the P/S switch in the upper right-hand corner of the program window), because you would only be using Proxy if you needed speed in the first place. It is a little messy, but I have found that once you decide on the "best" proxy scale size, you should go into your video folder and delete the "trial and error" ones. Also, once you have created these, a lot of time is saved because you never have to create them again. And if you come back days later you can easily see that "oh yeah, I have to use the proxy or it will be too slow to edit". However, I will log a Bug Tracker request to use the same kind of generic default as the Settings->Preferences, Interface tab "grabshot location" default of $HOME/Pictures, so you can set it to whatever you want ($HOME/Pictures was chosen because all of the O/S distros seem to create it automatically for a user). Unfortunately, it takes years to learn the details of the CinGG software code, so it may be a while before this could be implemented.
I keep re-reading the manual ... but it is still confusing me
Yes. I re-read the Proxy section in the manual yesterday and it needs a lot of clarification. It kind of just grew 10 heads as so many suggested changes were made to the code over a few years.
I wonder if there is a way to put all the proxy & transcode files into a folder that cinelerra will be able to open?, right now they are all mixed up with other files in my /home/videos,
I'd be interested in the answer to this, it happens to me too. It's so damned untidy and more than a little confusing when searching through the clips!
I found the best way to transcode, although it's much more labour intensive, is to render DNxHR_HQ from the timeline by labelling each clip and selecting "create new clip at each label" in the render box, then transferring the clips to a "transcoded" folder. I have never tried this for proxies because it would not bring up the proxy on/off switch on the timeline. I've been editing using the transcodes.
Are you transcoding AND generating proxies? That's how it reads to me. If so, why? Just curious. 😀
@dejay That's an interesting way to organise them. I started by loading only resources to review in viewer first, but for future projects, I'll see...
I thought that creating proxies, with the option to switch them on for more CPU-intense effects and off for colour grading, might be a good idea. But from a few tests I didn't find any useful transcode/proxy combination at 1:1 scale factor, so for now I just transcoded.
I may have to re-transcode? I forgot to ask whoever might know whether it would have been better to set the bitrate for DNxHR_HQ (for 1080p footage going from 60fps to a 24fps project). I actually transcoded everything with 0 bitrate, quality -1, and the other defaults in dnxhr_hq.mov compression. I had tested with 200000 and 4000000 bitrate but didn't see much difference.
I'm not sure how to use the information from :
https://avid.secure.force.com/pkb/articles/en_US/white_paper/DNxHR-Codec-Bandwidth-Specifications
Would that be 20790000 bytes, or is it a 20,000 bitrate for my project?
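If the figure from the Avid white paper is in bytes per second, ffmpeg's `-b:v` wants bits per second, so the conversion is just a factor of 8 (20790000 here is the number quoted above, used purely as an example input):

```shell
# Sketch: convert a bytes-per-second bandwidth figure to the
# bits-per-second value ffmpeg's -b:v expects.
bytes_per_sec=20790000
bits_per_sec=$((bytes_per_sec * 8))
echo "${bits_per_sec} bit/s (~$((bits_per_sec / 1000000)) Mbit/s)"
```

That would come out to roughly 166 Mbit/s, though DNxHR profiles fix their own bitrate, so setting `-b:v` may be unnecessary.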
With my aging computer, I might have to do what I recall doing years ago with MTS footage on the same computer: render portions with a few effects to a 32-bit float format and re-insert them into my project for further editing. It's too bad that Cinelerra can't use my new GPU very much (which is all I was able to update for now), as Darktable can with OpenCL, even though I set X11-OpenGL and CUDA in preferences...
If using offline editing (proxy editing), there is no need to transcode footage unless it is in a format your NLE cannot accept. Editing is performed on the proxy clips, and the edits will be applied to the original clips upon rendering, as long as you switch the proxies off first (hence the comment in my previous post about not generating proxies the same way as I transcode).
If transcoding to an intermediate "edit friendly" codec, which usually means going from a CPU-intensive long-GOP format such as MP4 to a much easier to handle all-intra format such as DNxHD, then proxies should not be needed; but if they are, it would have been better to use proxies without transcoding in the first place.
There is no need to worry about bit rates and quality settings for DNxHR, as they are fixed by the resolution and frame rate of the media being encoded.
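Following on from that, a DNxHR transcode only needs the profile flag; ffmpeg's dnxhd encoder derives the bitrate from the profile and the input's geometry and frame rate (filenames here are placeholders):

```shell
# Sketch: DNxHR_HQ intermediate with no explicit -b:v; the encoder
# picks the bitrate from the profile, resolution, and fps.
ffmpeg -i original.mts \
       -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p \
       -c:a pcm_s16le intermediate.mov
```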
Just a note to say that in CinGG the Transcode feature is not complete. It originated within the proxy code, as a way to do 1:1 proxies. For example, transcode does not set color spaces; these will have to be changed either beforehand, outside of CinGG, or after transcoding with the ColorSpace plugin.
I have found no problems using different formats and resolutions in the same timeline. I do recommend, however, that there be uniformity of color spaces (and FPS).
PS: you can use DNxHR_lb, which is the "proxy" version of the Avid format (low bandwidth). But to apply filters and do color correction you have to switch to the non-proxy version. It is easy in CinGG to switch between the modes at any time.
Currently these files MUST be in the same directory as the original files - the original design goal was to make it as fast as possible to switch from viewing the proxy file back to viewing the original file (the P/S switch in the upper right hand corner of the program window) because you would only be using Proxy if you needed speed in the first place.
I would argue that you would also use proxy clips if you want smooth playback, which is probably, but not necessarily, the same thing.
As for transcoding, it only needs to be done once and afterwards the original files could be deleted if desired.
We seem to be confusing transcoding with proxy generation. Transcoding is converting clips to a different working format, clips that can then be used in place of the original clips. Proxy clips are low-resolution clips to be used alongside the original clips, as temporary substitutes, not replacements. Both should help with a slow computer, in some cases proxy more so than transcoding.
Good clarification - definitely not the same things! But in any case it would be less messy if both Proxy and Transcode files were put somewhere else. I did log a BT:
Hey guys! @dejay @phylsmith2004 @andreapaz, thanks for all the info. I will go over it after taking a little break, much needed as it's been a crazy, bumpy journey: crash-learning Darktable (a raw image editor that makes heavy use of OpenCL) for my stop-motion images, and getting a newer NVIDIA card (all I could budget for), only to realise that most video editors are still limited by the CPU and drive speed, except for DaVinci Resolve. Thus I really need either efficient proxies or transcoding to help my system.
What I found confusing is that the many options for both transcoding and proxies seem identical except for the scale option, hence trying to wrap my head around the difference. As I remember it from the "Cinelerra for Grandma" manual, the terms for intermediate codecs (at that time there were DNxHD and the mjpeg type) were used almost interchangeably with proxies, since we still had to point Cinelerra to the original files for the final render. I am surprised to read that with good codecs you can now drop the originals!
My main types of files, TIFF sequences (which could not play in the compositor and would cause Cin to crash) and the original MTS video (interframe, long-GOP), were all transcoded and replaced with DNxHR_HQ, but upon testing with effects, playback is slow, especially the video (3-4 frames per second).
(To complicate things, I was hoping to try a few things like the painting effect in Natron with the edited footage (Natron works with many sources in Rec.709 space) but I might save that for another project.)
So maybe I should get my originals back into the project and just proxy them with DNxHR_lb, so I can switch to the originals for the colour correcting. If I didn't need an alpha channel for masking and the chroma key effect, I would have liked to use the mpeg.mpeg proxy; it seemed to play smoother. Can we still render small portions of a project in a float format and bring them back onto the timeline if needed?
Unfortunately, all open-source NLEs have problems with GPU acceleration. Only big companies with lots of programmers experienced in video/graphics/OpenCL/OpenGL can get good results (as is happening today with AI; I think it is the new revolution in editing). Plus, during the workflow on the timeline, CinGG has some forced single-threaded steps, so there is not much advantage in switching from a single-core CPU to a multicore CPU. I had only a small improvement, due to the higher clock frequencies, going from an Intel Sandy Bridge 2720QM (11 years old) to an AMD Ryzen 3700X (8 cores, 16 threads). The freezes I had before continue even now...
On the term "digital intermediate" there is a little confusion because of the legacy of the film days. Back then, off-line editing (which then went through an intermediate) consisted of creating a low-quality copy of the dailies, i.e. a proxy, both because the development lab took less time and because the editor could work even on low-quality copies, since he was only doing the editing. So intermediate = proxy!
Today, with the digital workflow, off-line editing is done with high-quality intermediates taken from film or poor-quality sources, which at least do not further degrade the data during processing (especially the color data...). But current NLEs, including CinGG, do not need off-line editing, so we resort to high-quality digital intermediates or proxies depending on the needs of the project. If we also do color correction and compositing/VFX, high-quality intermediates are recommended; if we have efficiency issues on the timeline, proxies are recommended. So intermediates are NOT proxies.
mpeg.mpeg has such a low bit rate that it runs fine on any hardware; that is why it is the default in the proxy option of CinGG.
Doing a (temporary) render of part of the timeline is the "background rendering" option, which creates a sequence of still images. Every variation you make or filter you add makes the rendering repeat all over again. See:
https://cinelerra-gg.org/download/CinelerraGG_Manual/Background_Rendering.html
With old computers we can use Proxy. Cinelerra uses two types of Proxy, and you can switch between them with "Rescaled to project size".
When "Rescaled to project size" is checked, the size of the video data to be computed will always be the project format.
The size of your video is scaled down by the Scale factor BUT is then rescaled up to the project format.
For example, if your project format is 1920x1080 and the Proxy Scale factor is 1/4, then your videos will be scaled down to 480x270 and then up-scaled back to 1920x1080. For an old computer, this option is not recommended.
When "Rescaled to project size" is UNchecked, the size of the video data to be computed is reduced by the Scale factor.
The size of your video is simply scaled down by the Scale factor.
For example, if your project format is 1920x1080 and the Proxy Scale factor is 1/4, then your videos will be scaled down to 480x270.
It works as if the project format were 480x270 (and it really does work that way).
The good thing is that your computer will be faster. The bad thing is that some effects (plugins) don't work as expected because they use pixel units; examples are the Title and Blur plugins. A workaround is needed for those.
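The arithmetic for the unchecked case can be sketched directly, using the 1920x1080 project and 1/4 scale factor from the example:

```shell
# Sketch of the proxy-size arithmetic: with "Rescaled to project size"
# unchecked, a 1/4 scale factor on a 1920x1080 project means Cinelerra
# effectively works at this frame size.
w=1920; h=1080; factor=4
echo "proxy frame: $((w / factor))x$((h / factor))"
# prints: proxy frame: 480x270
```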
In the list below are some possible Proxy settings with Alpha Channel, tested by me (File Format = FFMPEG):

| Type | Compression | Bitrate | Quality | Pixels |
|---|---|---|---|---|
| qt | png.qt | 0 | -1 | rgba |
| qt | png.qt | 0 | -1 | rgba64be |
| qt | magicyuv.qt | 0 | -1 | yuva444p |
| qt | openjpeg.qt | 0 | -1 | yuva420p |
| pro | prores_4444.pro | 0 | -1 | yuva444p10le |
| pro | prores_4444xq.pro | 0 | -1 | yuva444p10le |
| mkv | user_ffvhuff.mkv | 0 | -1 | yuva420p |
| mkv | user_ffvhuff.mkv | 0 | -1 | yuva422p |
| mkv | user_ffvhuff.mkv | 0 | -1 | yuva444p |
| mkv | user_ffvhuff.mkv | 0 | -1 | yuva420p9le |
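To pre-generate an alpha-capable file outside Cinelerra that matches the ProRes rows above, something along these lines should work (ffmpeg's `prores_ks` encoder with the 4444 profile keeps the alpha channel in yuva444p10le; the filenames are placeholders):

```shell
# Sketch: ProRes 4444 with alpha, matching the prores_4444 rows above.
ffmpeg -i input_with_alpha.mov \
       -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le \
       -c:a copy output.mov
```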
@igorbeg I went through a bunch of the recommended proxies with alpha and found that they all took hours to transcode and played terribly slowly, for some reason. I tried with both the X11 and X11-OpenGL drivers; not sure if I tried X11-XV. Launching cin in the terminal shows that almost everything defaults back to software rendering. I am using RGBA float, with the hardware device set to vdpau, since I am using the proprietary NVIDIA driver. Background rendering helps marginally. I keep thinking there is something I am doing wrong?
My project is only HD, and now all the original media seems to play OK in the compositor without crashing, so I decided to just use the original files as is. I had even tried to load my image sequences, select a non-alpha proxy (figuring it might be useful for some editing), then create proxies on loading my videos with one of the alpha codecs, but I just made messes in my folders, with some media still getting double-proxied, lol! @phylsmith2004, I can confirm that attempting to selectively choose what to proxy does not seem to work well at the moment...
I found out that the very latest version of Cinelerra-HV (Cinelerra 8, from source) seems to have GPU acceleration enabled. Of course many computations still require the CPU, but I was wondering if perhaps this will be included in Cinelerra-GG? I do see the option for CUDA; is that the same thing? I am making more space on my / drive to download the nvidia-cuda-toolkit. (As was mentioned, there are limits to what is accessible and doable with the community and with NVIDIA...)
I am glad there is a dedicated community!
10/23/22 - Cinelerra 8 GPU accelerated rendering. Faster GPU accelerated playback. Write output to command line ffmpeg.
....
Was last tested on Ubuntu 22 x86_64. For GPU acceleration, it needs the proprietary Nvidia X11 drivers & CUDA libraries.....