
AppImage excessive RAM usage

DeJay (Topic starter)

I have 92 x 2160p clips at 400Mbps, which have also been transcoded to DNxHR_HQX at 728Mbps in 3 batches. The transcodes need to be re-imported in one batch for editing. Importing them into the AppImage uses 55.71GB of RAM, and it fills the RAM (64GB) when trying to generate proxies, freezing the system and necessitating manually switching off and rebooting. Importing the same clips into the last Arch single-user release (20201031) uses 28.03GB, which rises to 32.54GB towards the end of generating proxies. Importing the raw (not RAW!) camera files into the AppImage takes longer to freeze, but it still happens.

The insertion strategy is "Replace current project and concatenate tracks".

There is no point in importing the clips into the AppImage in batches for editing because, obviously, they will eventually need to be imported all at once for final assembly and rendering. Also, shouldn't the AppImage handle them the same as the single-user version?

I have been happily using the AppImage to edit smaller sections of the project 'til now.

MatN 09/03/2021 9:46 am

1. I don't have that DNxHR_HQX file format. Could you tell me how to transcode into that format? Maybe the large memory increase is because of this specific format.

2. Do you have a swapfile enabled? I find it strange that the system freezes; I only had that when my disk was completely full. If you have a swapfile or partition and it is not full, does it grow when you load this large video in the AppImage? (See the logging sketch below.)
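If the freeze forces a power-off, anything on screen is lost, so it helps to log to disk. A minimal sketch, assuming the Python psutil package is installed; it appends system RAM and swap usage to a file once a second, so the last lines survive a hard freeze:

import time
import psutil

# Minimal sketch (assumes psutil): append RAM and swap usage to a file
# once a second; the last lines survive a hard freeze. Stop with Ctrl-C.
with open("mem_log.txt", "a", buffering=1) as log:   # line-buffered
    while True:
        ram = psutil.virtual_memory()
        swap = psutil.swap_memory()
        log.write(f"{time.strftime('%H:%M:%S')}  "
                  f"RAM {ram.used / 2**30:.2f} GiB ({ram.percent}%)  "
                  f"swap {swap.used / 2**30:.2f} GiB ({swap.percent}%)\n")
        time.sleep(1)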

DeJay Topic starter 09/03/2021 1:56 pm

1. Yes: in the render box set File Format to FFMPEG, select mov, click the video spanner icon, and in the Compression window open the drop-down list and select dnxhr_hq.mov. It can also be found deep in the qt menu. I doubt very much it is the format, because a) it happens whether I import DNxHR_HQX, DNxHR444 (which I tried first), or raw camera files, which are 10-bit H.265 .mov at 400Mbps, and b) it doesn't happen if I import any of those into 20201031.

2. Normally no, but I temporarily set up a 64GB swap partition and tried again. It filled the RAM and the swap, froze, and crashed Cin. That's (nominally) 128GB of memory!

MatN 10/03/2021 9:56 am

Thanks for the DNxHR_HQX tip, I'll try that next, on Manjaro.
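For the record, the same transcode can also be scripted with the ffmpeg CLI directly. A minimal sketch from Python; the filenames are placeholders, and the pixel formats are my assumption of what the dnxhr_* profiles expect:

import subprocess

# Minimal sketch: transcode one clip to DNxHR with ffmpeg's dnxhd encoder.
# Filenames are placeholders; pick the pix_fmt to match the profile
# (dnxhr_hq -> yuv422p, dnxhr_hqx -> yuv422p10le, dnxhr_444 -> yuv444p10le).
def transcode_dnxhr(src, dst, profile="dnxhr_hqx", pix_fmt="yuv422p10le"):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", profile,
        "-pix_fmt", pix_fmt,
        "-c:a", "pcm_s16le",   # uncompressed audio, common in intermediates
        dst,
    ], check=True)

transcode_dnxhr("clip001.mov", "clip001_hqx.mov")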

I found there are considerable differences in memory usage if the hwaccel settings match the movie (see my other post). What are your hwaccel settings on the AppImage, and if you start it from a terminal, do you see any errors there while importing?

phylsmith2004

@DeJay

Good to know that up until now the AppImage has been working for you. I will run a test here to compare memory usage, which I would think should be comparable to that of the tar file, and see what my computer shows. Possibilities include:

1) A non-authoritative person on the internet claims that AppImage executables are larger and use much more memory, but I do not see how that can be true in this case.

2) The computer where the AppImage for newer distros was created has 32GB of RAM, so I wonder if that is relevant. Do you have more or less than that? Again, it should not be relevant.

3) Other possibilities? I will see what I can find out.  Maybe MatN has more ideas.

 

DeJay (Topic starter)

Thanks for the rapid reply, Phyllis.

1. I find the AppImage uses 758MB of RAM when idle, the single-user version 803MB... interesting?

2. If you mean 32GB of RAM, then yes, I have more: 64GB.

phylsmith2004

My simple tests show no difference in memory usage.

MatN said he might have time to look at this tomorrow.

 

@andreapaz

Since you are using Arch also and do your own builds too, could you watch the output of "top" from a terminal window while playing a video - once on a built system and once using AppImage?  I am not sure what is going on.
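If watching top by hand is awkward, here is a minimal alternative sketch, assuming the Python psutil package; it samples the resident memory of every process named cin (a build) or AppRun (the AppImage) every two seconds:

import time
import psutil

# Minimal sketch (assumes psutil): print the resident set size of every
# matching process, as a scriptable stand-in for "top". Ctrl-C stops it.
NAMES = {"cin", "AppRun"}   # "cin" = built version, "AppRun" = AppImage

while True:
    for p in psutil.process_iter(["name", "memory_info"]):
        if p.info["name"] in NAMES:
            rss = p.info["memory_info"].rss / 2**20
            print(f"{p.pid} {p.info['name']}: {rss:.1f} MiB RSS")
    time.sleep(2)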

DeJay Topic starter 08/03/2021 7:52 am

I'm using Manjaro, which is Arch-based, if that makes any difference.

Also, if I change the insertion strategy to "Create new resources only", the memory use is almost identical.

andreapaz 08/03/2021 1:01 pm

Importing 4 ProRes files at 1440p, 6.5GB in total (with the same insertion strategy as yours), I also notice a slight difference in RAM consumption.
Using top:

                 No CinGG     CinGG   Import files   Proxies (1/2)
RAM (AppImage)   1.0% (tot)   1.3%    3.2% (4.6%)    5.3%
CPU (AppImage)   0.3% (tot)   0.9%    550%           1500%
RAM (Build)      1.0% (tot)   0.4%    2.3%           3.8%
CPU (Build)      0.3% (tot)   11%     50%            1500%

 

I use Arch Linux; Ryzen 3700 (8/16) with 32GB of RAM; X11-OpenGL driver. The smaller values may be because the 4 ProRes files are less extreme than yours. Also, the non-standard resolution of your media may affect how CinGG handles it.

 

One thing I can't judge is that I have to restart CinGG for every test, because otherwise RAM consumption always increases as you use it and never goes back down, even when idle (the test that gave 3.2% became 4.6% on a second consecutive run).

 

PS: What do you set "cache size" to in CinGG? I'm running various tests but can't figure out which value is best (I'm trying 4096 now, but I don't notice any improvement)!

DeJay (Topic starter)

Hi Andrea, it's good to know you find the AppImage uses more RAM; the difference is not so far from my result: in your case 72%, in mine 84%.

I'm not sure why you say my resolution is non-standard; I am importing 92 3840x2160 clips. Maybe it's the way I wrote it? Sorry for any confusion!

Interesting observation about Cin not releasing RAM. I have noticed this effect before when in the middle of a project, but at that point I'm not prepared to try to find out why; I just want to get the project finished, and then I usually forget about it. I had not realised it is a regular thing. Perhaps it needs fixing?

I just leave cache size at default (256).

I was using the 20210228 AppImage. I have downloaded the two newer-distro ones and found the unnumbered Newer Distros one does the same, but the 20201031 AppImage acts like the single-user release I tested against it.

MatN

I have done some simple testing with various CinGG releases. Determining actual memory usage under Linux is difficult; I settled on ps_mem.py. Following are the test results on Mint 19.3 XFCE on an AMD Ryzen 5 2400G with 32GB RAM.

I tested just after the program loaded, and then with a 300MB 4K file loaded that plays fine, "LG Dolby Trailer 4K Demo.ts" (I might have gotten that from YouTube).

cinelerra-5.1-mint19-20201031.x86_64-static.txz
Private + Shared = RAM used Program
84.5 MiB + 7.8 MiB = 92.3 MiB cin (just loaded)
2.1 GiB + 7.9 MiB = 2.1 GiB cin (movie loaded)

cin built from git 20210111
Private + Shared = RAM used Program
86.2 MiB + 6.5 MiB = 92.7 MiB cin (just loaded)
2.9 GiB + 8.0 MiB = 2.9 GiB cin (movie loaded)

CinGG-20210228-x86_64-older_distros.AppImage
Private + Shared = RAM used Program
82.4 MiB + 7.7 MiB = 90.1 MiB AppRun (just loaded)
3.0 GiB + 7.7 MiB = 3.0 GiB AppRun (movie loaded)

A build from the current git on Mint 20.1 gave ffmpeg problems when launched from a terminal, so I could not test that yet.

Anyway, not much difference between the AppImage and the "native" version. But there is a strange increase in memory usage after 20201131.
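For anyone wanting to reproduce the "private + shared" numbers without ps_mem.py, here is a minimal sketch that reads /proc/<pid>/smaps_rollup (Linux 4.14 and later). It is my own rough approximation, not ps_mem.py's exact accounting (which uses Pss for shared pages):

import sys

# Minimal sketch: approximate a process's private/shared memory split
# from /proc/<pid>/smaps_rollup. Values in the file are in kB.
def mem_split(pid):
    fields = {}
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[1].isdigit():
                fields[parts[0].rstrip(":")] = int(parts[1])
    private = fields.get("Private_Clean", 0) + fields.get("Private_Dirty", 0)
    shared = fields.get("Shared_Clean", 0) + fields.get("Shared_Dirty", 0)
    return private, shared

priv, shr = mem_split(int(sys.argv[1]))   # pass the cin/AppRun PID
print(f"{priv / 1024:.1f} MiB private + {shr / 1024:.1f} MiB shared "
      f"= {(priv + shr) / 1024:.1f} MiB")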

MatN 08/03/2021 9:12 pm

A native static build from the current git 20210308 on Mint 19.3 gave no errors, and pretty much the same memory usage as the 2020-10 git:

2.8 GiB + 8.0 MiB = 2.8 GiB cin  with the movie loaded.

MatN 10/03/2021 9:44 am

I did some more structured testing, because I wondered if settings might have an effect. And indeed they do. Same testing environment as before: Mint XFCE 19.3, AMD Ryzen 5 2400G, 32GB RAM, movie "LG Dolby Trailer 4K Demo" 300MB, memory size "total" as reported by ps_mem.py.

 

Build system   Source/git date   Exe form   Disk MB   Mem GB         Mem GB        Mem GB
                                                      just started   after play,   after play,
                                                                     no hwaccel    vaapi

CinGG distro   2020-10 static    cin        227       0.091          2.0           1.1
Mint           20210308          cin        188       0.091          2.9           1.3
Mint           20210308          AppImage    82       0.092          3.0           1.3
CinGG older    2021-02           AppImage    80       0.090          2.9           n.a.

 

On the last entry: the "older" AppImage has an incompatible vaapi lib, so hwaccel gives errors in the starting terminal when enabled. The normal AppImage does not run on Mint 19.2, but runs on Mint 20.1. If I actually build the latest source on Mint, it works OK.

 

Basically, there is not much difference between the versions in memory use. The only thing really making a difference is enabling the vaapi hwaccel. I have no Nvidia, so I did not test other hwaccels.

phylsmith2004

Posted by: @dejay

Interesting observation about Cin not releasing RAM. I have noticed this effect before when in the middle of a project, but at that point I'm not prepared to try to find out why; I just want to get the project finished, and then I usually forget about it. I had not realised it is a regular thing. Perhaps it needs fixing?

Cinelerra is supposed to accumulate RAM as it runs in order to run faster, keeping things in memory for potential future use instead of having to pull them up again from disk. It releases memory upon exiting routines, and hopefully the operating system handles some of this. However, third-party libraries used by Cinelerra may not be as diligent in releasing RAM.

Running Valgrind, which Andrea has done in the past and sent to GG, who was able to track down memory leaks, shows if memory is getting lost. For example, at the end there is the following from an old log of Andrea's, dated Oct. 28, 2018:

==2100== LEAK SUMMARY:
==2100== definitely lost: 8,184 bytes in 46 blocks
==2100== indirectly lost: 626,096 bytes in 7 blocks
==2100== possibly lost: 0 bytes in 0 blocks
==2100== still reachable: 323,861 bytes in 5,044 blocks
==2100== suppressed: 0 bytes in 0 blocks

In the above case, when GG analyzed where the leaks came from, they turned out to be from somewhere other than Cinelerra, so there was no way to fix them. But now, changes made since Oct. 31, 2020 could have introduced new leaks in Cinelerra itself. Unfortunately, I do not know how to fix the code to resolve any of these, but it would be worth looking to see if there is any large new leak (maybe Andrea will have time to run Valgrind on a current build).
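To compare leakage between builds at a glance, the LEAK SUMMARY numbers can be pulled out of a Valgrind log with a small script. A minimal sketch; the log filename is passed as an argument:

import re
import sys

# Minimal sketch: extract the LEAK SUMMARY lines from a valgrind log so
# two runs (e.g. 2020-10-31 vs. a current build) can be compared quickly.
pattern = re.compile(
    r"(definitely lost|indirectly lost|possibly lost|still reachable):"
    r"\s+([\d,]+) bytes in ([\d,]+) blocks")

with open(sys.argv[1]) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            kind, nbytes, nblocks = m.groups()
            print(f"{kind:>16}: {nbytes:>12} bytes in {nblocks} blocks")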

phylsmith2004

After reviewing all of the feedback, it looks like the extra memory usage is not from the tar/build versus AppImage difference, but rather from changes between Oct. 31, 2020 and Feb. 28, 2021 (or, in the case MatN tested, after 20201131).

So based on the feedback, I tested again to confirm and can definitively report that the new version uses more memory on the same playback video. I will need to narrow down which modification has increased the need for more memory. Also, if Andrea has time to run Valgrind and send me the log, I can at least look at the statistics on the leakage to see if it looks like there is a problem.

andreapaz 09/03/2021 2:43 pm

I did a Valgrind run while playing with titles and Dissolve transitions. There are errors, but I can't interpret them.

phylsmith2004 10/03/2021 3:19 am

@andreapaz

Thank you for running Valgrind. I never have the patience to do so. There are a lot of lost bytes, and this is possibly the cause of the extra memory being used now. I will have to see if I can work out which mod may have caused the problem, but it will be a difficult learning curve to get there. You can see millions of bytes lost, and that is bad:

==215823== LEAK SUMMARY:
==215823== definitely lost: 169,064 bytes in 1,432 blocks
==215823== indirectly lost: 5,223,805 bytes in 3,188 blocks
==215823== possibly lost: 7,988,859 bytes in 23,395 blocks
==215823== still reachable: 973,283 bytes in 13,564 blocks

DeJay (Topic starter)
Posted by: @matn

What are your hwaccel settings on the AppImage, and if you start it from a terminal, do you see any errors there while importing?

I have no idea what hwaccel settings are, or how to find them. I never use the terminal unless I absolutely have to, and I have no desire to learn how. Yes, there is at least one Linux user on Earth who just wants to use it! 🙂

MatN 11/03/2021 8:46 am

I understand your reluctance to use the terminal. It would be nice if CinGG had some built-in way to display system hardware relevant to video editing. How do you determine memory usage?

Regarding hwaccel, I was referring to "Use HW device" in the Settings->Preferences->Performance tab. That makes a big difference in memory usage in my testing, at least for an H.265-coded movie where a hardware decoder is supported via vaapi.
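A quick way to check whether vaapi decoding of a clip works at all outside CinGG is to decode it to ffmpeg's null muxer, once in software and once with the hwaccel. A minimal sketch with a placeholder filename:

import subprocess

# Minimal sketch: try decoding a clip (placeholder filename) to ffmpeg's
# null muxer, once in software and once with a hwaccel; report success.
def try_decode(src, hwaccel=None):
    cmd = ["ffmpeg", "-v", "error"]
    if hwaccel:
        cmd += ["-hwaccel", hwaccel]        # e.g. "vaapi" or "vdpau"
    cmd += ["-i", src, "-f", "null", "-"]   # decode only, discard frames
    return subprocess.run(cmd).returncode == 0

print("software:", try_decode("clip001.mov"))
print("vaapi:   ", try_decode("clip001.mov", "vaapi"))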

Also, what are your Settings->Format? DNxHR_HQX is a 12-bit format, right? So you probably use RGBA float?
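And to see what a rendered file actually contains (codec, profile, pixel format), ffprobe reports it directly. Again a minimal sketch with a placeholder filename:

import subprocess

# Minimal sketch: ask ffprobe what a rendered file (placeholder filename)
# actually contains: codec, profile, pixel format, bit rate.
out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,profile,pix_fmt,bit_rate",
     "-of", "default=noprint_wrappers=1", "clip001_hqx.mov"],
    capture_output=True, text=True, check=True)
print(out.stdout)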

DeJay (Topic starter)

Mat, sorry for the delay, but I needed time to do a few more tests. I'm going to try to put all the information in this post.

DNxHR_HQX is available in the 20201030 release. In the render window, File Format FFMPEG; in the small box select qt from the dropdown list. Click the video spanner icon and, in the Compression window dropdown list, select dnxhd qt; click Video Options View, highlight profile, click the Profile button and select dnxhr_hqx.

The Use HW Device window is blank, I have never needed to change it. I set it to vdpau to test, but it made no difference. It is the same in both versions.

I have rendered DNxHR_HQX with both versions; I have been doing so for a while in 20201031.

I am importing 25fps 4:2:0 full-range 10-bit F-log, all intra-frame, HEVC at 400Mbps, which I then transcode to DNxHR_HQX at 720Mbps. I tried transcoding smaller sections of the movie to DNxHR444 at around 1700Mbps, which worked well, but when I tried importing the 92 transcoded clips the AppImage stopped well short and froze. I have a widget on the KDE panel that shows RAM usage, and it was showing 55.71GB. I then re-transcoded to DNxHR_HQX and tried again; it loaded them all, but still filled the RAM and froze the system when generating proxies; the only escape is to press the power button and switch off. I temporarily set up a swap partition. It loaded the transcodes, but froze when generating proxies because it filled that as well: nominally 128GB of memory!

Both times I transcoded in 3 sections, but it is necessary for all the clips to be on the timeline for editing. When I import the 92 transcodes into single-user 20201030, the widget indicates 28.03GB and there is no problem generating proxies.

DNxHR_HQX is 10-bit and yes, I am using RGBA-Float. I have tried various loading strategies, but all show roughly the same RAM usage.

Edit: I meant to add that the 20201031 AppImage does not have the problem, but the unnumbered For Newer Distros one and the 20210228 both do.

phylsmith2004

In reconstructing GIT versions, I have found that around 12/01, after the interlace and aspect modifications were added, memory usage increased by 15%. Sometimes when new constructs are created, they also need to be de-constructed so that the memory is returned. Not sure if this is the case here, but if it is, then maybe it can be changed to give up that increased memory (I will ask Andrew when he gets back).

 

But this only accounts for part of the increased memory usage -- there is another 10% increase by the time we get to February 28, so I will continue to track that down and see if it can be changed.

DeJay Topic starter 19/03/2021 8:29 am

As a final comment, for now at least: I am now using the 20201031 AppImage and it shows no sign of the problem. If I import all 92 DNxHR_HQX transcoded files it shows 28.03GB RAM used, which rises to 32.54GB while generating proxies.

MatN

@DeJay, I have not forgotten but was fully occupied. I should have more time after Tuesday. Testing so far on Mint 20.1, however, between the 2020-10 static version and the 2021-02 AppImage, I see about 20% more memory usage, which is far less than what you are seeing.

DeJay Topic starter 21/03/2021 8:37 pm

"@DeJay, I have not forgotten but was fully occupied."

 

I wasn't suggesting you had; I am just clarifying how things currently stand, especially as in hindsight the thread title seems inaccurate: it does not appear to be the AppImage that makes the difference.

 

Before I retired, in a different field, I was the fixer, and I know how important every little piece of information can be, as long as it is reasonably accurate.

phylsmith2004

Although I have also been looking at this for the last 3 days, I still have not come any closer to an answer. The strange thing is that as I compile from GIT, the memory usage goes up even on 2020-11-26, when the only changes were the addition of more format files for rendering choices and additional aspect ratio choices. Neither of these should affect memory usage.

phylsmith2004

@DeJay

@MatN

I think I found the problem and so now there is a test image at:

https://cinelerra-gg.org/download/images/cin-x86_64-TestOnly.AppImage

which contains all of the current mods except what seems to be the line causing the increased memory usage. The commented-out line is in fileffmpeg.C and is: ff->video_probe(1); its usage appears to be only for interlace reporting, so commenting it out should not affect normal use.

 

When/if you have time, can you verify that this solves the problem for the DNx files? If it does, then we can go from there.

DeJay (Topic starter)

Hi Phyllis, yes, you have found the problem. The test image loaded all 92 transcoded clips (DNxHR_HQX 2160p @ 728Mbps), showing a RAM usage of 27.78GB, nearly the same as 20201031. When generating proxies it went up to 32.34GB. Again, nearly the same.

Thanks.
