From: Good Guy Date: Tue, 23 Jun 2020 20:26:20 +0000 (-0600) Subject: Andrea corrections X-Git-Tag: 2021-05~71 X-Git-Url: https://cinelerra-gg.org/git/?a=commitdiff_plain;h=2cdad86105141c6ce1bd7b9be2330a7614257339;p=goodguy%2Fcin-manual-latex.git Andrea corrections --- diff --git a/parts/Editing.tex b/parts/Editing.tex index 250c71b..9679e53 100644 --- a/parts/Editing.tex +++ b/parts/Editing.tex @@ -1219,7 +1219,7 @@ render it, it will now automatically have the updated information. The purpose of this feature is to be able to rework a smaller section of a global master project at any time, which can be done by -an "assistant" and then this work is automatically reflected in the +an "assistant" (i.e. external software like GIMP or Inkscape, $\dots$) and then this work is automatically reflected in the global master project. It is for \textbf{advanced usage only}. Up until the addition of this feature, \CGG{} has always used copies diff --git a/parts/Plugins.tex b/parts/Plugins.tex index 06f057e..66d97d2 100644 --- a/parts/Plugins.tex +++ b/parts/Plugins.tex @@ -1792,7 +1792,7 @@ Uses a Bayer filter algorithm to interpolate (estimate) missing color informatio \subsubsection*{Theory} \label{ssub:theory} -Each video has its own framerate. If we want to change it (for \textit{timelapse} or \textit{slowmotion}) the best thing is to shoot the scene with suitable framerate. But even in post production we can do something. The simplest method is to remove some frames to speed up the movie or add some to slow it down (from now on, for simplicity we will consider only the timelapse). Needless to say, the result is not smooth and the viewer will notice it immediately. A better method is to use the interpolation, mediating the pairs of frames that alternate. For example, if we have a sequence of frames $1, 2, 3, 4, 5, 6, 7, 8\dots$ we can make a timelapse mixing frames $1$ and $2$, $3$ and $4$, $5$ and $6$, $7$ and $8$ and so on. So we will have a new sequence of $4$ frames instead of the initial $8$: $\underline{12, 34, 56, 78}\dots$ We will get $50\%$ acceleration but it will always be of bad quality because of the too rough blending between the pairs of frames. Blending can be improved by weighing it differently by $50\% frame 1 + 50\% frame 2$, but the result is still unsatisfactory. Further improvements can be achieved by using $logarithmic$ or $exponential$ interpolation instead of $linear$ interpolation. But the most sophisticated methods that lead to better results are based on \textit{optical flow analysis}. These analyses the movement of circumscribed areas over a given period of time. With this method the intermediate frames do not derive from an approximate blending, but from the calculation of the \textit{vector} of the motion between two frames that determines the displacement (\textit{warping}) of the moving figure in the new intermediate frame. \textit{Interpolate Video} works this way. +Each video has its own framerate. If we want to change it (for \textit{timelapse} or \textit{slow motion}) the best thing is to shoot the scene with a suitable framerate. But even in post production we can do something. The simplest method is to remove some frames to speed up the movie or add some to slow it down (from now on, for simplicity we will consider only the timelapse). Needless to say, the result is not smooth and the viewer will notice it immediately. A better method is to use interpolation, blending alternating pairs of frames.
For example, if we have a sequence of frames $1, 2, 3, 4, 5, 6, 7, 8\dots$ we can make a timelapse mixing frames $1$ and $2$, $3$ and $4$, $5$ and $6$, $7$ and $8$ and so on. So we will have a new sequence of $4$ frames instead of the initial $8$: $\underline{12, 34, 56, 78}\dots$ The movie will be shortened by $50\%$, but the quality will always be poor because the blending between the pairs of frames is too rough. Blending can be improved by weighting the two frames differently rather than using a plain $50\%\, frame\, 1 + 50\%\, frame\, 2$ average, but the result is still unsatisfactory. Further improvements can be achieved by using \textit{logarithmic} or \textit{exponential} interpolation instead of \textit{linear} interpolation. But the most sophisticated methods that lead to better results are based on \textit{optical flow analysis}. These analyze the movement of circumscribed areas over a given period of time. With this method the intermediate frames do not derive from an approximate blending, but from the calculation of the \textit{vector} of the motion between two frames that determines the displacement (\textit{warping}) of the moving figure in the new intermediate frame. \textit{Interpolate Video} works this way. \subsubsection*{Practice} \label{ssub:practice} @@ -2207,6 +2207,13 @@ Most cameras take the light coming into the lens, and convert that into $3$ sets Radial blur is a \textit{Bokeh} effect that creates a whirlpool which simulates a swirling camera. You can vary the location, type, and quality of the blur. +\begin{figure}[hbtp] + \centering + \includegraphics[width=0.8\linewidth]{radial.png} + \caption{For clarity of presentation only 2 fields are shown} + \label{fig:radial} +\end{figure} + \begin{description} \item[X,Y] center of the circle of movement. \item[Angle] angle of motion in one direction. @@ -2216,12 +2223,6 @@ Radial blur is a \textit{Bokeh} effect that creates a whirlpool which simulates Figure~\ref{fig:radial} has the parameters: $Angle=-35$ and $Steps=2$. -\begin{figure}[hbtp] - \centering - \includegraphics[width=0.8\linewidth]{radial.png} - \caption{For clarity of presentation only 2 fields are shown} - \label{fig:radial} -\end{figure} \subsection{ReframeRT}% \label{sub:reframert} @@ -2245,11 +2246,11 @@ to interpolation. Stretch mode multiplies the current frame number of its output by the \textit{scale factor} to arrive at the frame to read from its input. The scale factor is not entered directly but as the number of \textit{input} frames divided by the number of \textit{output} frames. -\vspace{1ex} \texttt{Scale factor = Input frames / Output frames} +\vspace{1ex} \texttt{Scale factor (SF) = Input frames / Output frames} -\[\frac{1}{8} \Rightarrow scale\, factor = 0.125 \qquad (slowmotion)\] +\[\frac{1}{8} \Rightarrow scale\, factor = 0.125 \qquad (slow\, motion)\] -That is, one input frame of the original movie corresponds to $8$ new output frames originated by interpolation. It is the opposite with regard to \textit{fast play}. +For slow motion we leave $1$ as the number of input frames and increase the number of output frames (for example, entering $8$ for the output gives $8\times$ slow motion, with $SF=\frac{1}{8}=0.125$). For fast motion we leave $1$ for the output and increase the number of input frames (for example, entering $8$ for the input gives $8\times$ fast motion, with $SF=\frac{8}{1}=8$). Another possibility is to put the frame rate of the media (e.g. 120 fps) in the input and the project frame rate (e.g. 30 fps) in the output, or the opposite.
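+
+To make the mapping concrete, here is a short worked sketch derived from the rule above that output frame $n$ reads input frame $n \times SF$ (exactly how the fractional frame index is rounded is an implementation detail of the plugin and is glossed over here):
+
+\[SF = \frac{1}{8} = 0.125 \Rightarrow output\, frame\, 8 \rightarrow input\, frame\, 1 \qquad (8\times\, slow\, motion)\]
+
+\[SF = \frac{8}{1} = 8 \Rightarrow output\, frame\, 1 \rightarrow input\, frame\, 8 \qquad (8\times\, fast\, motion)\]
+
+In the first case each source frame is held for $8$ output frames and the clip becomes $8$ times longer; in the second, $7$ source frames are skipped at every step and the clip becomes $8$ times shorter.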
The stretch mode has the effect of changing the length of output video by the inverse of the scale factor. If the scale factor is greater than $1$, the output will end before the end of the sequence on the timeline. If it is less than $1$, the output will end after the end of the sequence on the timeline. The ReframeRT effect must be lengthened as needed to accommodate the scale factor. Change the length of the effect by clicking on the endpoint of the effect and dragging. @@ -2262,6 +2263,8 @@ in stretch mode with a value less than $1$. \textit{Example:} you have a clip that you want to put in slow motion. The clip starts at $33.792\, seconds$ and ends at $39.765$. The clip is $5.973\, seconds$ long. You want to play it at $\frac{4}{10}^{ths}$ normal speed. You divide the clip length by the playback speed ($5.973\div0.4$) to get a final clip length of $14.9325\,seconds$. You create an in point at the start of your clip: $33.792\,seconds$. You put an out point $14.9325\,seconds$ later, at $48.7245\,seconds$ ($33.792 + 14.9325$). You attach a \texttt{ReframeRT} effect, set it to $0.4$ and stretch. You change the out point at $48.7245$ to an in point. You start your next clip after the slow motion effect at the $48.7245$ out point. You can do this without making any calculations by first applying the effect and then lengthening or shortening the bar to where the stretched movie ends. +Now the timeline contains the section affected by the plugin, where we see the slow/fast effect, followed by the rest of the timeline continuing from where the plugin ends. We then have to select the interval on which the plugin acts and render it, or turn it into a nested clip or nested asset. In this way we can use the result to replace or overwrite the part of the timeline that originally contained the whole section we wanted to slow down or speed up. + \subsubsection*{Downsample}% \label{ssub:downsample} @@ -2273,10 +2276,11 @@ Downsample mode changes the frame rate of the input as well as the number of the \label{ssub:other_important_points} \begin{itemize} - \item ReframeRT uses the fps indicated in \texttt{Settings$\rightarrow$ Format$\rightarrow$ fps} project and not the \texttt{fps} of the assets. + \item ReframeRT uses the fps indicated in the project's \texttt{Settings $\rightarrow$ Format $\rightarrow$ fps} and not the \texttt{fps} of the assets. \item It can be associated with Nested Clips. - \item As an alternative to ReframeRT you can use the \textit{speed curve}, or change the framerate in \texttt{Resources$\rightarrow$ info} and in the \texttt{Project}. + \item As an alternative to ReframeRT you can use the \textit{speed curve}, or change the framerate in \texttt{Resources $\rightarrow$ info} and in the \texttt{Project}. \item It is keyframable. + \item To act on audio tracks, use ResampleRT with the same settings. \end{itemize} \subsection{Reroute}% diff --git a/parts/Tips.tex b/parts/Tips.tex index 57cec8f..c78244d 100644 --- a/parts/Tips.tex +++ b/parts/Tips.tex @@ -31,7 +31,7 @@ VDPAU, Video Decode and Presentation API for Unix, is an open source library to VA-API, Video Acceleration API, is an open source library which provides both hardware accelerated video encoding and decoding for use mostly with Intel (and AMD) graphics boards. -Currently only the most common codecs, such as MPEG-1, MPEG-2, MPEG-4, and H.264 /MPEG-4, are accelerated/optimized by the graphics card to play these particular video formats efficiently. 
The other formats are not optimized so you will see no performance improvement since the CPU will handle them as before, just as if no hardware acceleration was activated. There are many different graphics cards and computer systems setup, so you will have to test which specific settings work best for you. So far this has been tested at least with Nvidia, Radeon, and Broadwell graphics boards on some AMD and Intel computers; depending on the graphics card, two to ten times higher processing speeds can be achieved. However, most graphic operations are single-threaded so that +Currently only the most common codecs, such as MPEG-1, MPEG-2, MPEG-4, H.264/MPEG-4 and H.265 (HEVC), are accelerated/optimized by the graphics card to play these particular video formats efficiently. The other formats are not optimized, so you will see no performance improvement since the CPU will handle them as before, just as if no hardware acceleration were activated. There are many different graphics card and computer system setups, so you will have to test which specific settings work best for you. So far this has been tested at least with Nvidia, Radeon, and Broadwell graphics boards on some AMD and Intel computers; depending on the graphics card, two to ten times higher processing speeds can be achieved. However, most graphic operations are single-threaded so that performing the operations in the hardware may actually be slower than in software making use of multiple CPUs, which frequently multi-thread many operations simultaneously. \subsection{GPU hardware decoding}% diff --git a/parts/Windows.tex b/parts/Windows.tex index 6e882e5..ff82776 100644 --- a/parts/Windows.tex +++ b/parts/Windows.tex @@ -769,6 +769,7 @@ output size is the final video track size where the temporary pipeline is render The aspect ratio is the ratio of the sides of the frame (\textit{Width} and \textit{Height}). For example, classically broadcast TV was 4:3 (= 1.33), whereas today it has changed to 16:9 (= 1.78); in cinema we use the 35 mm aspect ratio of 1.37 (Academy aperture), but even more so the Super 35 mm (2.35). There are also anamorphic formats, i.e. formats that do not have square pixels, like CinemaScope (2.35). The projection must be \textit{normalized} to have an undistorted view. + From the film or digital sensors of the cameras, we can extract any frame size we want. We are talking about \textit{viewports}, which we will examine shortly. Also important is the output of the film that will be rendered, because it is what we will see at the cinema, on TV, or on the monitor of a PC, tablet or smartphone. Referring to figure~\ref{fig:temporary-01}, you can see these two possibilities: with the Camera you choose the size and aspect ratio of the source file (regardless of the original size); while with the Projector you choose the size and aspect ratio of the output. The following formula is used to vary the aspect ratio: @@ -783,7 +784,18 @@ from which: $H = 817$ pixels \CGG{} allows you to vary the input and output aspect ratio in the ways indicated in the previous section: by varying the pixels of the sides or by setting a multiplication coefficient. -In \texttt{Settings $\rightarrow$ Format} there is the additional possibility to vary the shape of the pixels from 1:1 (square) to handle anamorphic formats. +In \texttt{Settings $\rightarrow$ Format} there is the additional possibility to vary the shape of the pixels from 1:1 (square) to handle anamorphic formats. 
In such cases we use: + +\qquad $PAR=\frac{DAR}{SAR}$ + +where: + +\textit{DAR} = Display Aspect Ratio + +\textit{PAR} = Pixel Aspect Ratio ($1$ or $1:1$ means square pixels) + +\textit{SAR} = Storage Aspect Ratio (i.e. the aspect ratio of the media file as stored) + +For example, a file stored as $1440\times1080$ ($SAR = \frac{1440}{1080} = 1.33$) that must be displayed at $16:9$ ($DAR = 1.78$) needs $PAR = \frac{1.78}{1.33} = 1.33$, i.e. anamorphic (non-square) pixels. + \subsection{Camera and Projector}% \label{sub:camera_and_projector}