In contemporary life, we are always chasing bigger screens, better image quality, and higher refresh rates. TV screens have transitioned from 4K at 60 Hz to 8K at 120 Hz, and phones have moved from 90 Hz to 120 Hz. The most demanding of all are still us gaming PC players: 144 Hz is now merely the entry point for gaming monitors, and many of us overclock our panels beyond that.
In contrast to the screens, the frame rate of film and television content has stagnated: online video has made the jump from 30 fps to 60 fps, but TV series and movies remain stuck at 24 fps.
In this situation, we can take matters into our own hands and interpolate the extra frames ourselves.
❶
What is frame interpolation?
In fact, whether it is frame interpolation on the PC, optical-flow rendering in Premiere Pro, or MEMC motion compensation on a TV, the principle is the same: calculate the motion trajectories of the pixels in the picture, then rely on interpolation to synthesize new frames.
In a film, an object's motion is relatively continuous, so an algorithm can fairly easily estimate its trajectory between two frames and insert a simulated intermediate frame between them, achieving interpolation. By this means it is possible to raise the frame rate from 30 fps to 60 fps or higher, and also to reduce motion smear, so the video looks cleaner and clearer.
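To see why the "synthesize a frame in the middle" idea needs real motion estimation, here is a minimal sketch (not any specific product's algorithm) of the crudest possible interpolation: a 50/50 blend of two neighboring frames. With no motion trajectory, a moving object just gets ghosted in place instead of shifted halfway along its path:

```python
import numpy as np

def blend_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Synthesize an intermediate frame by averaging two neighbors.

    No motion estimation at all: moving objects appear as two faint
    ghosts rather than a single object halfway along its trajectory.
    """
    mixed = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return mixed.astype(np.uint8)

# Two toy 1x4 grayscale "frames": a bright pixel moving to the right.
f0 = np.array([[255, 0, 0, 0]], dtype=np.uint8)
f1 = np.array([[0, 0, 255, 0]], dtype=np.uint8)

mid = blend_midframe(f0, f1)
print(mid)  # [[127   0 127   0]] -- ghosting, not true motion
```

A motion-compensated interpolator would instead place a single bright pixel at the halfway position; that is exactly the extra work the trajectory calculation described above performs.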
However, such simple interpolation has many problems: it increases the load on the graphics card, bringing heat and power consumption; the picture may tear; object edges may show discontinuities, because the pixel motion is synthesized from nothing (the so-called blocking artifacts); and the trajectory of a moving object that was previously occluded cannot be handled well.
Therefore, the most advanced interpolation schemes at this stage compute the new frames with AI. Note that "AI" here is not a marketing gimmick: the frames really are computed by a convolutional neural network.
First, the AI is fed a large amount of existing image data, and through machine learning it trains things called "convolution kernels". A convolution kernel is a small grid of weights: the transformation from the input picture to the output picture is determined by taking a weighted average over each pixel's neighborhood with those weights.
The convolution kernel is a bit like a "super filter": video processed through it can yield clearer, smarter interpolated frames.
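The "weighted average over a neighborhood" idea can be made concrete with a small sketch. This is a generic hand-rolled 2D convolution for illustration, not the learned kernels of any real network; the box-blur kernel here is just the simplest example of a "filter":

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a kernel over the image; each output pixel is the weighted
    average of its neighborhood (zero-padded at the borders)."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(image, dtype=np.float32)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(region * kernel)
    return out

# A 3x3 box-blur kernel: every output pixel is the mean of its 3x3 patch.
blur = np.full((3, 3), 1.0 / 9.0)

img = np.zeros((5, 5), dtype=np.float32)
img[2, 2] = 9.0  # a single bright pixel

result = convolve2d(img, blur)
print(result)  # the bright pixel is spread evenly over a 3x3 patch
```

A trained network learns thousands of such kernels automatically, with weights chosen to produce a plausible in-between frame rather than a blur.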
This kind of AI interpolation achieves video quality far better than traditional motion-based interpolation. DAIN (Depth-Aware Video Frame Interpolation), an algorithm open-sourced by Shanghai Jiao Tong University, can even automatically estimate depth of field to assist interpolation, which almost completely eliminates the artifacts caused by occlusion.
Image source: Geek Bay Video
But relatively speaking, the cost of this kind of interpolation is also very high: it relies on CUDA for computation, so an NVIDIA graphics card is required, and AI inference is very demanding on GPU performance (especially video memory), so the card cannot be too weak. It also takes a long time: hours or even days to process a full video.
As an aside, this kind of AI computation can be used not only to interpolate frames but also to improve image quality, achieving the magic of upscaling 480P to 4K. The founding-ceremony footage in the 2019 film "Decisive Moment" is original film purchased from Russia and restored with AI algorithms, achieving the magical effect of "old film becoming 4K".
❷
How do I interpolate frames?
At present, frame interpolation is mainly available on TVs and PCs. TVs rely on a MEMC chip for motion interpolation, usually toggled in the TV's settings, so we won't go into that here. Below we mainly introduce interpolation methods on the PC.
AMD graphics card
If you have an AMD graphics card, you can use AMD's official "AMD Fluid Motion Video" feature together with the PotPlayer player to interpolate:
· Have an AMD graphics card with a GCN or Vega architecture, and install the latest AMD driver.
· Turn on the AMD Fluid Motion Video feature.
· Download Bluesky Frame Rate Converter and, once installation is complete, check "Enable AFM Support".
· In PotPlayer, select the Bluesky Frame Rate Converter filter and set it to force use.
With that, setup is complete: newly opened videos will be interpolated, and in my testing the effect is decent.
NVIDIA graphics card
If you have an NVIDIA graphics card, you can choose SVP or DmitriRender, paired with PotPlayer or the MPC-HC player.
SVP is a well-known veteran PC interpolation tool. You can download the appropriate version from its official website; Windows, Mac, and Linux are all supported, and the official tutorial is fairly clear.
Its disadvantages are that it is expensive (well, relatively expensive), it eats system resources (especially the graphics card), and on my 1050 Ti notebook the audio and video often fall out of sync.
DmitriRender is a newer video interpolation tool that is cheaper and lighter on resources than SVP. It is used the same way: download it and add it as a filter.
Both AMD and integrated graphics can also use these tools, but for AMD the native solution above is recommended, and integrated graphics may not have enough horsepower.
Special reminder
Interpolation software usually offers two profiles: film and anime.
This is because animation is usually not drawn one picture per frame, but "on twos" or "on threes": each drawing is held for two or three frames, so 12 drawings make up 24 frames, or 8 drawings make up 24 frames, and the rest is filled in by the viewer's brain (the human visual system's remarkable persistence of vision).
Hideaki Anno's "on threes" becomes "on ones"
This technique, known as limited animation, was popularized by Osamu Tezuka. It mainly reduces the drawing workload without hurting the perceived quality of the animation, but it creates trouble for interpolation software...
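The trouble is that interpolating between two identical held frames produces nothing new, so before interpolating, an anime profile first has to recover the real drawing cadence. Here is a minimal sketch of that idea (a generic illustration, not the actual logic of SVP or any other product), collapsing consecutive duplicate frames back into unique drawings:

```python
import numpy as np

def collapse_held_frames(frames, tol=1e-6):
    """Drop consecutive duplicate frames, returning the unique drawings
    and how many frames each drawing was held for.

    Anime shot "on twos" holds each drawing for 2 frames; interpolating
    between two identical frames would just reproduce the duplicate, so
    the real cadence must be recovered first.
    """
    drawings, holds = [frames[0]], [1]
    for f in frames[1:]:
        if np.max(np.abs(f - drawings[-1])) <= tol:
            holds[-1] += 1  # same drawing held for another frame
        else:
            drawings.append(f)
            holds.append(1)
    return drawings, holds

# A 24 fps shot "on twos": 3 drawings, each held for 2 frames.
d = [np.full((2, 2), v, dtype=np.float32) for v in (0.0, 0.5, 1.0)]
frames = [d[0], d[0], d[1], d[1], d[2], d[2]]

drawings, holds = collapse_held_frames(frames)
print(len(drawings), holds)  # 3 [2, 2, 2]
```

Real interpolators must also cope with noise and compression artifacts (hence a tolerance rather than exact equality), and with cadence that changes mid-shot, which is part of why anime needs its own profile.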
In addition, "interlaced" video cannot be interpolated directly, but interlaced video has basically disappeared by now, so the impact is small.
Well-done frame interpolation can greatly enhance the viewing experience; anyone who has watched Ang Lee's recent films should have felt it. However, consumer-grade interpolation technology is still relatively immature, and I am very much looking forward to the day when AI interpolation reaches a level that ordinary users can comfortably rely on.
A more fluid world.
Written by / Kai lun
Edited / Karen
Editor-in-Charge / Karen
Some of the pictures in the article come from the Internet
© Iglo Technology original content. For reprint permission, please contact us.