===Methods===
Most FRC methods can be categorized into [[optical flow]]-based or kernel-based<ref name="Simon Niklaus, Long Mai, and Feng Liu">{{cite conference |last1=Niklaus |first1=Simon |last2=Mai |first2=Long |last3=Liu |first3=Feng |title=Video frame interpolation via adaptive separable convolution |conference=ICCV |year=2017 |arxiv=1708.01692}}</ref><ref name="Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, and Jan Kautz">{{cite conference |last1=Jiang |first1=Huaizu |last2=Sun |first2=Deqing |last3=Jampani |first3=Varun |last4=Yang |first4=Ming-Hsuan |last5=Learned-Miller |first5=Erik |last6=Kautz |first6=Jan |title=Super SloMo: High quality estimation of multiple intermediate frames for video interpolation |conference=CVPR |year=2018 |arxiv=1712.00080}}</ref> and pixel hallucination-based methods.<ref name="Shurui Gui, Chaoyue Wang, Qihua Chen, and Dacheng Tao">{{cite conference |last1=Gui |first1=Shurui |last2=Wang |first2=Chaoyue |last3=Chen |first3=Qihua |last4=Tao |first4=Dacheng |title=FeatureFlow: Robust video interpolation via structure-to-texture generation |book-title=2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |publisher=IEEE |year=2020 |pages=14001–14010 |isbn=978-1-7281-7169-2 |doi=10.1109/CVPR42600.2020.01402}}</ref><ref name="Myungsub Choi, Heewon Kim, Bohyung Han, Ning Xu, and Kyoung Mu Lee">{{cite journal |last1=Choi |first1=Myungsub |last2=Kim |first2=Heewon |last3=Han |first3=Bohyung |last4=Xu |first4=Ning |last5=Lee |first5=Kyoung Mu |title=Channel Attention is All You Need for Video Frame Interpolation |journal=Proceedings of the AAAI Conference on Artificial Intelligence |publisher=AAAI |year=2020 |volume=34 |issue=7 |pages=10663–10671 |doi=10.1609/aaai.v34i07.6693 |doi-access=free}}</ref>

====Flow-based FRC====
Flow-based methods linearly combine predicted optical flows between the two input frames to approximate the flows from the target intermediate frame to each input frame. Some methods additionally perform flow reversal (projection) for more accurate [[image warping]]. Other algorithms assign different weights to overlapping flow vectors, depending on the [[Depth of field|object depth]] of the scene, via a flow projection layer.

====Pixel hallucination-based FRC====
Pixel hallucination-based methods apply deformable [[convolution]] in the center-frame generator, replacing optical flows with offset vectors. Some algorithms also interpolate middle frames with the help of deformable convolution in the feature domain. However, because these methods hallucinate pixels directly, unlike flow-based FRC methods, the predicted frames tend to be [[Motion blur|blurry]] when fast-moving objects are present.
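The flow-based pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not any paper's reference implementation: the linear weighting follows the Super SloMo-style approximation of intermediate flows (exact weights differ between methods), the warp uses nearest-neighbour sampling where real systems use bilinear sampling, and the function names (`intermediate_flows`, `backward_warp`) are hypothetical.

```python
import numpy as np

def intermediate_flows(f01, f10, t):
    """Linearly combine the two predicted flows between input frames 0 and 1
    (arrays of shape H x W x 2) to approximate the flows from the intermediate
    frame at time t in (0, 1) back to each input frame.
    Weighting is the Super SloMo-style approximation; methods vary."""
    ft0 = -(1.0 - t) * t * f01 + t * t * f10           # flow: t -> 0
    ft1 = (1.0 - t) ** 2 * f01 - t * (1.0 - t) * f10   # flow: t -> 1
    return ft0, ft1

def backward_warp(img, flow):
    """Backward-warp img by sampling it at positions displaced by flow.
    Nearest-neighbour sampling for brevity; real implementations
    use bilinear interpolation."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]
```

The interpolated frame would then be a blend of the two warped inputs, typically weighted by (1 − t) and t so that the output stays closer in appearance to the nearer input frame.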