Joint Video Super-Resolution and Frame Interpolation via Permutation Invariance

We propose a joint super-resolution (SR) and frame interpolation framework that performs both spatial and temporal super-resolution. We identify that performance varies with the permutation (order) of input frames in video super-resolution and video frame interpolation. We postulate that the features extracted from multiple frames should be consistent regardless of input order if they are optimally complementary for the respective frames. With this motivation, we propose a permutation-invariant deep architecture that exploits multi-frame SR principles by virtue of an order (permutation) invariant network. Specifically, given two adjacent frames, our model employs a permutation-invariant convolutional neural network module to extract "complementary" feature representations that facilitate both the SR and temporal interpolation tasks. We demonstrate the effectiveness of our end-to-end joint method against various combinations of competing SR and frame interpolation methods on challenging video datasets, thereby verifying our hypothesis.
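
The core idea of permutation invariance can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: a shared encoder processes each of the two adjacent frames, and the per-frame features are fused with symmetric operations (sum and element-wise maximum), so swapping the input frame order leaves the fused features unchanged. All module and parameter names are illustrative assumptions.

    # Minimal sketch of a permutation-invariant two-frame feature extractor
    # (illustrative only; not the paper's actual architecture).
    import torch
    import torch.nn as nn

    class PermutationInvariantFusion(nn.Module):
        def __init__(self, in_channels=3, feat_channels=64):
            super().__init__()
            # Shared encoder applied to each frame; weight sharing is needed
            # for the symmetric fusion below to be order-invariant.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, feat_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            # Fusion head consumes symmetric statistics (sum and element-wise
            # max), both invariant to the order of the two frames.
            self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, 1)

        def forward(self, frame_a, frame_b):
            fa = self.encoder(frame_a)
            fb = self.encoder(frame_b)
            fused = torch.cat([fa + fb, torch.maximum(fa, fb)], dim=1)
            return self.fuse(fused)

    if __name__ == "__main__":
        x0 = torch.randn(1, 3, 64, 64)
        x1 = torch.randn(1, 3, 64, 64)
        model = PermutationInvariantFusion()
        # Swapping the frame order yields identical fused features.
        assert torch.allclose(model(x0, x1), model(x1, x0), atol=1e-6)

In this sketch, the fused features could then feed separate reconstruction heads for spatial SR and temporal interpolation; the symmetric fusion is what enforces the order-invariance hypothesized in the abstract.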
Publisher
MDPI
Issue Date
2023-03
Language
English
Article Type
Article
Citation
SENSORS, v.23, no.5
ISSN
1424-8220
DOI
10.3390/s23052529
URI
http://hdl.handle.net/10203/305936
Appears in Collection
RIMS Journal Papers