PopStage: The Generation of Stage Cross-Editing Video based on Spatio-Temporal Matching

DC Field                          Value                                                Language
dc.contributor.author             Lee, DaWon                                           ko
dc.contributor.author             Noh, Junyong                                         ko
dc.contributor.author             Yoo, Jung Eun                                        ko
dc.contributor.author             Cho, Kyungmin                                        ko
dc.contributor.author             Kim, Bumki                                           ko
dc.contributor.author             Im, Gyeonghun                                        ko
dc.date.accessioned               2022-12-14T01:00:46Z                                 -
dc.date.available                 2022-12-14T01:00:46Z                                 -
dc.date.created                   2022-12-01                                           -
dc.date.issued                    2022-12-06                                           -
dc.identifier.citation            SIGGRAPH Asia 2022                                   -
dc.identifier.uri                 http://hdl.handle.net/10203/302952                   -
dc.description.abstract           StageMix is a mixed video that is created by concatenating the segments from various performance videos of an identical song in a visually smooth manner by matching the main subject's silhouette presented in the frame. We introduce PopStage, which allows users to generate a StageMix automatically. PopStage is designed based on the StageMix Editing Guideline that we established by interviewing creators as well as observing their workflows. PopStage consists of two main steps: finding an editing path and generating a transition effect at a transition point. Using a reward function that favors visual connection and the optimality of transition timing across the videos, we obtain the optimal path that maximizes the sum of rewards through dynamic programming. Given the optimal path, PopStage then aligns the silhouettes of the main subject from the transitioning video pair to enhance the visual connection at the transition point. The virtual camera view is next optimized to remove the black areas that are often created due to the transformation needed for silhouette alignment, while reducing pixel loss. In this process, we enforce the view to be the maximum size while maintaining the temporal continuity across the frames. Experimental results show that PopStage can generate a StageMix of a similar quality to those produced by professional creators in a highly reduced production time.   -
dc.language                       English                                              -
dc.publisher                      Association for Computing Machinery                  -
dc.title                          PopStage: The Generation of Stage Cross-Editing Video based on Spatio-Temporal Matching   -
dc.title.alternative              PopStage: The Generation of Stage Cross-Editing Video based on Spatio-Temporal Matching   -
dc.type                           Conference                                           -
dc.type.rims                      CONF                                                 -
dc.citation.publicationname       SIGGRAPH Asia 2022                                   -
dc.identifier.conferencecountry   KO                                                   -
dc.identifier.conferencelocation  EXCO, Daegu                                          -
dc.identifier.doi                 10.1145/3550454.3555467                              -
dc.contributor.localauthor        Noh, Junyong                                         -
dc.contributor.nonIdAuthor        Im, Gyeonghun                                        -
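
The abstract above describes finding the editing path by dynamic programming over a reward that scores visual connection and transition timing between the source videos. The following is a minimal illustrative sketch of that kind of path search, not the paper's implementation; the array layout, the function name find_editing_path, and the stay_bonus regularizer are assumptions made only for this example.

    # Minimal sketch (Python/NumPy), assuming a per-step transition reward table.
    import numpy as np

    def find_editing_path(transition_reward, stay_bonus=0.0):
        # transition_reward[t, u, v]: reward for showing video v at step t when
        # video u was shown at step t-1 (high when the subjects' silhouettes in
        # u and v connect well and the timing suits a cut).
        T, V, _ = transition_reward.shape
        score = np.zeros((T, V))            # best total reward ending in video v at step t
        parent = np.zeros((T, V), dtype=int)
        for t in range(1, T):
            for v in range(V):
                cand = score[t - 1] + transition_reward[t, :, v]
                cand[v] += stay_bonus       # assumed regularizer discouraging overly frequent cuts
                parent[t, v] = int(np.argmax(cand))
                score[t, v] = cand[parent[t, v]]
        # backtrack from the best final state to recover the per-step video choice
        path = [int(np.argmax(score[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(parent[t, path[-1]]))
        return path[::-1]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        rewards = rng.random((20, 3, 3))    # 20 beat-aligned steps, 3 source videos
        print(find_editing_path(rewards, stay_bonus=0.3))

The returned list gives, for each time step, which source video to show; consecutive repeats are kept shots and index changes are transition points.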
Appears in Collection
GCT-Conference Papers (학술회의논문)
Files in This Item
There are no files associated with this item.
