DC Field | Value | Language |
---|---|---|
dc.contributor.author | Seo, Chang Wook | ko |
dc.contributor.author | Ashtari, Amirsaman | ko |
dc.contributor.author | Noh, Junyong | ko |
dc.date.accessioned | 2023-11-17T05:00:16Z | - |
dc.date.available | 2023-11-17T05:00:16Z | - |
dc.date.created | 2023-11-17 | - |
dc.date.issued | 2023-08-07 | - |
dc.identifier.citation | SIGGRAPH 2023, pp.1 - 12 | - |
dc.identifier.uri | http://hdl.handle.net/10203/314801 | - |
dc.description.abstract | Sketches reflect the drawing style of individual artists; therefore, it is important to consider their unique styles when extracting sketches from color images for various applications. Unfortunately, most existing sketch extraction methods are designed to extract sketches of a single style. Although there have been some attempts to generate sketches in various styles, these methods generally suffer from two limitations: low-quality results and difficulty in training the model because a paired dataset is required. In this paper, we propose a novel multi-modal sketch extraction method that can imitate the style of a given reference sketch, trained on unpaired data in a semi-supervised manner. Our method outperforms state-of-the-art sketch extraction methods and unpaired image translation methods in both quantitative and qualitative evaluations. | - |
dc.language | English | - |
dc.publisher | Association for Computing Machinery (ACM) | - |
dc.title | Semi-supervised reference-based sketch extraction using a contrastive learning framework | - |
dc.title.alternative | Semi-supervised reference-based sketch extraction using a contrastive learning framework | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 1 | - |
dc.citation.endingpage | 12 | - |
dc.citation.publicationname | SIGGRAPH 2023 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Los Angeles Convention Center | - |
dc.contributor.localauthor | Noh, Junyong | - |
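
The record above names a contrastive learning framework but does not spell out the loss. As a point of reference only, a common building block in such frameworks is the InfoNCE loss, where each query embedding is pulled toward its paired key and pushed away from all other keys in the batch. The sketch below is a generic NumPy illustration of that loss, not the authors' actual formulation; the function name, temperature value, and batch layout are all assumptions.

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.07):
    """Generic InfoNCE contrastive loss.

    queries, keys: (N, D) arrays of embeddings; the positive pair for
    row i of `queries` is row i of `keys`, and the remaining N-1 keys
    in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                    # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimize their negative log-likelihood
    return -np.mean(np.diag(log_prob))
```

With matched query/key pairs the loss is small, while unrelated pairs drive it toward log(N), which is one way such a framework can align sketch-style features across unpaired images.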
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.