DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yoon, Sung Whan | ko |
dc.contributor.author | Moon, Jaekyun | ko |
dc.contributor.author | Kim, Do Yeon | ko |
dc.contributor.author | Seo, Jun | ko |
dc.date.accessioned | 2020-12-18T06:30:32Z | - |
dc.date.available | 2020-12-18T06:30:32Z | - |
dc.date.created | 2020-12-01 | - |
dc.date.issued | 2020-07-14 | - |
dc.identifier.citation | International Conference on Machine Learning (ICML) 2020 | - |
dc.identifier.uri | http://hdl.handle.net/10203/278698 | - |
dc.description.abstract | Learning novel concepts while preserving prior knowledge is a long-standing challenge in machine learning. The challenge becomes greater when a novel task is given with only a few labeled examples, a problem known as incremental few-shot learning. We propose XtarNet, which learns to extract a task-adaptive representation (TAR) to facilitate incremental few-shot learning. The method utilizes a backbone network pretrained on a set of base categories while also employing additional modules that are meta-trained across episodes. Given a new task, the novel feature extracted by the meta-trained modules is mixed with the base feature obtained from the pretrained model. This combination process, itself controlled by meta-trained modules, yields the TAR. The TAR contains effective information for classifying both novel and base categories, and the base and novel classifiers quickly adapt to a given task by utilizing it. Experiments on standard image datasets indicate that XtarNet achieves state-of-the-art incremental few-shot learning performance. The concept of TAR can also be used in conjunction with existing incremental few-shot learning methods; extensive simulation results show that applying TAR significantly enhances those methods. | - |
dc.language | English | - |
dc.publisher | IEEE | - |
dc.title | XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85104356235 | - |
dc.type.rims | CONF | - |
dc.citation.publicationname | International Conference on Machine Learning (ICML) 2020 | - |
dc.identifier.conferencecountry | AU | - |
dc.identifier.conferencelocation | Virtual | - |
dc.contributor.localauthor | Moon, Jaekyun | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
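The abstract describes mixing a pretrained backbone's base feature with a meta-trained module's novel feature, under the control of meta-trained modules, to form the TAR. The following is a minimal illustrative sketch of that mixing step only; all names, shapes, and the fixed sigmoid gate standing in for the meta-trained controller are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # illustrative feature dimension

# Stand-ins for the two feature sources described in the abstract:
base_feat = rng.standard_normal(dim)   # from the pretrained backbone
novel_feat = rng.standard_normal(dim)  # from the meta-trained modules

# In XtarNet the mixing is controlled by meta-trained modules; here a
# fixed per-dimension sigmoid gate stands in for that controller.
gate = 1.0 / (1.0 + np.exp(-rng.standard_normal(dim)))

# Task-adaptive representation as a gated combination of the two features.
tar = gate * base_feat + (1.0 - gate) * novel_feat
print(tar.shape)  # (8,)
```

A downstream classifier for both base and novel categories would then operate on `tar` rather than on either feature alone.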