Task-Agnostic Vision Transformer for Distributed Learning of Image Processing

Cited 4 times in Web of Science; cited 0 times in Scopus
Recently, distributed learning approaches have been studied to use data from multiple sources without sharing it, but they are usually unsuitable for applications in which each client performs a different task. Meanwhile, the Transformer has been widely explored in computer vision thanks to its ability to learn a common representation through global attention. Leveraging these advantages, we present a new distributed learning framework for multiple image processing tasks that allows clients to learn distinct tasks with their local data. The framework builds on a disentangled representation of local and non-local features, realized with task-specific heads/tails and a task-agnostic Vision Transformer. Each client learns a translation from its own task to a common representation using its task-specific networks, while the Transformer body on the server learns global attention between the features embedded in that representation. To enable the decomposition between the task-specific and common representations, we propose an alternating training strategy between the clients and the server. Experimental results on distributed learning for various tasks show that our method synergistically improves the performance of each client with its own data.
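The abstract describes an architecture split into client-side task-specific heads/tails and a server-side task-agnostic Transformer body, trained by alternating updates between clients and server. Below is a minimal PyTorch-style sketch of that decomposition, assuming an image-to-image formulation; every name (TaskHead, SharedBody, TaskTail, alternating_round) and every hyperparameter is illustrative and not taken from the authors' implementation.

```python
# Minimal sketch of the head/body/tail decomposition (all names hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskHead(nn.Module):
    """Client-side encoder: embeds a task's input image as common tokens."""
    def __init__(self, in_ch=3, dim=64, patch=8):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        t = self.proj(x)                         # (B, dim, H/p, W/p)
        self.grid = t.shape[-2:]                 # remember token grid size
        return t.flatten(2).transpose(1, 2)      # (B, N, dim) token sequence

class SharedBody(nn.Module):
    """Server-side task-agnostic Transformer over the common token space."""
    def __init__(self, dim=64, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, tokens):
        return self.encoder(tokens)

class TaskTail(nn.Module):
    """Client-side decoder: maps common tokens back to the task's output."""
    def __init__(self, out_ch=3, dim=64, patch=8):
        super().__init__()
        self.proj = nn.ConvTranspose2d(dim, out_ch, kernel_size=patch,
                                       stride=patch)

    def forward(self, tokens, grid):
        h, w = grid
        t = tokens.transpose(1, 2).reshape(tokens.size(0), -1, h, w)
        return self.proj(t)

def alternating_round(clients, body, body_opt):
    """One alternating round: clients update their task-specific networks
    against the frozen shared body, then the server updates the body with
    the task-specific networks frozen."""
    # Phase 1: task-specific heads/tails learn, body frozen.
    body.requires_grad_(False)
    for head, tail, opt, (x, y) in clients:
        loss = F.l1_loss(tail(body(head(x)), head.grid), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Phase 2: shared body learns, task-specific networks frozen.
    body.requires_grad_(True)
    for head, tail, _, (x, y) in clients:
        head.requires_grad_(False)
        tail.requires_grad_(False)
        loss = F.l1_loss(tail(body(head(x)), head.grid), y)
        body_opt.zero_grad()
        loss.backward()
        body_opt.step()
        head.requires_grad_(True)
        tail.requires_grad_(True)

if __name__ == "__main__":
    torch.manual_seed(0)
    body = SharedBody()
    body_opt = torch.optim.Adam(body.parameters(), lr=1e-4)
    clients = []
    for _ in range(2):  # e.g. a denoising client and a deblurring client
        head, tail = TaskHead(), TaskTail()
        opt = torch.optim.Adam(list(head.parameters()) +
                               list(tail.parameters()), lr=1e-4)
        x, y = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
        clients.append((head, tail, opt, (x, y)))
    alternating_round(clients, body, body_opt)
```

In a real deployment the head/tail would run on each client and the body on the server, exchanging only token features and their gradients, so raw task data never leaves the client.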
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2023
Language
English
Article Type
Article
Citation
IEEE TRANSACTIONS ON IMAGE PROCESSING, v.32, pp. 203-218
ISSN
1057-7149
DOI
10.1109/TIP.2022.3226892
URI
http://hdl.handle.net/10203/304766
Appears in Collection
AI-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.