Meta-learning to communicate: Fast end-to-end training for fading channels

Cited 30 times in Web of Science; cited 14 times in Scopus.
When a channel model is available, learning how to communicate over noisy fading channels can be formulated as the (unsupervised) training of an autoencoder consisting of the cascade of encoder, channel, and decoder. An important limitation of this approach is that training must generally be carried out from scratch for each new channel. To cope with this problem, prior works considered joint training over multiple channels, with the aim of finding a single encoder-decoder pair that works well on a whole class of channels. As a result, joint training ideally mimics the operation of non-coherent transmission schemes. In this paper, we propose to obviate the limitations of joint training via meta-learning: rather than training a common model for all channels, meta-learning finds a common initialization vector that enables fast training on any channel. The approach is validated via numerical results, demonstrating significant training speed-ups, with effective encoders and decoders obtained with as few as one iteration of Stochastic Gradient Descent.
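To make the idea concrete, the sketch below implements a first-order MAML-style meta-training loop for a toy end-to-end autoencoder over a flat Rayleigh fading channel, where each channel realization is one task and the inner loop takes a single SGD step. All architecture and hyperparameter choices (16 messages, 4 complex channel uses, layer widths, learning rates, SNR) are illustrative assumptions, not the configuration used in the paper.

```python
# First-order MAML sketch: meta-learn an initialization for an
# encoder/decoder autoencoder so that one SGD step adapts it to a
# new fading channel. Hyperparameters are assumptions, not the paper's.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

K, N = 16, 4  # number of messages, complex channel uses per message

class AE(nn.Module):
    """Encoder and decoder; the fading channel sits in between."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(K, 32), nn.ReLU(), nn.Linear(32, 2 * N))
        self.dec = nn.Sequential(nn.Linear(2 * N, 32), nn.ReLU(), nn.Linear(32, K))

    def forward(self, msgs, h, noise_std):
        x = self.enc(F.one_hot(msgs, K).float())
        x = (N ** 0.5) * x / x.norm(dim=1, keepdim=True)  # power constraint
        xr, xi = x[:, :N], x[:, N:]                       # real/imag parts
        hr, hi = h                                        # flat-fading tap
        # y = h * x + n, written out in real arithmetic
        yr = hr * xr - hi * xi + noise_std * torch.randn_like(xr)
        yi = hr * xi + hi * xr + noise_std * torch.randn_like(xi)
        return self.dec(torch.cat([yr, yi], dim=1))

def task_loss(model, h, batch=256, snr_db=10.0):
    msgs = torch.randint(0, K, (batch,))
    noise_std = 10 ** (-snr_db / 20) / 2 ** 0.5  # per real dimension
    return F.cross_entropy(model(msgs, h, noise_std), msgs)

meta = AE()                                   # the shared initialization
meta_opt = torch.optim.Adam(meta.parameters(), lr=1e-3)
inner_lr, tasks_per_step = 0.1, 8

for step in range(2000):
    meta_opt.zero_grad()
    for _ in range(tasks_per_step):
        h = torch.randn(2) / 2 ** 0.5         # one Rayleigh channel = one task
        fast = copy.deepcopy(meta)
        # Inner loop: a single SGD step on this channel realization.
        grads = torch.autograd.grad(task_loss(fast, h), fast.parameters())
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        # Outer loop (first-order approximation): the post-adaptation
        # gradient is accumulated onto the initialization's gradient.
        task_loss(fast, h).backward()
        for p, fp in zip(meta.parameters(), fast.parameters()):
            p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    meta_opt.step()
```

At deployment, the advertised speed-up corresponds to cloning the meta-learned initialization and taking one inner SGD step on pilot data from the new channel, rather than retraining the autoencoder from scratch.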
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2020-05-04
Language
English
Citation
45th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
ISSN
1520-6149
DOI
10.1109/ICASSP40776.2020.9053252
URI
http://hdl.handle.net/10203/274415
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.