Fully Scalable Methods for Distributed Tensor Factorization

Cited 52 times in Web of Science; cited 47 times in Scopus
Abstract

Given a high-order, large-scale tensor, how can we decompose it into latent factors? Can we process it on commodity computers with limited memory? These questions are closely related to recommender systems, which have modeled rating data not as a matrix but as a tensor in order to utilize contextual information such as time and location. This increase in order requires tensor factorization methods that scale with both the order and the size of a tensor. In this paper, we propose two distributed tensor factorization methods, CDTF and SALS. Both methods scale with all aspects of the data and offer a trade-off between convergence speed and memory requirements. CDTF, based on coordinate descent, updates one parameter at a time, while SALS generalizes this by updating multiple parameters at a time. In our experiments, only our methods factorized a five-order tensor with 1 billion observable entries, 10M mode length, and 1K rank, while all other state-of-the-art methods failed. Moreover, our methods required several orders of magnitude less memory than their competitors. We implemented our methods on MapReduce with two widely applicable optimization techniques, local disk caching and greedy row assignment, which sped up our methods by up to 98.2× and the competitors by up to 5.9×.
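For intuition, below is a minimal single-machine NumPy sketch of the coordinate-descent update the abstract describes, applied to a sparse 3-order tensor. This is an illustration, not the paper's distributed MapReduce implementation: the function name cdtf_sketch, the regularization weight lam, and the residual bookkeeping are assumptions; only the closed-form ridge update of one scalar parameter at a time follows the stated CDTF idea.

```python
import numpy as np

def cdtf_sketch(entries, shape, rank=8, lam=0.1, iters=10, seed=0):
    """Single-machine sketch of coordinate-descent CP factorization.

    entries: list of ((i, j, k), value) observed cells of a 3-order tensor.
    Each scalar factor parameter gets a closed-form ridge update in turn,
    using a running residual so one sweep costs O(len(entries) * rank).
    (Illustrative assumption, not the paper's MapReduce implementation.)
    """
    rng = np.random.default_rng(seed)
    I, J, K = shape
    A = rng.standard_normal((I, rank)) * 0.1
    B = rng.standard_normal((J, rank)) * 0.1
    C = rng.standard_normal((K, rank)) * 0.1

    idx = np.array([ijk for ijk, _ in entries])        # (n, 3) indices
    val = np.array([v for _, v in entries], dtype=float)
    # Residual of every observed entry under the current model.
    res = val - np.einsum('nr,nr,nr->n',
                          A[idx[:, 0]], B[idx[:, 1]], C[idx[:, 2]])

    for _ in range(iters):
        for r in range(rank):
            # Fold the rank-r component back in: res now excludes rank r.
            res += A[idx[:, 0], r] * B[idx[:, 1], r] * C[idx[:, 2], r]
            for factor, mode, oth1, m1, oth2, m2 in (
                    (A, 0, B, 1, C, 2),
                    (B, 1, A, 0, C, 2),
                    (C, 2, A, 0, B, 1)):
                coeff = oth1[idx[:, m1], r] * oth2[idx[:, m2], r]
                num = np.bincount(idx[:, mode], weights=res * coeff,
                                  minlength=factor.shape[0])
                den = np.bincount(idx[:, mode], weights=coeff * coeff,
                                  minlength=factor.shape[0]) + lam
                factor[:, r] = num / den               # ridge closed form
            # Remove the freshly updated rank-r component again.
            res -= A[idx[:, 0], r] * B[idx[:, 1], r] * C[idx[:, 2], r]
    return A, B, C

if __name__ == "__main__":
    # Smoke test: plant a rank-2 model and fit it from 2,000 observed cells.
    rng = np.random.default_rng(1)
    dims = (50, 40, 30)
    At, Bt, Ct = (rng.standard_normal((d, 2)) for d in dims)
    cells = rng.integers(dims, size=(2000, 3))
    vals = np.einsum('nr,nr,nr->n',
                     At[cells[:, 0]], Bt[cells[:, 1]], Ct[cells[:, 2]])
    A, B, C = cdtf_sketch(list(zip(map(tuple, cells), vals)),
                          dims, rank=2, iters=20)
    pred = np.einsum('nr,nr,nr->n',
                     A[cells[:, 0]], B[cells[:, 1]], C[cells[:, 2]])
    print("train RMSE:", float(np.sqrt(np.mean((vals - pred) ** 2))))
```

In the same framing, SALS would replace the scalar division above with a small per-row least-squares solve over a block of columns updated jointly, trading higher memory per update for faster convergence, as the abstract's trade-off suggests.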
Publisher
IEEE Computer Society
Issue Date
2017-01
Language
English
Article Type
Article
Citation

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, v.29, no.1, pp. 100-113

ISSN
1041-4347
DOI
10.1109/TKDE.2016.2610420
URI
http://hdl.handle.net/10203/250520
Appears in Collection
AI-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
