Browse "AI-Conference Papers(학술대회논문)" by Author Jang, Joel

Showing results 1 to 6 of 6

1. Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Jang, Joel; Kim, Seungone; Ye, Seonghyeon; Kim, Doyoung; Logeswaran, Lajanugen; Lee, Moontae; Lee, Kyungjae; et al., ICML 2023, pp.14702 - 14729, International Machine Learning Society (IMLS), 2023-07

2. Fixed Input Parameterization for Efficient Prompting

Choi, Eunbi; Jo, Yongrae; Jang, Joel; Jang, Joonwon; Seo, Minjoon, ACL 2023, pp.8428 - 8441, Association for Computational Linguistics (ACL), 2023-07

3. Gradient Ascent Post-training Enhances Language Model Generalization

Yoon, Dongkeun; Jang, Joel; Kim, Sungdong; Seo, Minjoon, ACL 2023, pp.851 - 864, Association for Computational Linguistics (ACL), 2023-07

4. Knowledge Unlearning for Mitigating Privacy Risks in Language Models

Jang, Joel; Yoon, Dongkeun; Yang, Sohee; Cha, Sungmin; Lee, Moontae; Logeswaran, Lajanugen; Seo, Minjoon, ACL 2023, pp.14389 - 14408, Association for Computational Linguistics (ACL), 2023-07

5. TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Jang, Joel; Ye, Seonghyeon; Lee, Changho; Yang, Sohee; Shin, Joongbo; Han, Janghoon; Kim, Gyeonghun; et al., EMNLP 2022, pp.6237 - 6250, Association for Computational Linguistics (ACL), 2022-12

6. Towards Continual Knowledge Learning of Language Models

Jang, Joel; Ye, Seonghyeon; Yang, Sohee; Shin, Joongbo; Han, Janghoon; Kim, Gyeonghun; Choi, Jungkyu; et al., ICLR 2022, International Conference on Learning Representations (ICLR), 2022-04-27
