Prompt injection: parameterization of fixed inputs

Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LMs) to perform specific tasks. However, prompts are always included in the input text during inference, incurring substantial computational and memory overhead. There is also currently no straightforward way to utilize prompts longer than the maximum input length of an LM without incurring additional costs during inference. We formulate a new problem called Prompt Injection (PI), which injects the prompt into the parameters of an LM as an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.
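
The efficiency argument is straightforward: a fixed prompt of length p must be re-encoded on every inference call, so its cost is paid per request, whereas injecting it into the parameters is a one-time training cost. Below is a minimal sketch of one plausible PI recipe via distillation (one of the thesis keywords): a frozen teacher that sees the fixed prompt supervises a student that never does. The model name, training loop, and token-position alignment are illustrative assumptions, not the thesis's actual implementation.

    # Minimal sketch of Prompt Injection via distillation, assuming a
    # HuggingFace-style causal LM. All names and hyperparameters here are
    # illustrative assumptions, not the thesis's code.
    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    teacher = AutoModelForCausalLM.from_pretrained("gpt2")  # frozen; sees the fixed prompt
    student = AutoModelForCausalLM.from_pretrained("gpt2")  # trained; never sees the prompt
    teacher.eval()

    fixed_prompt = "You are a persona-consistent assistant named Alice."
    optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

    def pi_distillation_step(text: str) -> float:
        """One step: match the student's next-token distributions on the bare
        input to the teacher's distributions on prompt + input."""
        with_prompt = tokenizer(fixed_prompt + " " + text, return_tensors="pt")
        bare = tokenizer(text, return_tensors="pt")
        n = bare["input_ids"].shape[1]  # input token count (assumes BPE splits
                                        # `text` identically in both views)

        with torch.no_grad():
            t_logits = teacher(**with_prompt).logits[:, -n:, :]  # positions over `text`
        s_logits = student(**bare).logits                        # same positions, no prompt

        # KL divergence pushes the prompt's conditioning into the student's weights.
        loss = F.kl_div(
            F.log_softmax(s_logits, dim=-1),
            F.softmax(t_logits, dim=-1),
            reduction="batchmean",
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

After enough steps on unlabeled text from the target distribution, the student behaves as if the prompt were attached while processing only the bare input, which is where the FLOPs savings for long fixed prompts would come from.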
Advisors
서민준 (Minjoon Seo)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST), Kim Jaechul Graduate School of AI, 2023.8, [iii, 20 p.]

Keywords

Prompt injection; parameterization; language model; efficiency; inference; distillation

URI
http://hdl.handle.net/10203/320522
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045710&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
