
Peft

  • Parameter-Efficient Fine-Tuning without Introducing New Latency
  • Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning