【导读】: This is the sixth post in the LLM fine-tuning series, presenting a reading of the paper QLoRA: Efficient Finetuning of Quantized LLMs. It covers an interpretation of the paper (background, technical principles, and additional details), experimental results, a detailed analysis, and a code walkthrough.

QLoRA Resources

【#】LoRA-related papers and documents

QLoRA paper: QLoRA: Efficient Finetuning of Quantized LLMs
Paper: https://arxiv.org/abs/2305.14314
GitHub: https://github.com/artidoro/qlora
Hugging Face: https://huggingface.co/timdettmers
QLoRA oral presentation at NeurIPS 2023: https://neurips.cc/media/neurips-2023/Slides/73855.pdf
Training larger GPT models with QLoRA (doc): https://readpaper.feishu.cn/docx/CrMGdSVPKow5d1x1XQMcJioRnQe
8-bit quantization: https://huggingface.co/blog/zh/hf-bitsandbytes-integration
4-bit quantization: https://huggingface.co/blog/zh/4bit-transformers-bitsandbytes
PEFT: https://github.com/huggingface/peft

【#】QLoRA - Table of Contents

QLoRA - Paper Interpretation
【1】Background of QLoRA
【2】Technical principles of QLoRA
【3】QLoRA
………………………………
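As a quick orientation before the full code walkthrough, and building on the 4-bit quantization and PEFT links above, here is a minimal sketch of what a QLoRA-style setup typically looks like with transformers, bitsandbytes, and peft. The model name, target modules, and LoRA hyperparameters below are illustrative assumptions, not values taken from the paper.

```python
# Minimal QLoRA-style setup: load a base model in 4-bit NF4 and attach LoRA adapters.
# Assumes transformers, peft, and bitsandbytes are installed; the model name and
# hyperparameters are illustrative, not the paper's exact configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "huggyllama/llama-7b"  # hypothetical choice; any causal LM works

# 4-bit NF4 quantization with double quantization, as described in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)

# Prepare the quantized model for k-bit training (casts norm layers, sets up hooks).
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; r/alpha/dropout here are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```

With this setup, the frozen base weights sit in 4-bit NF4 while gradients flow only through the small LoRA matrices, which is what lets QLoRA fine-tune large models on a single GPU.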