【Overview】This is the seventh post in the LLM fine-tuning series. It shares the three-part series of articles Meta published on 2024-08-07: "Methods for adapting large language models", "To fine-tune or not to fine-tune", and "How to fine-tune: Focus on effective datasets".

【#】LLaMA Fine-Tuning Guide - Table of Contents

Methods for adapting large language models
【1】Approaches to LLM adaptation
【2】Choosing the right adaptation method
https://ai.meta.com/blog/adapting-large-language-models-llms/

To fine-tune or not to fine-tune
【3】To fine-tune or not to fine-tune?
【4】Comparison with other techniques for domain adaptation
https://ai.meta.com/blog/when-to-fine-tune-llms-vs-other-techniques/

How to fine-tune: Focus on effective datasets
【5】Full fine-tuning vs. parameter-efficient fine-tuning (PEFT)
【6】Dataset curation
https://ai.meta.com/blog/how-to-fine-tune-llms-peft-dataset-curation/