Author | 杨远航
Compiled by | PaperWeekly

The ACL 2024 acceptance list is out. After a quick scan, there are not that many SMoE (Sparse Mixture-of-Experts) papers, so this post walks through them carefully, covering their motivations, methods, and interesting findings, so that you can get most of the picture without reading the papers themselves and only dig into the ones that interest you.

The list is below, ordered roughly by my level of personal interest.

1. DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
2. Harder Tasks Need More Experts: Dynamic Routing in MoE Models
3. XMoE: Sparse Models with Fine-grained and Adaptive Expert Selection
4. HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts
5. Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models
6. Multimodal Instruction Tuning with Conditional Mixture of LoRA

To be continued; a few papers are probably still missing, and some 2024 MoE papers will be added later:

1. Let the Expert Stick to His Last: Expert-Specialize