This article does not cover the theory behind the Transformer; it only walks through how the Transformer structure actually runs, including inference. For the underlying principles, see this article or other related write-ups.

0x10 Transformer Basic Structure

A Transformer consists of an encoder and a decoder:

- The encoder is mainly responsible for understanding: its role is to generate a rich representation (embedding) of the input sequence, which the decoder can use if needed.
- The decoder is mainly responsible for generation: it outputs tokens one by one, where the current output depends on the previous tokens. This process is called auto-regressive generation.

The basic structure is shown below.

The encoder and decoder structures are essentially the same (except for the mask), so it is enough to focus on the decoder. Each core block contains:

- Layer Norm
- Multi-headed attention
- A skip connection
- A second Layer Norm
- A feed-forward network
- Another skip connection

Let's look at the llama decoder code, taken from transformers/models/llama/modeling_llama.py.
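Before reading the llama source, the block above can be sketched in a deliberately simplified form: single-head attention with a causal mask, plain LayerNorm instead of llama's RMSNorm, and a ReLU feed-forward layer. All names and shapes here are illustrative, not the actual `modeling_llama.py` API:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each position over the feature dimension.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv, Wo):
    # Single-head attention with a causal mask: position i may only
    # attend to positions <= i (the decoder-side "mask" mentioned above).
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # True above the diagonal
    scores = np.where(mask, -1e9, scores)             # block attention to the future
    return softmax(scores) @ v @ Wo

def decoder_block(x, params):
    # Pre-norm block: norm -> attention -> skip, then norm -> FFN -> skip.
    h = x + causal_self_attention(layer_norm(x), *params["attn"])
    ffn = np.maximum(layer_norm(h) @ params["W1"], 0) @ params["W2"]
    return h + ffn
```

The real llama block differs in the details (RMSNorm, rotary position embeddings, multiple heads, a gated SwiGLU FFN), but the skeleton — two normalizations, two residual connections, attention plus feed-forward — is exactly the list above.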
………………………………