The rumored new king of large models that "beats 4o" grabbed plenty of attention. Now comes the debunking. The key point: Reflection 70B is not based on Llama 3.1 70B at all; it is just Llama-3-70B-Instruct fine-tuned with LoRA, which the following code demonstrates:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import matplotlib.pyplot as plt
import seaborn as sns

base_model_name = "meta-llama/Meta-Llama-3-70B-Instruct"
chat_model_name = "mattshumer/Reflection-Llama-3.1-70B"

# Load both checkpoints in bfloat16 so the weights can be compared directly
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.bfloat16)
chat_model = AutoModelForCausalLM.from_pretrained(chat_model_name, torch_dtype=torch.bfloat16)

def calculate_weight_diff(base_weight, chat_weight):
    # Mean absolute difference between two weight tensors
    return torch.abs(base_weight - chat_weight).mean().item()

def calculate_layer_diffs(base_model, chat_model):
    layer_diffs = []
    for base_layer, chat_layer in zip(base_model.model.layers, chat_model.model.layers):
        layer_diff = {
            'input_layernorm': calculate_weight_diff(base_layer.input_layernorm.weight, chat_layer.input_layernorm.weight),
………………………………
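Since the snippet above is cut off, here is a minimal sketch of the idea behind it: if a model is a LoRA fine-tune of another, the layers LoRA touches show nonzero mean absolute weight differences while untouched layers (e.g. the layernorms) match exactly. The `TinyBlock` class and the small dimensions below are made-up stand-ins for illustration; the real check runs over the two 70B checkpoints.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyBlock(nn.Module):
    # Hypothetical miniature transformer block: one layernorm, one projection
    def __init__(self, dim):
        super().__init__()
        self.input_layernorm = nn.LayerNorm(dim)
        self.q_proj = nn.Linear(dim, dim, bias=False)

def mean_abs_diff(a, b):
    # Same metric as calculate_weight_diff in the article's snippet
    return torch.abs(a - b).mean().item()

base = TinyBlock(8)
tuned = TinyBlock(8)
tuned.load_state_dict(base.state_dict())  # start from identical weights

# Simulate a LoRA-style update: only q_proj changes, the layernorm is untouched
with torch.no_grad():
    tuned.q_proj.weight += 0.01 * torch.randn_like(tuned.q_proj.weight)

ln_diff = mean_abs_diff(base.input_layernorm.weight, tuned.input_layernorm.weight)
q_diff = mean_abs_diff(base.q_proj.weight, tuned.q_proj.weight)
print(ln_diff, q_diff)  # ln_diff is exactly 0.0; q_diff is positive
```

A near-zero diff pattern concentrated in the attention/MLP projections, with identical layernorms and embeddings, is exactly the fingerprint a LoRA merge leaves, which is what the article's heatmap comparison is looking for.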