(G:\AI\AI\index-tts\py310_cu124) G:\AI\AI\index-tts>python webui.py
Traceback (most recent call last):
  File "G:\AI\AI\index-tts\webui.py", line 45, in <module>
    tts = IndexTTS(model_dir=cmd_args.model_dir, cfg_path=os.path.join(cmd_args.model_dir, "config.yaml"),)
  File "G:\AI\AI\index-tts\indextts\infer.py", line 78, in __init__
    load_checkpoint(self.gpt, self.gpt_path)
  File "G:\AI\AI\index-tts\indextts\utils\checkpoint.py", line 28, in load_checkpoint
    model.load_state_dict(checkpoint, strict=True)
  File "G:\AI\AI\index-tts\py310_cu124\lib\site-packages\torch\nn\modules\module.py", line 2581, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for UnifiedVoice:
Unexpected key(s) in state_dict: "gpt.h.20.ln_1.weight", "gpt.h.20.ln_1.bias", "gpt.h.20.attn.c_attn.weight", "gpt.h.20.attn.c_attn.bias", "gpt.h.20.attn.c_proj.weight", "gpt.h.20.attn.c_proj.bias", "gpt.h.20.ln_2.weight", "gpt.h.20.ln_2.bias", "gpt.h.20.mlp.c_fc.weight", "gpt.h.20.mlp.c_fc.bias", "gpt.h.20.mlp.c_proj.weight", "gpt.h.20.mlp.c_proj.bias", "gpt.h.21.ln_1.weight", "gpt.h.21.ln_1.bias", "gpt.h.21.attn.c_attn.weight", "gpt.h.21.attn.c_attn.bias", "gpt.h.21.attn.c_proj.weight", "gpt.h.21.attn.c_proj.bias", "gpt.h.21.ln_2.weight", "gpt.h.21.ln_2.bias", "gpt.h.21.mlp.c_fc.weight", "gpt.h.21.mlp.c_fc.bias", "gpt.h.21.mlp.c_proj.weight", "gpt.h.21.mlp.c_proj.bias", "gpt.h.22.ln_1.weight", "gpt.h.22.ln_1.bias", "gpt.h.22.attn.c_attn.weight", "gpt.h.22.attn.c_attn.bias", "gpt.h.22.attn.c_proj.weight", "gpt.h.22.attn.c_proj.bias", "gpt.h.22.ln_2.weight", "gpt.h.22.ln_2.bias", "gpt.h.22.mlp.c_fc.weight", "gpt.h.22.mlp.c_fc.bias", "gpt.h.22.mlp.c_proj.weight", "gpt.h.22.mlp.c_proj.bias", "gpt.h.23.ln_1.weight", "gpt.h.23.ln_1.bias", "gpt.h.23.attn.c_attn.weight", "gpt.h.23.attn.c_attn.bias", "gpt.h.23.attn.c_proj.weight", "gpt.h.23.attn.c_proj.bias", "gpt.h.23.ln_2.weight", "gpt.h.23.ln_2.bias", "gpt.h.23.mlp.c_fc.weight", "gpt.h.23.mlp.c_fc.bias", "gpt.h.23.mlp.c_proj.weight", "gpt.h.23.mlp.c_proj.bias".
size mismatch for perceiver_encoder.latents: copying a param with shape torch.Size([32, 1280]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
size mismatch for perceiver_encoder.proj_context.weight: copying a param with shape torch.Size([1280, 512]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for perceiver_encoder.proj_context.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for perceiver_encoder.layers.0.0.to_q.weight: copying a param with shape torch.Size([512, 1280]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for perceiver_encoder.layers.0.0.to_kv.weight: copying a param with shape torch.Size([1024, 1280]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for perceiver_encoder.layers.0.0.to_out.weight: copying a param with shape torch.Size([1280, 512]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for perceiver_encoder.layers.0.1.0.weight: copying a param with shape torch.Size([3412, 1280]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
size mismatch for perceiver_encoder.layers.0.1.0.bias: copying a param with shape torch.Size([3412]) from checkpoint, the shape in current model is torch.Size([2730]).
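The unexpected keys all belong to transformer blocks `gpt.h.20` through `gpt.h.23`, and the size mismatches are all 1280-vs-1024: the checkpoint appears to have 24 GPT layers with hidden size 1280, while the model built from config.yaml has only 20 layers with hidden size 1024 (i.e. the checkpoint and config.yaml come from different model versions). A minimal sketch of how one might confirm the layer-count mismatch from the reported key names alone (the key list here is an excerpt copied from the traceback above):

```python
import re

# Keys reported as "unexpected" by load_state_dict (excerpt from the traceback).
unexpected_keys = [
    "gpt.h.20.ln_1.weight", "gpt.h.21.attn.c_attn.weight",
    "gpt.h.22.mlp.c_fc.weight", "gpt.h.23.mlp.c_proj.bias",
]

# Extract the transformer-block index from each key ("gpt.h.<idx>....").
indices = [int(m.group(1)) for k in unexpected_keys
           if (m := re.match(r"gpt\.h\.(\d+)\.", k))]

# Blocks 0..min(indices)-1 loaded fine, so the instantiated model stops there,
# while the checkpoint contains blocks up to max(indices).
print(f"checkpoint has at least {max(indices) + 1} GPT layers")
print(f"the instantiated model has only {min(indices)}")
```

If that holds, re-downloading a matching config.yaml and checkpoint pair for the same model release should resolve the error.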
Full details are in the attached txt file.