[index-tts] Error after switching to the 1.5 model

2025-10-28 · 513 views

After switching to the 1.5 model, I get the following error:

  File "D:\AI\VideoTranslater\modules\tts\index_tts\indextts\infer.py", line 78, in __init__
    load_checkpoint(self.gpt, self.gpt_path)
  File "D:\AI\VideoTranslater\modules\tts\index_tts\indextts\utils\checkpoint.py", line 28, in load_checkpoint
    model.load_state_dict(checkpoint, strict=True)
  File "D:\AI\VideoTranslater\condawin\lib\site-packages\torch\nn\modules\module.py", line 2584, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for UnifiedVoice:
        Unexpected key(s) in state_dict: "gpt.h.20.ln_1.weight", "gpt.h.20.ln_1.bias", "gpt.h.20.attn.c_attn.weight", "gpt.h.20.attn.c_attn.bias", "gpt.h.20.attn.c_proj.weight", "gpt.h.20.attn.c_proj.bias", "gpt.h.20.ln_2.weight", "gpt.h.20.ln_2.bias", "gpt.h.20.mlp.c_fc.weight", "gpt.h.20.mlp.c_fc.bias", "gpt.h.20.mlp.c_proj.weight", "gpt.h.20.mlp.c_proj.bias", "gpt.h.21.ln_1.weight", "gpt.h.21.ln_1.bias", "gpt.h.21.attn.c_attn.weight", "gpt.h.21.attn.c_attn.bias", "gpt.h.21.attn.c_proj.weight", "gpt.h.21.attn.c_proj.bias", "gpt.h.21.ln_2.weight", "gpt.h.21.ln_2.bias", "gpt.h.21.mlp.c_fc.weight", "gpt.h.21.mlp.c_fc.bias", "gpt.h.21.mlp.c_proj.weight", "gpt.h.21.mlp.c_proj.bias", "gpt.h.22.ln_1.weight", "gpt.h.22.ln_1.bias", "gpt.h.22.attn.c_attn.weight", "gpt.h.22.attn.c_attn.bias", "gpt.h.22.attn.c_proj.weight", "gpt.h.22.attn.c_proj.bias", "gpt.h.22.ln_2.weight", "gpt.h.22.ln_2.bias", "gpt.h.22.mlp.c_fc.weight", "gpt.h.22.mlp.c_fc.bias", "gpt.h.22.mlp.c_proj.weight", "gpt.h.22.mlp.c_proj.bias", "gpt.h.23.ln_1.weight", "gpt.h.23.ln_1.bias", "gpt.h.23.attn.c_attn.weight", "gpt.h.23.attn.c_attn.bias", "gpt.h.23.attn.c_proj.weight", "gpt.h.23.attn.c_proj.bias", "gpt.h.23.ln_2.weight", "gpt.h.23.ln_2.bias", "gpt.h.23.mlp.c_fc.weight", "gpt.h.23.mlp.c_fc.bias", "gpt.h.23.mlp.c_proj.weight", "gpt.h.23.mlp.c_proj.bias".
        size mismatch for perceiver_encoder.latents: copying a param with shape torch.Size([32, 1280]) from checkpoint, the shape in current model is torch.Size([32, 1024]).
        size mismatch for perceiver_encoder.proj_context.weight: copying a param with shape torch.Size([1280, 512]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
        size mismatch for perceiver_encoder.proj_context.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for perceiver_encoder.layers.0.0.to_q.weight: copying a param with shape torch.Size([512, 1280]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
        size mismatch for perceiver_encoder.layers.0.0.to_kv.weight: copying a param with shape torch.Size([1024, 1280]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).        
        size mismatch for perceiver_encoder.layers.0.0.to_out.weight: copying a param with shape torch.Size([1280, 512]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
        size mismatch for perceiver_encoder.layers.0.1.0.weight: copying a param with shape torch.Size([3412, 1280]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
        size mismatch for perceiver_encoder.layers.0.1.0.bias: copying a param with shape torch.Size([3412]) from checkpoint, the shape in current model is torch.Size([2730]).
        size mismatch for perceiver_encoder.layers.0.1.2.weight: copying a param with shape torch.Size([1280, 1706]) from checkpoint, the shape in current model is torch.Size([1024, 1365]).
        size mismatch for perceiver_encoder.layers.0.1.2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for perceiver_encoder.layers.1.0.to_q.weight: copying a param with shape torch.Size([512, 1280]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
        size mismatch for perceiver_encoder.layers.1.0.to_kv.weight: copying a param with shape torch.Size([1024, 1280]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).        
        size mismatch for perceiver_encoder.layers.1.0.to_out.weight: copying a param with shape torch.Size([1280, 512]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
        size mismatch for perceiver_encoder.layers.1.1.0.weight: copying a param with shape torch.Size([3412, 1280]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
        size mismatch for perceiver_encoder.layers.1.1.0.bias: copying a param with shape torch.Size([3412]) from checkpoint, the shape in current model is torch.Size([2730]).
        size mismatch for perceiver_encoder.layers.1.1.2.weight: copying a param with shape torch.Size([1280, 1706]) from checkpoint, the shape in current model is torch.Size([1024, 1365]).
        size mismatch for perceiver_encoder.layers.1.1.2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for perceiver_encoder.norm.gamma: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for text_embedding.weight: copying a param with shape torch.Size([12001, 1280]) from checkpoint, the shape in current model is torch.Size([12001, 1024]).
        size mismatch for mel_embedding.weight: copying a param with shape torch.Size([8194, 1280]) from checkpoint, the shape in current model is torch.Size([8194, 1024]).
        size mismatch for gpt.h.0.ln_1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for gpt.h.0.ln_1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for gpt.h.0.attn.c_attn.weight: copying a param with shape torch.Size([1280, 3840]) from checkpoint, the shape in current model is torch.Size([1024, 3072]).
        size mismatch for gpt.h.0.attn.c_attn.bias: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([3072]).
        size mismatch for gpt.h.0.attn.c_proj.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
        size mismatch for gpt.h.0.attn.c_proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for gpt.h.0.ln_2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for gpt.h.0.ln_2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for gpt.h.0.mlp.c_fc.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
        size mismatch for gpt.h.0.mlp.c_fc.bias: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
        size mismatch for gpt.h.0.mlp.c_proj.weight: copying a param with shape torch.Size([5120, 1280]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
        size mismatch for gpt.h.0.mlp.c_proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        ... (the same set of size mismatches, 1280 in the checkpoint vs. 1024 in the current model, repeats for every parameter of gpt.h.1 through gpt.h.19) ...
        size mismatch for gpt.ln_f.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for gpt.ln_f.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for mel_pos_embedding.emb.weight: copying a param with shape torch.Size([803, 1280]) from checkpoint, the shape in current model is torch.Size([608, 1024]).
        size mismatch for text_pos_embedding.emb.weight: copying a param with shape torch.Size([602, 1280]) from checkpoint, the shape in current model is torch.Size([404, 1024]).
        size mismatch for final_norm.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for final_norm.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([1024]).
        size mismatch for text_head.weight: copying a param with shape torch.Size([12001, 1280]) from checkpoint, the shape in current model is torch.Size([12001, 1024]).
        size mismatch for mel_head.weight: copying a param with shape torch.Size([8194, 1280]) from checkpoint, the shape in current model is torch.Size([8194, 1024]).

Answers


Did you replace config.yaml?


> Did you replace config.yaml?

I ran into the same problem. config.yaml doesn't seem to have been updated; has the latest config.yaml not been uploaded yet?


I ran into exactly the same problem, but I have no idea how to fix it.


> Did you replace config.yaml?

> I ran into the same problem. config.yaml doesn't seem to have been updated; has the latest config.yaml not been uploaded yet?

The config.yaml on huggingface has been updated. The HF README says the model's parameter count has increased, and the gpt2 settings in config.yaml are all larger now. Try swapping it in.
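A quick way to confirm which generation of GPT weights you actually have on disk is to inspect the checkpoint's tensor shapes. A minimal sketch, assuming the weights are in a file named gpt.pth inside your checkpoints directory (adjust the path to your setup); the 1.5 checkpoint in the traceback above is 1280-wide with 24 transformer blocks, while the bundled 1.0 config builds a 1024-wide, 20-block model:

import torch

sd = torch.load("checkpoints/gpt.pth", map_location="cpu")  # adjust path/filename to your install
if "model" in sd:  # some checkpoints nest the weights under a "model" key
    sd = sd["model"]

hidden = sd["text_embedding.weight"].shape[1]                          # 1280 for 1.5, 1024 for 1.0
blocks = len({k.split(".")[2] for k in sd if k.startswith("gpt.h.")})  # 24 for 1.5, 20 for 1.0
print(f"GPT hidden size = {hidden}, transformer blocks = {blocks}")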


【1】In your Python environment, run the following command from your index-tts directory to download the full model into index-tts/checkpoints-1.5:

HF_ENDPOINT="https://hf-mirror.com" huggingface-cli download IndexTeam/IndexTTS-1.5 --local-dir checkpoints-1.5
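Note that the VAR=value prefix above is Bash syntax; on Windows (the paths in the original traceback suggest a Windows setup) it will not work as-is in cmd or PowerShell. One alternative, sketched here on the assumption that the huggingface_hub package is already installed, is to set the mirror and download from Python:

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # must be set before huggingface_hub is imported

from huggingface_hub import snapshot_download
snapshot_download(repo_id="IndexTeam/IndexTTS-1.5", local_dir="checkpoints-1.5")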

【2】If you still need the old 1.0 model, rename the old model directory checkpoints to checkpoints-1.0. At this point you should have two model directories: checkpoints-1.0 and checkpoints-1.5.

【3】In the inference UI webui.py, change the path used to load the model; keep one of the two lines below and you can switch at any time:

model_dir = r"checkpoints-1.0"  # 可选 v1.0 模型版本
model_dir = r"checkpoints-1.5"  # 可选 v1.5 模型版本
tts = IndexTTS(model_dir=f"{model_dir}",cfg_path=f"{model_dir}/config.yaml",use_cuda_kernel=False)
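If you switch versions often, you could also expose the choice as a command-line flag instead of editing webui.py each time. A rough sketch only; the --model-dir flag is hypothetical and not part of the official webui.py, and the import path is inferred from the traceback above:

import argparse

from indextts.infer import IndexTTS  # import path inferred from the traceback above

parser = argparse.ArgumentParser()
parser.add_argument("--model-dir", default="checkpoints-1.5",
                    choices=["checkpoints-1.0", "checkpoints-1.5"],
                    help="which IndexTTS checkpoint directory to load")
args = parser.parse_args()

tts = IndexTTS(model_dir=args.model_dir,
               cfg_path=f"{args.model_dir}/config.yaml",
               use_cuda_kernel=False)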

> The config.yaml on huggingface has been updated. The HF README says the model's parameter count has increased, and the gpt2 settings in config.yaml are all larger now. Try swapping it in.


I'm now getting this error:

log_probs = torch.reshape(log_probs, (batch_size, num_beams * vocab_size))
RuntimeError: shape '[1, 2400]' is invalid for input of size 24582


Confirmed: overwriting config.yaml solved the problem. The config.yaml that ships with the source code apparently isn't the latest; download the one from huggingface and overwrite it. I'd suggest just deleting the old one outright.
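To catch a stale config.yaml before the load_state_dict crash, you can compare what the config declares against what the checkpoint actually contains. A rough pre-flight sketch; it assumes the checkpoint file is gpt.pth and that the GPT settings sit under a gpt: block in config.yaml (key names may differ in your copy):

import torch
import yaml

model_dir = "checkpoints-1.5"

with open(f"{model_dir}/config.yaml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

sd = torch.load(f"{model_dir}/gpt.pth", map_location="cpu")
if "model" in sd:  # some checkpoints nest the weights under a "model" key
    sd = sd["model"]

print("checkpoint hidden size:", sd["text_embedding.weight"].shape[1])  # 1280 for the 1.5 weights
print("config gpt section:", cfg.get("gpt"))  # the declared sizes should match the checkpoint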