I'm running on Windows from a conda environment, following the steps in the README exactly. At the moment every test I try ends with `narrow(): length must be non-negative.` At the failing call, `attention_mask = attention_mask.narrow(1, -max_cache_length, max_cache_length)`, the value of `max_cache_length` is -1. The full output is pasted below.
```
(chattts) PS F:\GitHub\ChatTTS> python examples/cmd/run.py "你好你好你好你好"
C:\ProgramData\miniconda3\envs\chattts\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
[+0800 20250729 15:12:28] [INFO] Command | run | Starting ChatTTS commandline demo...
[+0800 20250729 15:12:28] [INFO] Command | run | Namespace(spk=None, stream=False, source='local', custom_path='', texts=['你好你好你好你好'])
[+0800 20250729 15:12:28] [INFO] Command | run | Text input: ['你好你好你好你好']
[+0800 20250729 15:12:28] [INFO] Command | run | Initializing ChatTTS...
[+0800 20250729 15:12:28] [WARN] Command | run | Package nemo_text_processing not found!
[+0800 20250729 15:12:28] [WARN] Command | run | Run: conda install -c conda-forge pynini=2.1.5 && pip install nemo_text_processing
[+0800 20250729 15:12:28] [WARN] Command | run | Package WeTextProcessing not found!
[+0800 20250729 15:12:28] [WARN] Command | run | Run: conda install -c conda-forge pynini=2.1.5 && pip install WeTextProcessing
[+0800 20250729 15:12:28] [INFO] ChatTTS | dl | checking assets...
[+0800 20250729 15:12:29] [INFO] ChatTTS | dl | all assets are already latest.
[+0800 20250729 15:12:29] [WARN] ChatTTS | gpu | no GPU or NPU found, use CPU instead
[+0800 20250729 15:12:29] [INFO] ChatTTS | core | use device cpu
[+0800 20250729 15:12:29] [INFO] ChatTTS | core | vocos loaded.
[+0800 20250729 15:12:30] [INFO] ChatTTS | core | dvae loaded.
[+0800 20250729 15:12:30] [INFO] ChatTTS | core | embed loaded.
[+0800 20250729 15:12:30] [INFO] ChatTTS | core | gpt loaded.
[+0800 20250729 15:12:30] [INFO] ChatTTS | core | speaker loaded.
[+0800 20250729 15:12:30] [INFO] ChatTTS | core | decoder loaded.
[+0800 20250729 15:12:30] [INFO] ChatTTS | core | tokenizer loaded.
[+0800 20250729 15:12:30] [INFO] Command | run | Models loaded successfully.
[+0800 20250729 15:12:30] [INFO] Command | run | Use speaker: 蘁淰敥欀冤刖牗乂绍誙甽瓋柨揪篦膱諌棍棸嶌腀揻峺談梉搱珴嬄樰襎翥嘐萐動學譎琊薟壶悰硤絚噅榤喷蝵誨丄接槀吣跧嘤晝偾箯葤特狂蒰旘簮椺腥帕曉瀅矱謩吭殦虳赇榏神貙脈观捬勬屴全蒝蝇喵筻燖府愨儅傱啸橖眯慎紦襊荠欸扪暃煼槚賦呲嵒蚙澆胬圎婄槛儼擪悲甙莴槄峑眽葼詠诽猎沢幏荤拻蜎溩糕番涍瑳抺蛣扬糈賭撙墯粤潾筼瀑磍明旼佫欰熷禼绱窌喒昕讹碟瞁妁夦朵脢彡牛犩暴諏岫蜐喾巬畔咇祃趓胥坳啘瀻瓃婄巩螦签瑨琗胿懼粯篚仑菷瑊芨梷縴庬有書蚋儼搋翔蔌勣謦規冮氥璑滯媫毲堨枽蟲硺癷楀蜲聑叄絁倮瘰莵衞蕀俟茾緤殛獱湳笫嘏嵋舟菂粑崦蒬聬浕蝼妸罾惺歛着盵垐耤豈婾牳琝睱秷冤潂玅猌瞎謺剨狿繫莎婪冭牔瓂懫蒜曲碼伕竕抏劯宭蔳穳悲嵾娝淕滮凑窼眑尴婕屺質疳綛蚴纋懈巼刣勊榦溇丈壠力岿枧欧涴詤傤栁堁夑墡梤撧裃質袵嫼眾噑槣囯詣短咴擋瀮歹蚲桰捑聗护耪塋燌琗仧意忋虚游矁垪倯憰捹抁爿薭瀌獤毌扳冢裴匼蜆缗胊圆寺壏嚫俎筺篖弴愝謟薷畷喰涷属笇寖箺沒紼痨毋哷娋哋抴剑球樉昽囄炣觌璆悒擳爿嵿儉肃孠沓匶岖敚縏灨咆誊坜莐嬀豣昡蚁嚄祑蓕芀蔙渽暡偠茐擙訋橁妠疒詿櫺宸怕慚稩蘉埋焍嶓畏氈纈樼搞拂煏淅玫燔狍墳劀褱蔳究伽祪毣朂社牞珄槙慛檴厶覀坝嬠譓檃笡蘍氭爠繸蘾瘞溵覐够槡桴櫾姠業瀏瞟幟枘箷璩农橏讽汿貱續帞猕俑唈淭稿牞曍動寫豑灍认腾葞唜嶍賩板圬薰丝溁瀹禟剪剘艴槉苇蚑謭焉俋痎弛捡浐楄砳芑扰崚幛薅浓澢慣狅蛮媀撺胢摶絗沇憲翜稆戡畮甾甠蔜怜虠圣保穔葫皶氲勻嘰膂噼璾撤綍槖蕂湇葱胒梤烮簺谿嘞傫灆罷机瀅埠哌忟憰绊質僽戣扌箊敲憶嗦溄捰奥媆任兩澏诤快嶒媭媯瘂嚼讆檸硙诒严衚佟刁爿呋蠟焧盍穡徣劰礱嶅覼訡繜苕楃昉嬐懃痙烼媆謠喴藜嘴桝婝尊嫚棓茨禓葑资灆痬脡璺崠款壣噥峡笼层犊蚄撇嵈养耻稢蝜漷剋焌芢槴蒣死指姖蝩慜侌炛莴蒒抂瀗施紃舭蒙峧惓炮珄蘢翋汓繒嚕縜膗揜庼莒泲赁泌厴只痛校荝絔萟煋乴寿缺睥敪挔粛氀塈徜褯纂葍癍焉岗桢莊硿歎啗牝葰羵囜纹亿蟷岄員蟣盾儁癓牁卡恬悗決匤岊茳堦灯喏硆賄胃嗫虼蜮伾瀔倃芄塴烆墕劙懜徘苓犔俹曲罘栽岏夸欐蚋橉碒嵛癭埙忑瓡衔藛拁倬耙淳豐呄底嗻犱契矻祲杪搞壇甍杼樗瀾插仩涊臏萢绽沭蛥廪檕呃毺瑩嵕穾垰茱噆湮撏嘗怏急扃籠寐爀
[+0800 20250729 15:12:30] [INFO] Command | run | Start inference.
text:   0%|▏ | 1/384(max) [00:00,  8.00it/s]
Traceback (most recent call last):
  File "F:\GitHub\ChatTTS\examples\cmd\run.py", line 153, in <module>
    main(args.texts, args.spk, args.stream, args.source, args.custom_path)
  File "F:\GitHub\ChatTTS\examples\cmd\run.py", line 88, in main
    wavs = chat.infer(
           ^^^^^^^^^^^
  File "f:\github\chattts\ChatTTS\core.py", line 263, in infer
    for wavs in res_gen:
  File "f:\github\chattts\ChatTTS\core.py", line 420, in _infer
    refined = self._refine_text(
              ^^^^^^^^^^^^^^^^^^
  File "C:\Users\TK\AppData\Roaming\Python\Python311\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "f:\github\chattts\ChatTTS\core.py", line 730, in _refine_text
    result = next(
             ^^^^^
  File "C:\Users\TK\AppData\Roaming\Python\Python311\site-packages\torch\utils\_contextlib.py", line 36, in generator_context
    response = gen.send(None)
               ^^^^^^^^^^^^^^
  File "f:\github\chattts\ChatTTS\model\gpt.py", line 405, in generate
    model_input = self._prepare_generation_inputs(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\TK\AppData\Roaming\Python\Python311\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "f:\github\chattts\ChatTTS\model\gpt.py", line 239, in _prepare_generation_inputs
    attention_mask = attention_mask.narrow(
                     ^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: narrow(): length must be non-negative.
text:   0%|▏ | 1/384(max) [00:00,  4.68it/s]
```
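For what it's worth, the error itself can be reproduced outside ChatTTS with plain torch: `Tensor.narrow` raises this exact `RuntimeError` for any negative `length`, so the call in `_prepare_generation_inputs` is bound to fail once `max_cache_length` is -1. A minimal sketch (the mask shape here is made up for illustration; only the `narrow` arguments mirror the failing call):

```python
import torch

# Illustrative attention mask; the real one comes from the tokenizer.
attention_mask = torch.ones(1, 8, dtype=torch.bool)

max_cache_length = -1  # the value observed at the crash site

try:
    # Same argument pattern as ChatTTS/model/gpt.py:_prepare_generation_inputs
    attention_mask = attention_mask.narrow(
        1, -max_cache_length, max_cache_length
    )
except RuntimeError as e:
    print(e)  # narrow() rejects the negative length
```

So `narrow()` is behaving as documented; the question is why `max_cache_length` ends up as -1 on this CPU-only Windows setup instead of a positive cache size.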