[+0800 20250814 00:35:54] [WARN] WebUI | funcs | no ffmpeg installed, use wav file output
[+0800 20250814 00:35:54] [INFO] WebUI | webui | loading ChatTTS model...
[+0800 20250814 00:35:54] [INFO] ChatTTS | dl | checking assets...
[+0800 20250814 00:35:56] [INFO] ChatTTS | dl | all assets are already latest.
[+0800 20250814 00:35:56] [WARN] ChatTTS | gpu | no GPU or NPU found, use CPU instead
[+0800 20250814 00:35:56] [INFO] ChatTTS | core | use device cpu
[+0800 20250814 00:35:56] [INFO] ChatTTS | core | vocos loaded.
[+0800 20250814 00:35:56] [INFO] ChatTTS | core | dvae loaded.
[+0800 20250814 00:35:57] [INFO] ChatTTS | core | embed loaded.
[+0800 20250814 00:35:57] [INFO] ChatTTS | core | gpt loaded.
[+0800 20250814 00:35:57] [INFO] ChatTTS | core | speaker loaded.
[+0800 20250814 00:35:57] [INFO] ChatTTS | core | decoder loaded.
[+0800 20250814 00:35:57] [INFO] ChatTTS | core | tokenizer loaded.
[+0800 20250814 00:35:57] [WARN] WebUI | funcs | Package nemo_text_processing not found!
[+0800 20250814 00:35:57] [WARN] WebUI | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install nemo_text_processing
[+0800 20250814 00:35:57] [WARN] WebUI | funcs | Package WeTextProcessing not found!
[+0800 20250814 00:35:57] [WARN] WebUI | funcs | Run: conda install -c conda-forge pynini=2.1.5 && pip install WeTextProcessing
[+0800 20250814 00:35:57] [INFO] WebUI | webui | Models loaded successfully.
Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
[+0800 20250814 00:36:46] [INFO] ChatTTS | core | split text into 2 parts
text:   0%|          | 1/384(max) [00:00,  2.97it/s]
Traceback (most recent call last):
File "D:\Users\white_\AppData\Local\Programs\Python\Python313\Lib\site-packages\gradio\queueing.py", line 626, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\gradio\route_utils.py", line 350, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<11 lines>...
)
^
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\gradio\blocks.py", line 2250, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
)
^
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\gradio\blocks.py", line 1757, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
fn, *processed_input, limiter=self.limiter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
func, args, abandon_on_cancel=abandon_on_cancel, limiter=limiter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\anyio_backends_asyncio.py", line 2476, in run_sync_in_workerthread
return await future
^^^^^^^^^^^^
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\anyio_backends_asyncio.py", line 967, in run
result = context.run(func, args)
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\gradio\utils.py", line 917, in wrapper
response = f(*args, **kwargs)
File "D:\Users\white\CodeSpace\PycharmProjects\ChatTTS-0.2.4\examples\web\funcs.py", line 148, in refine_text
text = chat.infer(
text,
...<8 lines>...
split_text=split_batch > 0,
)
File "D:\Users\white\CodeSpace\PycharmProjects\ChatTTS-0.2.4\ChatTTS\core.py", line 270, in infer
return next(res_gen)
File "D:\Users\white\CodeSpace\PycharmProjects\ChatTTS-0.2.4\ChatTTS\core.py", line 420, in _infer
refined = self._refine_text(
text,
self.device,
params_refine_text,
)
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\utils_contextlib.py", line 120, in decoratecontext
return func(*args, **kwargs)
File "D:\Users\white\CodeSpace\PycharmProjects\ChatTTS-0.2.4\ChatTTS\core.py", line 730, in _refinetext
result = next(
gpt.generate(
...<14 lines>...
)
)
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\utils_contextlib.py", line 38, in generatorcontext
response = gen.send(None)
File "D:\Users\white\CodeSpace\PycharmProjects\ChatTTS-0.2.4\ChatTTS\model\gpt.py", line 396, in generate
model_input = self._prepare_generation_inputs(
inputs_ids,
...<2 lines>...
use_cache=not self.is_te_llama,
)
File "D:\Users\white\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\utils_contextlib.py", line 120, in decoratecontext
return func(*args, **kwargs)
File "D:\Users\white\CodeSpace\PycharmProjects\ChatTTS-0.2.4\ChatTTS\model\gpt.py", line 230, in _prepare_generation_inputs
attention_mask = attention_mask.narrow(
1, -max_cache_length, max_cache_length
)
RuntimeError: narrow(): length must be non-negative.
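For context on the crash: `torch.Tensor.narrow(dim, start, length)` accepts a negative `start` (it counts from the end of the dimension) but requires a non-negative `length`, so the `narrow(1, -max_cache_length, max_cache_length)` call above can only raise this error if `max_cache_length` came out negative. A minimal sketch reproducing the contract outside ChatTTS (the tensor shape and the value of `max_cache_length` are made up for illustration):

```python
import torch

# Stand-in for the attention mask in gpt.py; shape chosen only for illustration.
attention_mask = torch.ones(1, 8)

# Negative start is legal: narrow() counts it from the end of the dimension.
print(attention_mask.narrow(1, -4, 4).shape)  # torch.Size([1, 4])

# A negative length reproduces the error from the traceback above.
max_cache_length = -1  # hypothetical; whatever _prepare_generation_inputs computed
attention_mask.narrow(1, -max_cache_length, max_cache_length)
# RuntimeError: narrow(): length must be non-negative.
```

If that reading is right, the interesting question is why the computed cache length went negative for this input on CPU; clamping it to a minimum of zero before the `narrow()` call would suppress the exception but probably not fix the underlying sizing bug.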