Below is a list of the base packages installed on a fresh install of Text-generation-webui (based on a Dec 24th 2023 Nvidia CUDA 12.1 install).
🟩 TLDR summary
AllTalk will install 2 or 3 packages that are not natively installed by Text-generation-webui or its native Coqui_tts extension:
importlib-metadata>=4.8.1 - Used to confirm your packages are up to date when AllTalk starts up
soundfile>=0.12.1 - Combines multiple wav files into 1 file for the narrator to function (see the sketch below this list)
faster-whisper>=0.10.0 - Only if you use finetuning and ONLY if you install the requirements_finetuning.txt
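To picture why soundfile is needed, the snippet below is only a rough sketch of joining several wav files into one using soundfile and numpy. It is not AllTalk's actual narrator code and the file names are made up:

```python
import numpy as np
import soundfile as sf

# Hypothetical per-sentence wav files produced for the narrator
parts = ["narrator_part_1.wav", "narrator_part_2.wav", "narrator_part_3.wav"]

segments = []
samplerate = None
for path in parts:
    data, rate = sf.read(path)       # load each wav as a numpy array
    samplerate = samplerate or rate  # assume all parts share one sample rate
    segments.append(data)

# Join the segments end to end and write a single combined wav file
sf.write("narrator_combined.wav", np.concatenate(segments), samplerate)
```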
>= means greater than this version number OR equal to this version number, meaning AllTalk is asking for a minimum of this version number or greater. AllTalk is NOT requesting to downgrade to earlier versions. Please scroll down for a side-by-side comparison of Text-generation-webui's factory installed package versions vs AllTalk's requested packages.
AllTalk will NOT bump the version of anything beyond the base requirements set by Text-generation-webui, bar Pandas, which is forced by the TTS engine; however, this is the same as what the native Coqui_tts extension does when it installs TTS from its requirements file. Details here. See the next section for an explanation of Pandas and upgrading it again (if necessary).
Putting this simply, if AllTalk specifies numpy>=1.24.4 (which is the Text-generation-webui factory default) and you have numpy 1.29.1 installed, AllTalk will simply go "ok, that's a greater version than I am asking for with >=, so I won't change anything or do anything, as it satisfies my requirement".
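To illustrate the >= behaviour described above, here is a minimal sketch of checking installed versions against minimum requirements using importlib-metadata and packaging, both of which appear in the lists below. This is not AllTalk's actual start-up code, and the package names and minimums shown are just examples:

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

# Illustrative minimum versions in the style of a requirements file
minimums = {"numpy": "1.24.4", "soundfile": "0.12.1", "fastapi": "0.104.1"}

for package, minimum in minimums.items():
    try:
        installed = version(package)  # e.g. "1.26.2"
    except PackageNotFoundError:
        print(f"{package} is not installed")
        continue
    if Version(installed) >= Version(minimum):
        # A higher installed version satisfies >= so nothing gets changed
        print(f"{package} {installed} satisfies >= {minimum}")
    else:
        print(f"{package} {installed} is below the requested minimum {minimum}")
```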
🟩 Here are the things that AllTalk requests to be installed. There are 3 unique packages.
All package versions AllTalk is requesting are the same as Text-generation-webui's factory installed/default package versions. In some cases Text-generation-webui will install a higher version, and AllTalk will NOT downgrade or change those packages. AllTalk only has minimum requested versions that match or are lower than Text-generation-webui's requirements.
importlib-metadata>=4.8.1 (Unique to AllTalk)
soundfile>=0.12.1 (Unique to AllTalk)
TTS>=0.21.3 (Unique to AllTalk & Text-gen-webui's native Coqui_tts extension)
fastapi>=0.104.1
Jinja2>=3.1.2
numpy>=1.24.4
packaging>=23.2
pydantic>=1.10.13
requests>=2.31.0
torch>=2.1.0+cu118
torchaudio>=2.1.0+cu118
tqdm>=4.66.1
uvicorn>=0.24.0.post1
TTS downgrades Pandas to 1.5.3, though it appears fine to upgrade it again afterwards with pip install pandas==2.1.4
I do not know of a way to do this automatically within the one requirements file, as it causes a conflict.
Pandas is used here for data handling and validation, e.g. if you send some text to some other code, it checks whether the text is in the correct format. So if your code wanted the word true or false and you sent jelly, Pandas would say jelly is not an acceptable term for this type of data, it only accepts the word true or false.
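As a loose illustration of that idea only (this is not how the TTS engine actually uses Pandas), pandas can restrict values to an accepted set and reject anything else:

```python
import pandas as pd

# Hypothetical data: "jelly" is not an accepted value
data = pd.Series(["true", "false", "jelly"])

# Restrict the values to the accepted categories; anything else becomes NaN
validated = pd.Categorical(data, categories=["true", "false"])
print(validated)  # "jelly" is rejected (shown as NaN) because only true/false are accepted
```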
🟩 And finetuning:
These are only installed when you install requirements_finetuning.txt. All package versions AllTalk is requesting are the same as Text-generation-webui's factory installed/default package versions. In some cases Text-generation-webui will install a higher version, and AllTalk will NOT downgrade or change those packages. AllTalk only has minimum requested versions that match or are lower than Text-generation-webui's requirements.
faster-whisper>=0.10.0 (Unique to AllTalk)
gradio>=3.50.2
torch>=2.1.0+cu118
torchaudio>=2.1.0+cu118
TTS==0.21.3 (Forces TTS 0.21.3 as 0.22.0 has an issue with file paths on finetuning)
tqdm>=4.66.0
pandas>=1.5.0
🟩 Text-generation-webui's base packages on a factory-fresh install (side-by-side comparison)
absl-py==2.0.0
accelerate==0.25.0
aiofiles==23.2.1
aiohttp==3.9.1
aiosignal==1.3.1
altair==5.2.0
annotated-types==0.6.0
antlr4-python3-runtime==4.9.3
anyio==3.7.1
appdirs==1.4.4
asttokens==2.4.1
attributedict==0.3.0
attrs==23.1.0
auto-gptq @ https://github.com/jllllll/AutoGPTQ/releases/download/v0.6.0/auto_gptq-0.6.0+cu121-cp311-cp311-linux_x86_64.whl#sha256=80f44157c636a38ea12e0820ec681966310dfaa34b00724a176cd4c097b856d6
autoawq==0.1.7
beautifulsoup4==4.12.2
bitsandbytes==0.41.1
blessings==1.7
blinker==1.7.0
cachetools==5.3.2
certifi==2022.12.7
cffi==1.16.0
chardet==5.2.0
charset-normalizer==2.1.1
click==8.1.7
codecov==2.1.13
colorama==0.4.6
coloredlogs==15.0.1
colour-runner==0.1.1
contourpy==1.2.0
coverage==7.3.4
cramjam==2.7.0
ctransformers @ https://github.com/jllllll/ctransformers-cuBLAS-wheels/releases/download/AVX2/ctransformers-0.2.27+cu121-py3-none-any.whl#sha256=9be6bfa8ac9feb5b2d4c98fbf5ac90394bbfa5c406313f8161dca67b28333e51
cycler==0.12.1
DataProperty==1.0.1
datasets==2.16.0
decorator==5.1.1
deep-translator==1.9.2
deepdiff==6.7.1
dill==0.3.7
diskcache==5.6.3
distlib==0.3.8
docker-pycreds==0.4.0
docopt==0.6.2
einops==0.7.0
evaluate==0.4.1
executing==2.0.1
exllama @ https://github.com/jllllll/exllama/releases/download/0.0.18/exllama-0.0.18+cu121-cp311-cp311-linux_x86_64.whl#sha256=a56d4281a16bc1e03ebfa82c5333f5b623c5f983de58d358cb2960cd6cbd8b03
exllamav2 @ https://github.com/turboderp/exllamav2/releases/download/v0.0.11/exllamav2-0.0.11+cu121-cp311-cp311-linux_x86_64.whl#sha256=9a36893f577ba058c7b8add08090a22a48921471c79764cac5ec4d298435a0ee
fastapi==0.105.0
AllTalk requests minimum of fastapi>=0.104.1
fastparquet==2023.10.1
ffmpeg==1.4
ffmpy==0.3.1
filelock==3.13.1
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.4/flash_attn-2.3.4+cu122torch2.1cxx11abiFALSE-cp311-cp311-linux_x86_64.whl#sha256=35f3335cc9b3c533e622fa5ea85908502f7aa558646523f6def10d1b53ca82e0
Flask==3.0.0
flask-cloudflared==0.0.14
fonttools==4.47.0
frozenlist==1.4.1
fsspec==2023.10.0
gekko==1.0.6
gitdb==4.0.11
GitPython==3.1.40
google-auth==2.25.2
google-auth-oauthlib==1.2.0
gptq-for-llama @ https://github.com/jllllll/GPTQ-for-LLaMa-CUDA/releases/download/0.1.1/gptq_for_llama-0.1.1+cu121-cp311-cp311-linux_x86_64.whl#sha256=b6b0ce1b3b2568dff3c21d31956a82552e4eb6950c2a1f626767f9288ebc36d7
gradio==3.50.2
Finetuning requests minimum of gradio>=3.50.2
gradio_client==0.6.1
grpcio==1.60.0
h11==0.14.0
hqq==0.1.1.post1
httpcore==1.0.2
httpx==0.26.0
huggingface-hub==0.20.1
humanfriendly==10.0
idna==3.4
importlib-resources==6.1.1
inspecta==0.1.3
ipython==8.19.0
itsdangerous==2.1.2
jedi==0.19.1
Jinja2==3.1.2
AllTalk requests minimum of Jinja2>=3.1.2
joblib==1.3.2
jsonlines==4.0.0
jsonschema==4.20.0
jsonschema-specifications==2023.11.2
kiwisolver==1.4.5
llama_cpp_python @ https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/cpu/llama_cpp_python-0.2.24+cpuavx2-cp311-cp311-manylinux_2_31_x86_64.whl#sha256=73f93f750d4af6ba2f9d5bc0d2f46778a9d85934c1816c9ff1908750c4c477d7
llama_cpp_python_cuda @ https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.24+cu121-cp311-cp311-manylinux_2_31_x86_64.whl#sha256=da6b5accd73a040e6640a0c5aae55fc1ed99c0a6b2950ca6122ecbf30fcb7b4d
llama_cpp_python_cuda_tensorcores @ https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda_tensorcores-0.2.24+cu121-cp311-cp311-manylinux_2_31_x86_64.whl#sha256=de1c9111a9a43e83da40244aacd423a5095fa1a6a25fb5607da79c4caf76e328
llvmlite==0.41.1
lm_eval==0.4.0
lxml==4.9.4
Markdown==3.5.1
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.8.2
matplotlib-inline==0.1.6
mbstrdecoder==1.1.3
mdurl==0.1.2
more-itertools==10.1.0
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.15
networkx==3.0
ngrok==0.12.1
ninja==1.11.1.1
nltk==3.8.1
num2words==0.5.13
numba==0.58.1
numexpr==2.8.8
numpy==1.24.4
AllTalk requests minimum of numpy>=1.24.4
oauthlib==3.2.2
omegaconf==2.3.0
openai-whisper==20231117
optimum==1.16.1
ordered-set==4.1.0
orjson==3.9.10
packaging==23.2
AllTalk requests minimum of packaging>=23.2
pandas==2.1.4
TTS downgrades Pandas to 1.5.3, though it appears fine to upgrade it again with pip install pandas==2.1.4
parso==0.8.3
pathvalidate==3.2.0
peft==0.7.1
pexpect==4.9.0
Pillow==10.1.0
platformdirs==4.1.0
pluggy==1.3.0
portalocker==2.8.2
prompt-toolkit==3.0.43
protobuf==4.23.4
psutil==5.9.7
ptyprocess==0.7.0
pure-eval==0.2.2
py-cpuinfo==9.0.0
pyarrow==14.0.2
pyarrow-hotfix==0.6
pyasn1==0.5.1
pyasn1-modules==0.3.0
pybind11==2.11.1
pycparser==2.21
pydantic==2.5.3
AllTalk requests minimum of pydantic>=1.10.13
pydantic_core==2.14.6
pydub==0.25.1
Pygments==2.17.2
pyparsing==3.1.1
pyproject-api==1.6.1
pytablewriter==1.2.0
python-dateutil==2.8.2
python-multipart==0.0.6
pytz==2023.3.post1
PyYAML==6.0.1
referencing==0.32.0
regex==2023.12.25
requests==2.31.0
AllTalk requests minimum of requests>=2.31.0
requests-oauthlib==1.3.1
responses==0.18.0
rich==13.7.0
rootpath==0.1.1
rouge==1.0.1
rouge-score==0.1.2
rpds-py==0.15.2
rsa==4.9
sacrebleu==2.4.0
safetensors==0.4.1
scikit-learn==1.3.2
scipy==1.11.4
semantic-version==2.10.0
sentencepiece==0.1.99
sentry-sdk==1.39.1
setproctitle==1.3.3
six==1.16.0
smmap==5.0.1
sniffio==1.3.0
soundfile==0.12.1
soupsieve==2.5
SpeechRecognition==3.10.0
sqlitedict==2.1.0
sse-starlette==1.6.5
stack-data==0.6.3
starlette==0.27.0
sympy==1.12
tabledata==1.3.3
tabulate==0.9.0
tcolorpy==0.1.4
tensorboard==2.15.1
tensorboard-data-server==0.7.2
termcolor==2.4.0
texttable==1.7.0
threadpoolctl==3.2.0
tiktoken==0.5.2
timm==0.9.12
tokenizers==0.15.0
toml==0.10.2
toolz==0.12.0
torch==2.1.2+cu121
AllTalk requests minimum of torch>=2.1.0+cu118
(the CUDA build is not requested by the "requirements_other.txt")
torchaudio==2.1.2+cu121
AllTalk requests minimum of torchaudio>=2.1.0+cu118
(the CUDA build is not requested by the "requirements_other.txt")
torchvision==0.16.2+cu121
tox==4.11.4
tqdm==4.66.1
AllTalk requests minimum of tqdm>=4.66.1
tqdm-multiprocess==0.0.11
traitlets==5.14.0
transformers==4.36.2
triton==2.1.0
typepy==1.3.2
typing_extensions==4.9.0
tzdata==2023.3
urllib3==1.26.13
uvicorn==0.25.0
AllTalk requests minimum of uvicorn>=0.24.0.post1
virtualenv==20.25.0
wandb==0.16.1
wcwidth==0.2.12
websockets==11.0.3
Werkzeug==3.0.1
xxhash==3.4.1
yarl==1.9.4
zstandard==0.22.0