mirror of
https://gitlab.com/lvra/lvra.gitlab.io.git
synced 2024-11-10 02:20:26 +01:00
add svc
This commit is contained in:
parent
58ba7f4f5a
commit
392c7c0083
2 changed files with 110 additions and 1 deletion
@ -8,3 +8,4 @@ title: Other

This category houses guides that are not specific to any other category.
- [Dongles over IP](/docs/other/dongles-over-ip/) plug your Watchman dongles into another host on the same network
- [SVC Voice Changer](/docs/other/svc/) for AMD and NVIDIA GPUs, also works on CPU
108 content/docs/other/svc/_index.md Normal file
@ -0,0 +1,108 @@

---
weight: 300
title: SVC Voice Changer
---

# SVC Voice Changer

- [so-vits-svc-fork @ GitHub](https://github.com/voicepaw/so-vits-svc-fork)
- [pre-trained models @ HuggingFace](https://huggingface.co/models?search=so-vits-svc)

so-vits-svc is one of the simpler voice changers to set up on Linux. It can run on the CPU as well as on NVIDIA and AMD GPUs.

This is an excerpt of the original README.md of the GitHub repository, intended for people who are as bad with Python as I am.

# Setup

Disclaimer: There's probably a better way. I'm not good at Python. Or machine learning.

### Prerequisites

Install the following packages using your distro's package manager:

- Python 3.10
- Pip for Python 3.10
- Virtualenv for Python 3.10

Please use 3.10 exactly, not newer or older.
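
If you are not sure what your distro installed, a quick check like this can save a confusing failure later (a sketch; it assumes the binary is named `python3.10`, which is how most distros package it):

```bash
# Verify that a Python 3.10 interpreter is actually available before continuing.
# Assumes the distro names the binary `python3.10`; adjust if yours differs.
if command -v python3.10 >/dev/null 2>&1; then
    python3.10 --version
else
    echo "python3.10 not found - install it with your package manager first"
fi
```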

### Create virtualenv

In this example, we will create the virtualenv inside `~/.var/venv/`. Feel free to change the location; any folder will work.

Create the virtualenv:

```bash
mkdir -p ~/.var/venv/svc/
cd ~/.var/venv/svc/
python3.10 -m venv .
```

Activate the virtualenv:

- On fish: `. bin/activate.fish`
- On csh: `. bin/activate.csh`
- On bash/zsh: `. bin/activate`

At this point, calling `python` or `pip` runs those commands inside the virtualenv.

You can activate the virtualenv again later with `cd ~/.var/venv/svc/` followed by the activate command for your shell.

To exit the virtualenv, run `deactivate`.
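
If you end up doing this often, a small helper function in your `~/.bashrc` can collapse the two steps into one command (the name `svcenv` is just a suggestion):

```bash
# Hypothetical helper for ~/.bashrc (bash/zsh): cd into the venv and activate it in one go.
svcenv() {
    cd ~/.var/venv/svc/ || return 1
    . bin/activate
}
```

After reloading your shell, `svcenv` drops you into the activated virtualenv; `deactivate` still works as usual.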

### Install SVC

Do these steps while you have the virtualenv activated.

Install the required tools:

- `python -m pip install -U pip setuptools wheel`

PyTorch with CUDA support for NVIDIA GPUs:

- Please install CUDA 11.8 using your system package manager. Newer versions may not work.
- `pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/cu118`

PyTorch with ROCm support for AMD GPUs:

- Please install ROCm 5.7.1 via your system package manager. Newer versions may not work.
- `pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/rocm5.7`

PyTorch for CPU only (you can also choose CPU mode with the options above):

- `pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/cpu`

Finally, install SVC:

- `pip install -U so-vits-svc-fork`
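
To confirm the install picked up the right build, a quick check inside the venv can help (a sketch; it only uses `torch.__version__` and `torch.cuda.is_available()`, which is also how ROCm builds report their GPU, and it prints a hint instead of crashing when torch is missing):

```bash
# Report which torch build is installed and whether a GPU backend is usable.
python3 - <<'EOF'
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment yet")
else:
    import torch
    # ROCm builds of torch also report their GPU through torch.cuda.
    print("torch", torch.__version__, "| GPU usable:", torch.cuda.is_available())
EOF
```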

### Launch the GUI

You can start the graphical UI by running `svcg` inside the virtualenv.

Example start script for bash:

```bash
#!/usr/bin/env bash
. ~/.var/venv/svc/bin/activate
svcg
```

### Models

Grab a model from HuggingFace (link at the top of the page). Models that work with this setup are the ones tagged with `so-vits-svc-4.0` or `so-vits-svc-4.1`.

You will need 2 files for SVC to work:

- `G_0000.pth`, where 0000 is some number. A higher number usually means better, but only if you're comparing files within the same repository.
- `config.json` tells SVC how to use the pth file.

With SVC running, plop the `G_0000.pth` into `Model Path` on the top left, and `config.json` into `Config Path`.
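
Nothing enforces where the two files live, but keeping each model's pair in its own folder makes switching models less error-prone. One possible layout (the paths and the `my-voice` name are just suggestions):

```bash
# One folder per model, each holding its G_*.pth and matching config.json.
mkdir -p ~/.var/svc-models/my-voice/
# mv ~/Downloads/G_0000.pth ~/Downloads/config.json ~/.var/svc-models/my-voice/
ls ~/.var/svc-models/
```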

### Starting the Voice Changer

The default settings usually work OK. What you want to change is `Pitch`. The right value depends on how high your own voice is compared to the model's voice, so you will need a different `Pitch` setting for different models.

Check `Use GPU` on the bottom center if you want to torture your GPU with your voice. It had better not complain, given how expensive it was.

Click `(Re)Start Voice Changer` to do just that. You also need to click this after changing any settings.

### PipeWire Setup

This covers ALVR only right now. TODO: the rest.

Use `qpwgraph` or `helvum` to:

1. Disconnect `vrserver` from `ALVR-MIC-Sink`.
2. Pipe the output of `vrserver` to the input of `Python3.10`.
3. Pipe the output of `Python3.10` to the input of `ALVR-MIC-Sink`.
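
The same rewiring can also be scripted with `pw-link`, which ships with PipeWire. The port names below are examples only; list the real ones first with `pw-link -o` (outputs) and `pw-link -i` (inputs), and repeat for the `_FR` ports if the nodes are stereo:

```bash
# Example only - substitute the port names your system actually reports.
if command -v pw-link >/dev/null 2>&1; then
    pw-link -d "vrserver:output_FL" "ALVR-MIC-Sink:playback_FL"   # 1. disconnect
    pw-link "vrserver:output_FL" "Python3.10:input_FL"            # 2. vrserver -> SVC
    pw-link "Python3.10:output_FL" "ALVR-MIC-Sink:playback_FL"    # 3. SVC -> mic sink
else
    echo "pw-link not found - is PipeWire installed?"
fi
```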