Compare commits

...

168 Commits

Author SHA1 Message Date
github-actions[bot]
d9d890975a
@mallocfree009 has signed the CLA from Pull Request #1504 2025-05-17 10:32:01 +00:00
wok
8043fce1ce Updated the README files, adding the new features and bug fixes for v.2.0.78-beta. 2025-05-16 01:33:01 +09:00
wok
3677f6e268 Added new features related to the RTX 5090 and updated the README files for each language. 2025-05-03 04:06:57 +09:00
wok
0318700981 update 2025-02-16 01:26:08 +09:00
wok
66cbbeed1a update 2024-11-15 04:10:35 +09:00
wok
b262d28c10 update 2024-11-13 02:01:48 +09:00
wok
38a9164e5c update 2024-11-08 23:26:14 +09:00
wok
e472934bb4 update 2024-11-08 12:34:18 +09:00
wok
6129780229 fix typo 2024-10-08 20:10:41 +09:00
wok
e821960c59 Merge branch 'master' of github.com:w-okada/voice-changer 2024-10-08 14:54:33 +09:00
wok
fa77d69bed update 2024-10-08 14:54:30 +09:00
w-okada
7ab6a63a67
Merge pull request #1347 from QweRezOn/master
Add Russian Readme File
2024-09-15 08:01:23 +09:00
github-actions[bot]
763a6a0763
@QweRezOn has signed the CLA from Pull Request #1347 2024-09-13 17:04:56 +00:00
QweRez
dfbc95bd61
Update README_ru.md 2024-09-13 20:03:50 +03:00
QweRez
33387bd351
Update README.md 2024-09-13 20:02:44 +03:00
QweRez
b02c4f48c3
Create README_dev_ru.md 2024-09-13 20:02:19 +03:00
QweRez
006b9d575c
Update README_dev_en.md 2024-09-13 19:59:08 +03:00
QweRez
4ebcd670e7
Update README_en.md 2024-09-13 19:57:30 +03:00
QweRez
0b5daf162b
Create README_ru.md
add ru
2024-09-13 19:56:56 +03:00
wok
11b5deecb8 update 2024-08-27 09:29:07 +09:00
wok
fd849db239 update 2024-08-21 10:29:31 +09:00
wok
6d9e735883 update 2024-08-18 23:13:17 +09:00
wok
b5d3e5f066 update 2024-08-07 19:51:20 +09:00
wok
a75f87e433 update 2024-08-06 23:47:11 +09:00
wok
285615d67c update 2024-08-01 11:01:20 +09:00
wok
eef8395205 update 2024-07-27 18:14:50 +09:00
wok
465ab1ff23 update 2024-07-21 02:29:03 +09:00
wok
1f51581ae3 update 2024-07-20 05:37:14 +09:00
wok
87b547e724 update 2024-07-20 02:32:21 +09:00
wok
3b83221cec update 2024-07-20 02:30:06 +09:00
wok
f79855f8b2 update 2024-07-10 23:54:40 +09:00
wok
1952c76533 update 2024-06-30 17:07:52 +09:00
wok
92f0b1aaf5 update 2024-06-30 16:17:10 +09:00
wok
ebea9d2692 update 2024-06-29 07:07:58 +09:00
wok
a91ef76b64 update 2024-06-29 07:06:55 +09:00
wok
0cd7f69931 update 2024-06-29 07:05:57 +09:00
wok
b350812083 update 2024-06-29 07:05:30 +09:00
wok
80ccc0b1d7 update 2024-06-29 07:03:40 +09:00
wok
cc60c7adfb update 2024-06-29 07:03:11 +09:00
wok
d61f6b8e99 update 2024-06-29 07:02:35 +09:00
wok
7adc1f1cf5 update 2024-06-29 07:02:04 +09:00
wok
7e177ee84c update 2024-06-29 07:01:26 +09:00
wok
51046638d6 update 2024-06-29 07:00:57 +09:00
wok
2522d44f13 update 2024-06-29 07:00:30 +09:00
wok
018cab3ded update 2024-06-29 07:00:01 +09:00
wok
a1714878a7 update 2024-06-29 06:59:34 +09:00
wok
23b69ba121 update 2024-06-29 06:56:19 +09:00
wok
9f6903e4e9 update 2024-06-29 06:48:05 +09:00
wok
4c59ab5431 update 2024-06-24 03:49:37 +09:00
wok
33d74e8e73 Merge branch 'master' of github.com:w-okada/voice-changer 2024-06-24 03:47:52 +09:00
wok
5f1ca7af51 update 2024-06-24 03:47:25 +09:00
github-actions[bot]
56a5094881
@Nick088Official has signed the CLA from Pull Request #1241 2024-06-15 16:27:47 +00:00
wok
cde810a9d0 add cuda question 2024-06-12 05:01:52 +09:00
wok
73bb47f745 update 2024-06-10 20:09:30 +09:00
wok
349d268189 update 2024-06-05 18:39:35 +09:00
wok
3a8cbb07de update 2024-06-03 20:57:28 +09:00
github-actions[bot]
800285f2cd
@vitaliylag has signed the CLA from Pull Request #1224 2024-06-01 03:14:09 +00:00
github-actions[bot]
d3add2561d
@mrs1669 has signed the CLA from Pull Request #1171 2024-04-04 10:53:26 +00:00
w-okada
621ad25a8a
Merge pull request #1153 from deiteris/harden-security
Harden web server security
2024-04-02 16:04:02 +09:00
Yury
8dd8d7127d Refactor and add origin check to SIO 2024-03-18 22:52:46 +02:00
Yury
ce9b599501 Improve allowed origins input and use set 2024-03-17 16:26:55 +02:00
github-actions[bot]
28fc541891
@deiteris has signed the CLA from Pull Request #1153 2024-03-16 22:24:48 +00:00
Yury
cf2b693334 Harden web server security 2024-03-17 00:11:16 +02:00
w-okada
11672e9653 Merge branch 'master' of github.com:w-okada/voice-changer 2024-03-05 23:47:48 +09:00
w-okada
a42051bb40 update 2024-03-05 23:45:46 +09:00
w-okada
aa620e1cf0
Merge pull request #1141 from richardhbtz/patch-1
Misspelling "trouble"
2024-03-04 10:35:17 +09:00
w-okada
22bd9e3d7c Merge branch 'master' of github.com:w-okada/voice-changer 2024-03-04 10:33:56 +09:00
w-okada
6e774a1458 v.1.5.3.18 2024-03-04 10:33:16 +09:00
Richard Habitzreuter
0e2078a268
Misspelling "trouble" 2024-02-29 16:58:06 -03:00
github-actions[bot]
51233e0cbe
@brandonkovacs has signed the CLA from Pull Request #1137 2024-02-29 02:05:13 +00:00
w-okada
2ac5ec9feb update 2024-02-28 23:23:22 +09:00
w-okada
bc6e8a9c08 update 2024-02-28 23:08:49 +09:00
w-okada
39e0d0cfd6 update 2024-02-21 08:54:39 +09:00
w-okada
a1a3def686 Merge branch 'master' into v.1.5.3 2024-02-21 08:25:58 +09:00
w-okada
67804cad3c
Merge pull request #1092 from tg-develop/master
Bugfix FCPE
2024-02-21 08:25:25 +09:00
w-okada
ce8f843746 Merge branch 'master' into v.1.5.3 2024-02-21 08:22:36 +09:00
Tobias
0b954131b4 Bugfix FCPE 2024-01-21 14:02:35 +01:00
w-okada
927bba6467
Merge pull request #1077 from icecoins/master
implement of the fcpe in RVC
2024-01-18 06:32:12 +09:00
icecoins
8f230e5c45
Update FcpePitchExtractor.py 2024-01-12 02:28:17 +08:00
github-actions[bot]
41238258ba
@icecoins has signed the CLA from Pull Request #1077 2024-01-11 14:05:09 +00:00
icecoins
1cf9be54c7
undo modification 2024-01-11 22:02:36 +08:00
icecoins
303a15fef3
implement fcpe 2024-01-11 21:10:44 +08:00
icecoins
04f93b193f
implement fcpe 2024-01-11 21:09:57 +08:00
icecoins
fbf69cda19
implement fcpe 2024-01-11 21:08:47 +08:00
icecoins
8e42927880
implement fcpe 2024-01-11 21:07:38 +08:00
icecoins
4e254e42f7
implement fcpe 2024-01-11 21:07:03 +08:00
icecoins
cc72b93198
implement fcpe 2024-01-11 21:05:57 +08:00
icecoins
cc4783b85c
implement fcpe 2024-01-11 21:04:54 +08:00
icecoins
5fd31999e7
implement fcpe 2024-01-11 21:04:15 +08:00
icecoins
9f9e7016e2
Update GUI.json 2024-01-11 21:03:41 +08:00
icecoins
b96ba86be3
Update README.md 2024-01-11 21:00:50 +08:00
icecoins
98ee26e353
Update README.md 2024-01-11 20:58:49 +08:00
icecoins
e8244d61b7
Update README.md 2024-01-11 20:57:50 +08:00
github-actions[bot]
87d2382828
@sonphantrung has signed the CLA from Pull Request #1063 2024-01-04 08:20:51 +00:00
github-actions[bot]
03caf942b2
@Poleyn has signed the CLA from Pull Request #1057 2024-01-01 17:42:14 +00:00
w-okada
b215f3ba84 Modification:
- Timer update
  - Diffusion SVC Performance monitor
2023-12-21 04:11:25 +09:00
w-okada
0f0225cfcd update 2023-12-03 03:31:27 +09:00
w-okada
afb13bf976 bugfix: macos model_stati_dir 2023-12-03 02:50:51 +09:00
w-okada
06b8cf78d1 bugfix:
- clear setting
improve:
  - file sanitizer
change:
  - default input chunk size: 192.
    - decided by this chart (https://rentry.co/VoiceChangerGuide#gpu-chart-for-known-working-chunkextra)
2023-12-03 02:02:28 +09:00
w-okada
c2b979a05f Merge branch 'master' of github.com:w-okada/voice-changer 2023-11-29 06:07:18 +09:00
w-okada
4a967c0b80 update 2023-11-29 05:42:58 +09:00
w-okada
ceb7d88cd9 update 2023-11-29 05:18:53 +09:00
w-okada
17597fdaab Add chihaya_jinja_sample
Web Edition improvement(16k test)

bugfix:
- merge slot
- servermode append error
2023-11-29 00:30:52 +09:00
w-okada
702d468d2f
Merge pull request #896 from hinabl/master
Added Auto Sampling Rate
2023-11-27 11:59:57 +09:00
Hina
f3d19fe95f
Removed Test dir
Finished test
2023-11-27 10:35:21 +08:00
Hina
69e81c4587 Added "some" weights.gg support on upload cell 2023-11-27 10:33:32 +08:00
github-actions[bot]
6ab743a2e2
@shdancer has signed the CLA from Pull Request #1017 2023-11-24 07:26:00 +00:00
w-okada
b24c781a72 async internal process 2023-11-23 16:44:23 +09:00
Hina
be3eb4033d Added Audio Notification 2023-11-23 15:43:02 +08:00
w-okada
0e7c0daebc refactor webedition flag 2023-11-23 08:10:29 +09:00
w-okada
3aa86f1e5a update 2023-11-23 07:54:35 +09:00
w-okada
08b3f25f0b Improve Device Detection 2023-11-23 07:53:14 +09:00
w-okada
ecf1976837 Standard Edition: avoid loading process.js 2023-11-23 06:46:12 +09:00
w-okada
6fd61b9591 improve web edition gui 2023-11-23 06:20:54 +09:00
Hina
a216d4bc9d Kaggle Notebook | Public W-okada Voice Changer . | Version 2 2023-11-22 12:25:21 +08:00
Hina
935d817f6f Moved To new Link 2023-11-22 12:23:31 +08:00
w-okada
b895bdec4f WEB Edition icon bugfix 2023-11-22 10:06:38 +09:00
w-okada
a5c665c275 update 2023-11-22 07:10:44 +09:00
w-okada
82de23bb1a WIP:WebEdition GUI Improve 2023-11-22 07:10:34 +09:00
w-okada
ab837561d9 WIP:WEB version control 2023-11-22 05:53:15 +09:00
w-okada
14c73a71d2 - Model cache support
- upkey support
- Added model list
2023-11-21 23:10:43 +09:00
Hina
3e39b99ed7 Made Upload By Link smaller 2023-11-21 21:11:46 +08:00
Hina
df49eac1da Added Scuffed weights.gg model import (only works with model upload to huggingface) 2023-11-21 19:25:56 +08:00
w-okada
b8640f1f5a tooltip z-index 2023-11-21 11:45:38 +09:00
w-okada
db9e02cf09 add warmup progress 2023-11-21 11:45:27 +09:00
w-okada
f529331698
Merge pull request #1006 from tg-develop/AMD-Linux-Setup
Added guide for AMD Linux Setup
2023-11-21 04:03:11 +09:00
w-okada
3186e4322b Merge branch 'master' of github.com:w-okada/voice-changer 2023-11-20 07:43:53 +09:00
w-okada
6bda815669 WIP LLVC 2023-11-20 07:42:07 +09:00
w-okada
c2b031efa5 v.1.5.3.17 2023-11-20 05:38:46 +09:00
w-okada
d44119f9bf Beatrice Speaker graph
WIP: Web version
2023-11-19 22:28:38 +09:00
w-okada
079043ff6a beatrice speaker graph 2023-11-19 20:20:48 +09:00
github-actions[bot]
b6e743f032
@tg-develop has signed the CLA from Pull Request #1006 2023-11-18 10:51:21 +00:00
tg-develop
818f2470e3 Added guide for AMD Linux Setup 2023-11-18 11:49:31 +01:00
Hina
f36138d64e Update 2023-11-14 17:45:58 +08:00
Hina
1da0717a45 Fixed Kaggle Realtime VoiceChanger 2023-11-14 17:05:24 +08:00
w-okada
f15289ad98 WIP:WEBver 2023-11-14 08:17:27 +09:00
w-okada
958b03bd5a bugfix: timer update 2023-11-14 05:39:07 +09:00
Hina
218872ba75
Make Kaggle Realtime Voice Changer 2023-11-13 19:59:42 +08:00
Hina
53ea4cef57 Cleaned Some Cells 2023-11-13 01:25:20 +08:00
w-okada
85dfaff25f bugfix rest 32bit -> 16bit 2023-11-13 00:38:43 +09:00
w-okada
dadab1ad13 Experimental LLVC 2023-11-12 23:10:58 +09:00
w-okada
1e68e01e39 add handling setSinkID 2023-11-11 00:05:40 +09:00
Hina
da9e6aaf6b Removed "Under Construction" 2023-11-10 21:55:42 +08:00
w-okada
a35551906a update 2023-11-08 21:15:31 +09:00
w-okada
ca1cf9ed21 update 2023-11-08 21:11:22 +09:00
w-okada
b69496c0f3 update 2023-11-08 19:59:24 +09:00
w-okada
d03132d2ab bugfix: beatrice load 2023-11-08 19:54:13 +09:00
w-okada
3512bbb1eb Merge branch 'master' into v.1.5.3 2023-11-04 17:19:03 +09:00
Hina
ee827731b6 Merged Google Drive with Clone and install 2023-11-03 11:31:03 +08:00
Hina
5de0630bb4 Added Google Colab on version text 2023-11-03 11:01:48 +08:00
Hina
6d20b3dad2 Updated to Rafa's latest Voice Changer Colab 2023-11-03 10:17:57 +08:00
w-okada
b624999636
Merge pull request #978 from qlife1146/master
Korean translation, typo fixes, and content additions
2023-11-03 10:27:39 +09:00
Luca Park
eb30f9a4e1 Korean translation and typo fixes 2023-11-03 03:41:16 +09:00
Hina
497c7c0678 Created using Colaboratory 2023-11-02 21:54:22 +08:00
w-okada
a3160c12af
Merge pull request #976 from qlife1146/master
Added Korean translation
2023-11-02 10:52:19 +09:00
github-actions[bot]
59c80ca856
@qlife1146 has signed the CLA from Pull Request #976 2023-11-02 01:46:14 +00:00
Luca Park
23be190e21 Added files (U)
trouble_shoot_communication_ko.md
tutorial_rvc_ko_latest.md
tutorial_monitor_consept_ko.md

Renamed files
tutorial_rvc_en_1_5_3_7.md -> tutorial_rvc_en_1_5_3_7.md
Changed full-width digits to half-width digits
tutorial_device_mode.md -> tutorial_device_mode_ja.md
Added the language to the title to support a wider range of translations

File content fix 1: updated all tutorial_device_mode links to tutorial_device_mode_ja
tutorial_rvc_en_1_5_3_1.md
tutorial_rvc_en_1_5_3_3.md
tutorial_rvc_en_1_5_3_7.md
tutorial_rvc_ja_1_5_3_1.md
tutorial_rvc_ja_1_5_3_3.md
tutorial_rvc_ja_1_5_3_7.md

File content fix 2: added languages
tutorial_rvc_en_latest.md
tutorial_rvc_ja_latest.md

File content fix 3: VCClient -> VC Client
tutorial_rvc_en_1_5_3_7.md
tutorial_rvc_en_latest.md
tutorial_device_mode_ja.md
tutorial_monitor_consept_ja.md
tutorial_rvc_ja_1_5_3_7.md
tutorial_rvc_ja_latest.md
trouble_shoot_communication_ja.md

File content fix 4: spelling claer -> clear
tutorial_rvc_en_latest.md
tutorial_rvc_ja_latest.md
2023-11-02 03:31:20 +09:00
Hina
f86bee6768 Fixed libportaudio missing 2023-11-01 22:18:42 +08:00
Hina
65cde67b49 Using Rafa's Install 2023-11-01 13:00:22 +08:00
Hina
5c84c4cb91 Removed packages from requirements that are not needed or already installed (first batch) 2023-10-30 19:49:54 +08:00
Hina
6094be47f2 Updated Credits and Info 2023-10-25 10:31:14 +08:00
w-okada
ba96930432
Update issue.yaml 2023-10-14 04:28:43 +09:00
Hina
c0db39990d Background WEEEE 2023-10-13 19:18:07 +08:00
Hina
22b0f83992 Created using Colaboratory 2023-10-04 00:12:43 +08:00
Hina
ae52548113 Removed localtunnel 2023-10-03 16:48:48 +08:00
Hina
bfd7f5cef7 Created using Colaboratory 2023-09-29 00:55:14 +08:00
Hina
8d3a0f8c73 Created using Colaboratory 2023-09-29 00:44:13 +08:00
Hina
889874ecaf
Added Auto Sampling Rate
The Model Uploader Colab Cell is probably more buggy now but it works....
2023-09-28 19:18:21 +08:00
150 changed files with 15297 additions and 4249 deletions


@@ -1,7 +1,6 @@
name: Issue or Bug Report
name: Issue or Bug Report for v.1.x.x.x
description: Please provide as much detail as possible to convey the history of your problem.
title: "[ISSUE]: "
placeholder: "[ISSUE]: Please provide title"
body:
- type: markdown
attributes:


@@ -0,0 +1,82 @@
name: Issue or Bug Report for v.2.x.x
description: Please provide as much detail as possible to convey the history of your problem.
title: "[ISSUE for v2]: "
body:
- type: markdown
attributes:
value: Please read our [FAQ](https://github.com/w-okada/voice-changer/blob/master/.github/FAQ.md) before making a bug report!
- type: input
id: vc-client-version
attributes:
label: Voice Changer Version
description: Downloaded File Name (.zip)
placeholder: vcclient_win_std_x.y.x.zip, vcclient_win_cuda_torch_cuda_x.y.x.zip, or so
validations:
required: true
- type: input
id: OS
attributes:
label: Operating System
description: e.g. Windows 10, Ubuntu 20.04, macOS Ventura, macOS Monterey, etc.
placeholder: Windows 10
validations:
required: true
- type: input
id: GPU
attributes:
label: GPU
description: If you have no GPU, please input none.
validations:
required: true
- type: input
id: CUDA
attributes:
label: CUDA Version
description: If you have an NVIDIA GPU, please input your CUDA version. Otherwise, please input none.
validations:
required: true
- type: checkboxes
id: checks
attributes:
label: Read carefully and check the options
options:
- label: If you use the win_cuda_torch_cuda edition, have you set up CUDA? [see here](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements)
- label: If you use the win_cuda edition, have you set up CUDA and cuDNN? [see here](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements)
- label: If you use the mac edition, the client is not launched automatically. Have you used Chrome to open the application?
- label: I've tried to change the Chunk Size
- label: I've tried to set the Index to zero
- label: I've read the [tutorial](https://github.com/w-okada/voice-changer/blob/master/tutorials/tutorial_rvc_en_latest.md)
- label: I've tried to extract to another folder (or re-extract) the .zip file
- type: dropdown
id: sample-model-work
attributes:
label: Does pre-installed model work?
options:
- "No"
- "YES"
default: 0
- type: input
id: vc-type
attributes:
label: Model Type
description: MMVC, so-vits-rvc, RVC, DDSP-SVC
placeholder: RVC
validations:
required: true
- type: textarea
id: issue
attributes:
label: Issue Description
description: Please provide as much reproducible information and logs as possible
- type: textarea
id: capture
attributes:
label: Application Screenshot
description: Please provide a screenshot of your application so we can see your settings (you can paste or drag-n-drop)
- type: textarea
id: logs-on-terminal
attributes:
label: Logs on console
description: Copy and paste the log on your console here
validations:
required: true

.gitignore (5 lines changed)

@@ -37,7 +37,9 @@ server/memo.md
client/lib/dist
client/lib/worklet/dist
client/demo/public/models
client/demo/public/models_
client/demo/dist/models
client/demo/dist_web
client/demo/src/001_provider/backup
# client/demo/dist/ # demo用に残す
@@ -56,6 +58,9 @@ server/samples_0003_o.json
server/samples_0003_t2.json
server/samples_0003_o2.json
server/samples_0003_d2.json
server/samples_0004_t.json
server/samples_0004_o.json
server/samples_0004_d.json
server/test_official_v1_v2.json
server/test_ddpn_v1_v2.json

File diff suppressed because one or more lines are too long


@@ -1,6 +1,6 @@
{
"cells": [
{
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
@@ -30,7 +30,8 @@
"> Seems that PTH models performance better than ONNX for now, you can still try ONNX models and see if it satisfies you\n",
"\n",
"\n",
"*You can always [click here](https://github.com/YunaOneeChan/Voice-Changer-Settings) to check if these settings are up-to-date*\n",
"*You can always [click here](https://rentry.co/VoiceChangerGuide#gpu-chart-for-known-working-chunkextra\n",
") to check if these settings are up-to-date*\n",
"<br><br>\n",
"\n",
"---\n",
@@ -46,7 +47,7 @@
"# **Credits and Support**\n",
"Realtime Voice Changer by [w-okada](https://github.com/w-okada)\\\n",
"Colab files updated by [rafacasari](https://github.com/Rafacasari)\\\n",
"Recommended settings by [YunaOneeChan](https://github.com/YunaOneeChan)\\\n",
"Recommended settings by [Raven](https://github.com/ravencutie21)\\\n",
"Modified again by [Hina](https://huggingface.co/HinaBl)\n",
"\n",
"Need help? [AI Hub Discord](https://discord.gg/aihub) » ***#help-realtime-vc***\n",
@@ -54,26 +55,6 @@
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "RhdqDSt-LfGr"
},
"outputs": [],
"source": [
"# @title **[Optional]** Connect to Google Drive\n",
"# @markdown Using Google Drive can improve load times a bit and your models will be stored, so you don't need to re-upload every time that you use.\n",
"import os\n",
"from google.colab import drive\n",
"\n",
"if not os.path.exists('/content/drive'):\n",
" drive.mount('/content/drive')\n",
"\n",
"%cd /content/drive/MyDrive"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -83,8 +64,9 @@
},
"outputs": [],
"source": [
"#=================Updated=================\n",
"# @title **[1]** Clone repository and install dependencies\n",
"# @markdown This first step will download the latest version of Voice Changer and install the dependencies. **It will take around 2 minutes to complete.**\n",
"# @markdown This first step will download the latest version of Voice Changer and install the dependencies. **It can take some time to complete.**\n",
"import os\n",
"import time\n",
"import subprocess\n",
@@ -93,12 +75,28 @@
"import base64\n",
"import codecs\n",
"\n",
"from IPython.display import clear_output, Javascript\n",
"\n",
"\n",
"#@markdown ---\n",
"# @title **[Optional]** Connect to Google Drive\n",
"# @markdown Using Google Drive can improve load times a bit and your models will be stored, so you don't need to re-upload every time that you use.\n",
"\n",
"Use_Drive=False #@param {type:\"boolean\"}\n",
"\n",
"from google.colab import drive\n",
"\n",
"if Use_Drive==True:\n",
" if not os.path.exists('/content/drive'):\n",
" drive.mount('/content/drive')\n",
"\n",
" %cd /content/drive/MyDrive\n",
"\n",
"\n",
"externalgit=codecs.decode('uggcf://tvguho.pbz/j-bxnqn/ibvpr-punatre.tvg','rot_13')\n",
"rvctimer=codecs.decode('uggcf://tvguho.pbz/uvanoy/eipgvzre.tvg','rot_13')\n",
"pathloc=codecs.decode('ibvpr-punatre','rot_13')\n",
"!git clone --depth 1 $externalgit &> /dev/null\n",
"\n",
"from IPython.display import clear_output, Javascript\n",
"\n",
"def update_timer_and_print():\n",
" global timer\n",
@@ -112,165 +110,114 @@
"timer = 0\n",
"threading.Thread(target=update_timer_and_print, daemon=True).start()\n",
"\n",
"# os.system('cls')\n",
"clear_output()\n",
"!rm -rf rvctimer\n",
"!git clone --depth 1 $rvctimer\n",
"!cp -f rvctimer/index.html $pathloc/client/demo/dist/\n",
"\n",
"!pip install colorama --quiet\n",
"from colorama import Fore, Style\n",
"\n",
"print(f\"{Fore.CYAN}> Cloning the repository...{Style.RESET_ALL}\")\n",
"!git clone --depth 1 $externalgit &> /dev/null\n",
"print(f\"{Fore.GREEN}> Successfully cloned the repository!{Style.RESET_ALL}\")\n",
"%cd $pathloc/server/\n",
"\n",
"print(\"\\033[92mSuccessfully cloned the repository\")\n",
"# Read the content of the file\n",
"file_path = '../client/demo/dist/assets/gui_settings/version.txt'\n",
"\n",
"with open(file_path, 'r') as file:\n",
" file_content = file.read()\n",
"\n",
"# Replace the specific text\n",
"text_to_replace = \"-.-.-.-\"\n",
"new_text = \"Google.Colab\" # New text to replace the specific text\n",
"\n",
"modified_content = file_content.replace(text_to_replace, new_text)\n",
"\n",
"# Write the modified content back to the file\n",
"with open(file_path, 'w') as file:\n",
" file.write(modified_content)\n",
"\n",
"print(f\"Text '{text_to_replace}' has been replaced with '{new_text}' in the file.\")\n",
"\n",
"print(f\"{Fore.CYAN}> Installing libportaudio2...{Style.RESET_ALL}\")\n",
"!apt-get -y install libportaudio2 -qq\n",
"\n",
"!sed -i '/torch==/d' requirements.txt\n",
"!sed -i '/torchaudio==/d' requirements.txt\n",
"!sed -i '/numpy==/d' requirements.txt\n",
"\n",
"\n",
"\n",
"!apt-get install libportaudio2 &> /dev/null --quiet\n",
"!pip install pyworld onnxruntime-gpu uvicorn faiss-gpu fairseq jedi google-colab moviepy decorator==4.4.2 sounddevice numpy==1.23.5 pyngrok --quiet\n",
"print(\"\\033[92mInstalling Requirements!\")\n",
"print(f\"{Fore.CYAN}> Installing pre-dependencies...{Style.RESET_ALL}\")\n",
"# Install dependencies that are missing from requirements.txt and pyngrok\n",
"!pip install faiss-gpu fairseq pyngrok --quiet\n",
"!pip install pyworld --no-build-isolation --quiet\n",
"# Install webstuff\n",
"import asyncio\n",
"import re\n",
"!pip install playwright\n",
"!playwright install\n",
"!playwright install-deps\n",
"!pip install nest_asyncio\n",
"from playwright.async_api import async_playwright\n",
"print(f\"{Fore.CYAN}> Installing dependencies from requirements.txt...{Style.RESET_ALL}\")\n",
"!pip install -r requirements.txt --quiet\n",
"clear_output()\n",
"!pip install -r requirements.txt --no-build-isolation --quiet\n",
"# Maybe install Tensor packages?\n",
"#!pip install torch-tensorrt\n",
"#!pip install TensorRT\n",
"print(\"\\033[92mSuccessfully installed all packages!\")\n",
"# os.system('cls')\n",
"clear_output()\n",
"print(\"\\033[92mFinished, please continue to the next cell\")"
"print(f\"{Fore.GREEN}> Successfully installed all packages!{Style.RESET_ALL}\")"
]
},
{
"cell_type": "code",
"source": [
"\n",
"#@title #**[Optional]** Upload a voice model (Run this before running the Voice Changer)**[Currently Under Construction]**\n",
"#@markdown ---\n",
"#@title **[Optional]** Upload a voice model (Run this before running the Voice Changer)\n",
"import os\n",
"import json\n",
"from IPython.display import Image\n",
"import requests\n",
"\n",
"model_slot = \"0\" #@param ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100', '101', '102', '103', '104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129', '130', '131', '132', '133', '134', '135', '136', '137', '138', '139', '140', '141', '142', '143', '144', '145', '146', '147', '148', '149', '150', '151', '152', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181', '182', '183', '184', '185', '186', '187', '188', '189', '190', '191', '192', '193', '194', '195', '196', '197', '198', '199']\n",
"\n",
"#@markdown #Model Number `(Default is 0)` you can add multiple models as long as you change the number!\n",
"model_number = \"0\" #@param ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100', '101', '102', '103', '104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129', '130', '131', '132', '133', '134', '135', '136', '137', '138', '139', '140', '141', '142', '143', '144', '145', '146', '147', '148', '149', '150', '151', '152', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181', '182', '183', '184', '185', '186', '187', '188', '189', '190', '191', '192', '193', '194', '195', '196', '197', '198', '199']\n",
"\n",
"!rm -rf model_dir/$model_number\n",
"#@markdown ---\n",
"#@markdown #**[Optional]** Add an icon to the model `(can be any image/leave empty for no image)`\n",
"icon_link = \"https://cdn.donmai.us/original/8a/92/8a924397e9aac922e94bdc1f28ff978a.jpg\" #@param {type:\"string\"}\n",
"#@markdown ---\n",
"!rm -rf model_dir/$model_slot\n",
"#@markdown **[Optional]** Add an icon to the model\n",
"icon_link = \"https://cdn.donmai.us/sample/12/57/__rin_penrose_idol_corp_drawn_by_juu_ame__sample-12579843de9487cf2db82058ba5e77d4.jpg\" #@param {type:\"string\"}\n",
"icon_link = '\"'+icon_link+'\"'\n",
"!mkdir model_dir\n",
"!mkdir model_dir/$model_number\n",
"#@markdown #Put your model's download link here `(must be a zip file)`\n",
"model_link = \"https://huggingface.co/HinaBl/Akatsuki/resolve/main/akatsuki_200epoch.zip\" #@param {type:\"string\"}\n",
"!mkdir model_dir/$model_slot\n",
"#@markdown Put your model's download link here `(must be a zip file)` only supports **weights.gg** & **huggingface.co**\n",
"model_link = \"https://huggingface.co/HinaBl/Rin-Penrose/resolve/main/RinPenrose600.zip?download=true\" #@param {type:\"string\"}\n",
"\n",
"if model_link.startswith(\"https://www.weights.gg\") or model_link.startswith(\"https://weights.gg\"):\n",
" weights_code = requests.get(\"https://pastebin.com/raw/ytHLr8h0\").text\n",
" exec(weights_code)\n",
"else:\n",
" model_link = model_link\n",
"\n",
"model_link = '\"'+model_link+'\"'\n",
"!curl -L $model_link > model.zip\n",
"\n",
"\n",
"# Conditionally set the iconFile based on whether icon_link is empty\n",
"if icon_link:\n",
" iconFile = \"icon.png\"\n",
" !curl -L $icon_link > model_dir/$model_number/icon.png\n",
" !curl -L $icon_link > model_dir/$model_slot/icon.png\n",
"else:\n",
" iconFile = \"\"\n",
" print(\"icon_link is empty, so no icon file will be downloaded.\")\n",
"#@markdown ---\n",
"\n",
"!unzip model.zip -d model_dir/$model_slot\n",
"\n",
"!unzip model.zip -d model_dir/$model_number\n",
"\n",
"# Checks all the files in model_number and puts it outside of it\n",
"\n",
"!mv model_dir/$model_number/*/* model_dir/$model_number/\n",
"!rm -rf model_dir/$model_number/*/\n",
"\n",
"# if theres a folder in the number,\n",
"# take all the files in the folder and put it outside of that folder\n",
"\n",
"\n",
"#@markdown #**Model Voice Convertion Setting**\n",
"!mv model_dir/$model_slot/*/* model_dir/$model_slot/\n",
"!rm -rf model_dir/$model_slot/*/\n",
"#@markdown **Model Voice Convertion Setting**\n",
"Tune = 12 #@param {type:\"slider\",min:-50,max:50,step:1}\n",
"Index = 0 #@param {type:\"slider\",min:0,max:1,step:0.1}\n",
"#@markdown ---\n",
"#@markdown #Parameter Option `(Ignore if theres a Parameter File)`\n",
"Slot_Index = -1 #@param [-1,0,1] {type:\"raw\"}\n",
"Sampling_Rate = 48000 #@param [32000,40000,48000] {type:\"raw\"}\n",
"\n",
"# @markdown #**[Optional]** Parameter file for your voice model\n",
"#@markdown _(must be named params.json)_ (Leave Empty for Default)\n",
"param_link = \"\" #@param {type:\"string\"}\n",
"param_link = \"\"\n",
"if param_link == \"\":\n",
" model_dir = \"model_dir/\"+model_number+\"/\"\n",
" paramset = requests.get(\"https://pastebin.com/raw/SAKwUCt1\").text\n",
" exec(paramset)\n",
"\n",
" # Find the .pth and .index files in the model_dir/0 directory\n",
" pth_files = [f for f in os.listdir(model_dir) if f.endswith(\".pth\")]\n",
" index_files = [f for f in os.listdir(model_dir) if f.endswith(\".index\")]\n",
"\n",
" if pth_files and index_files:\n",
" # Take the first .pth and .index file as model and index names\n",
" model_name = pth_files[0].replace(\".pth\", \"\")\n",
" index_name = index_files[0].replace(\".index\", \"\")\n",
" else:\n",
" # Set default values if no .pth and .index files are found\n",
" model_name = \"Null\"\n",
" index_name = \"Null\"\n",
"\n",
" # Define the content for params.json\n",
" params_content = {\n",
" \"slotIndex\": Slot_Index,\n",
" \"voiceChangerType\": \"RVC\",\n",
" \"name\": model_name,\n",
" \"description\": \"\",\n",
" \"credit\": \"\",\n",
" \"termsOfUseUrl\": \"\",\n",
" \"iconFile\": iconFile,\n",
" \"speakers\": {\n",
" \"0\": \"target\"\n",
" },\n",
" \"modelFile\": f\"{model_name}.pth\",\n",
" \"indexFile\": f\"{index_name}.index\",\n",
" \"defaultTune\": Tune,\n",
" \"defaultIndexRatio\": Index,\n",
" \"defaultProtect\": 0.5,\n",
" \"isONNX\": False,\n",
" \"modelType\": \"pyTorchRVCv2\",\n",
" \"samplingRate\": Sampling_Rate,\n",
" \"f0\": True,\n",
" \"embChannels\": 768,\n",
" \"embOutputLayer\": 12,\n",
" \"useFinalProj\": False,\n",
" \"deprecated\": False,\n",
" \"embedder\": \"hubert_base\",\n",
" \"sampleId\": \"\"\n",
" }\n",
"\n",
" # Write the content to params.json\n",
" with open(f\"{model_dir}/params.json\", \"w\") as param_file:\n",
" json.dump(params_content, param_file)\n",
"\n",
"# !unzip model.zip -d model_dir/0/\n",
"clear_output()\n",
"print(\"\\033[92mModel with the name of \"+model_name+\" has been Imported!\")\n"
"print(\"\\033[93mModel with the name of \"+model_name+\" has been Imported to slot \"+model_slot)"
],
"metadata": {
"cellView": "form",
"id": "_ZtbKUVUgN3G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title Delete a model\n",
"#@markdown ---\n",
"#@markdown Select which slot you want to delete\n",
"Delete_Slot = \"0\" #@param ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100', '101', '102', '103', '104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129', '130', '131', '132', '133', '134', '135', '136', '137', '138', '139', '140', '141', '142', '143', '144', '145', '146', '147', '148', '149', '150', '151', '152', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181', '182', '183', '184', '185', '186', '187', '188', '189', '190', '191', '192', '193', '194', '195', '196', '197', '198', '199']\n",
"{type:\"slider\",min:0,max:1,step:0.1}\n",
"\n",
"!rm -rf model_dir/$Model_Number\n",
"print(\"\\033[92mSuccessfully removed Model is slot \"+Delete_Slot)\n"
],
"metadata": {
"id": "P9g6rG1-KUwt"
"id": "_ZtbKUVUgN3G",
"cellView": "form"
},
"execution_count": null,
"outputs": []
@@ -279,71 +226,79 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lLWQuUd7WW9U"
"id": "lLWQuUd7WW9U",
"cellView": "form"
},
"outputs": [],
"source": [
"# @title **[2]** Start Server **using ngrok** (Recommended | **need a ngrok account**)\n",
"\n",
"#=======================Updated=========================\n",
"\n",
"# @title Start Server **using ngrok**\n",
"# @markdown This cell will start the server, the first time that you run it will download the models, so it can take a while (~1-2 minutes)\n",
"\n",
"# @markdown ---\n",
"# @markdown You'll need a ngrok account, but **it's free**!\n",
"# @markdown You'll need a ngrok account, but <font color=green>**it's free**</font> and easy to create!\n",
"# @markdown ---\n",
"# @markdown **1** - Create a **free** account at [ngrok](https://dashboard.ngrok.com/signup)\\\n",
"# @markdown **2** - If you didn't logged in with Google or Github, you will need to **verify your e-mail**!\\\n",
"# @markdown **3** - Click [this link](https://dashboard.ngrok.com/get-started/your-authtoken) to get your auth token, copy it and place it here:\n",
"from pyngrok import conf, ngrok\n",
"\n",
"f0_det= \"rmvpe_onnx\" #@param [\"rmvpe_onnx\",\"rvc\"]\n",
"Token = 'YOUR_TOKEN_HERE' # @param {type:\"string\"}\n",
"# @markdown **4** - Still need further tests, but maybe region can help a bit on latency?\\\n",
"# @markdown **1** - Create a <font color=green>**free**</font> account at [ngrok](https://dashboard.ngrok.com/signup) or **login with Google/Github account**\\\n",
"# @markdown **2** - If you didn't logged in with Google/Github, you will need to **verify your e-mail**!\\\n",
"# @markdown **3** - Click [this link](https://dashboard.ngrok.com/get-started/your-authtoken) to get your auth token, and place it here:\n",
"Token = 'TOKEN_HERE' # @param {type:\"string\"}\n",
"# @markdown **4** - *(optional)* Change to a region near to you or keep at United States if increase latency\\\n",
"# @markdown `Default Region: us - United States (Ohio)`\n",
"Region = \"ap - Asia/Pacific (Singapore)\" # @param [\"ap - Asia/Pacific (Singapore)\", \"au - Australia (Sydney)\",\"eu - Europe (Frankfurt)\", \"in - India (Mumbai)\",\"jp - Japan (Tokyo)\",\"sa - South America (Sao Paulo)\", \"us - United States (Ohio)\"]\n",
"MyConfig = conf.PyngrokConfig()\n",
"Region = \"us - United States (Ohio)\" # @param [\"ap - Asia/Pacific (Singapore)\", \"au - Australia (Sydney)\",\"eu - Europe (Frankfurt)\", \"in - India (Mumbai)\",\"jp - Japan (Tokyo)\",\"sa - South America (Sao Paulo)\", \"us - United States (Ohio)\"]\n",
"\n",
"#@markdown **5** - *(optional)* Other options:\n",
"ClearConsole = True # @param {type:\"boolean\"}\n",
"Play_Notification = True # @param {type:\"boolean\"}\n",
"\n",
"# ---------------------------------\n",
"# DO NOT TOUCH ANYTHING DOWN BELOW!\n",
"# ---------------------------------\n",
"\n",
"%cd $pathloc/server/\n",
"\n",
"from pyngrok import conf, ngrok\n",
"MyConfig = conf.PyngrokConfig()\n",
"MyConfig.auth_token = Token\n",
"MyConfig.region = Region[0:2]\n",
"\n",
"conf.get_default().authtoken = Token\n",
"conf.get_default().region = Region[0:2]\n",
"\n",
"#conf.get_default().authtoken = Token\n",
"#conf.get_default().region = Region\n",
"conf.set_default(MyConfig);\n",
"\n",
"# @markdown ---\n",
"# @markdown If you want to automatically clear the output when the server loads, check this option.\n",
"Clear_Output = True # @param {type:\"boolean\"}\n",
"\n",
"mainpy=codecs.decode('ZZIPFreireFVB.cl','rot_13')\n",
"\n",
"import portpicker, socket, urllib.request\n",
"PORT = portpicker.pick_unused_port()\n",
"import subprocess, threading, time, socket, urllib.request\n",
"PORT = 8000\n",
"\n",
"from pyngrok import ngrok\n",
"# Edited ⏬⏬\n",
"ngrokConnection = ngrok.connect(PORT)\n",
"public_url = ngrokConnection.public_url\n",
"\n",
"def iframe_thread(port):\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', port))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
" clear_output()\n",
" print(\"------- SERVER READY! -------\")\n",
" print(\"Your server is available at:\")\n",
" print(public_url)\n",
" print(\"-----------------------------\")\n",
" # display(Javascript('window.open(\"{url}\", \\'_blank\\');'.format(url=public_url)))\n",
"\n",
"print(PORT)\n",
"from IPython.display import clear_output\n",
"from IPython.display import Audio, display\n",
"def play_notification_sound():\n",
" display(Audio(url='https://raw.githubusercontent.com/hinabl/rmvpe-ai-kaggle/main/custom/audios/notif.mp3', autoplay=True))\n",
"\n",
"\n",
"def wait_for_server():\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', PORT))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
" if ClearConsole:\n",
" clear_output()\n",
" print(\"--------- SERVER READY! ---------\")\n",
" print(\"Your server is available at:\")\n",
" print(public_url)\n",
" print(\"---------------------------------\")\n",
" if Play_Notification==True:\n",
" play_notification_sound()\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n",
"threading.Thread(target=wait_for_server, daemon=True).start()\n",
"\n",
"mainpy=codecs.decode('ZZIPFreireFVB.cl','rot_13')\n",
"\n",
"!python3 $mainpy \\\n",
" -p {PORT} \\\n",
@@ -360,74 +315,27 @@
" --rmvpe pretrain/rmvpe.pt \\\n",
" --model_dir model_dir \\\n",
" --samples samples.json\n",
"\n"
"\n",
"ngrok.disconnect(ngrokConnection.public_url)"
]
},
{
"cell_type": "code",
"cell_type": "markdown",
"source": [
"# @title **[Optional]** Start Server **using localtunnel** (ngrok alternative | no account needed)\n",
"# @markdown This cell will start the server, the first time that you run it will download the models, so it can take a while (~1-2 minutes)\n",
"\n",
"# @markdown ---\n",
"!npm config set update-notifier false\n",
"!npm install -g localtunnel\n",
"print(\"\\033[92mLocalTunnel installed!\")\n",
"# @markdown If you want to automatically clear the output when the server loads, check this option.\n",
"Clear_Output = True # @param {type:\"boolean\"}\n",
"\n",
"import portpicker, subprocess, threading, time, socket, urllib.request\n",
"PORT = portpicker.pick_unused_port()\n",
"\n",
"from IPython.display import clear_output, Javascript\n",
"\n",
"def iframe_thread(port):\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', port))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
" clear_output()\n",
" print(\"Use the following endpoint to connect to localtunnel:\", urllib.request.urlopen('https://ipv4.icanhazip.com').read().decode('utf8').strip(\"\\n\"))\n",
" p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n",
" for line in p.stdout:\n",
" print(line.decode(), end='')\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n",
"\n",
"\n",
"!python3 MMVCServerSIO.py \\\n",
" -p {PORT} \\\n",
" --https False \\\n",
" --content_vec_500 pretrain/checkpoint_best_legacy_500.pt \\\n",
" --content_vec_500_onnx pretrain/content_vec_500.onnx \\\n",
" --content_vec_500_onnx_on true \\\n",
" --hubert_base pretrain/hubert_base.pt \\\n",
" --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \\\n",
" --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \\\n",
" --nsf_hifigan pretrain/nsf_hifigan/model \\\n",
" --crepe_onnx_full pretrain/crepe_onnx_full.onnx \\\n",
" --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \\\n",
" --rmvpe pretrain/rmvpe.pt \\\n",
" --model_dir model_dir \\\n",
" --samples samples.json \\\n",
" --colab True"
"![](https://i.pinimg.com/474x/de/72/9e/de729ecfa41b69901c42c82fff752414.jpg)\n",
"![](https://i.pinimg.com/474x/de/72/9e/de729ecfa41b69901c42c82fff752414.jpg)"
],
"metadata": {
"cellView": "form",
"id": "ZwZaCf4BeZi2"
},
"execution_count": null,
"outputs": []
"id": "2Uu1sTSwTc7q"
}
}
],
"metadata": {
"colab": {
"provenance": [],
"private_outputs": true,
"gpuType": "T4"
"gpuType": "T4",
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
@@ -440,4 +348,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}

README.md (225 lines changed)

@@ -1,140 +1,110 @@
## VC Client
[日本語](/README.md) /
[英語](/docs_i18n/README_en.md) /
[韓国語](/docs_i18n/README_ko.md)/
[中国語](/docs_i18n/README_zh.md)/
[ドイツ語](/docs_i18n/README_de.md)/
[アラビア語](/docs_i18n/README_ar.md)/
[ギリシャ語](/docs_i18n/README_el.md)/
[スペイン語](/docs_i18n/README_es.md)/
[フランス語](/docs_i18n/README_fr.md)/
[イタリア語](/docs_i18n/README_it.md)/
[ラテン語](/docs_i18n/README_la.md)/
[マレー語](/docs_i18n/README_ms.md)/
[ロシア語](/docs_i18n/README_ru.md)
*日本語以外は機械翻訳です。
[English](/README_en.md)
## VCClient
VCClientは、AIを用いてリアルタイム音声変換を行うソフトウェアです。
## What's New!
- v.1.5.3.16 (Only for Windows, CPU dependent)
- New Feature:
- Beatrice is supported (experimental)
* v.2.0.78-beta
* bugfix: RVCモデルのアップロードエラーを回避
* ver.1.x との同時起動ができるようになりました。
* 選択できるchunk sizeを増やしました。
- v.1.5.3.15
- Improve:
- new rmvpe checkpoint for rvc (torch, onnx)
- Mac: upgrade torch to version 2.1.0
* v.2.0.77-beta (only for RTX 5090, experimental)
* 関連モジュールを5090対応 (開発者がRTX5090未所持のため、動作未検証)
* v.2.0.76-beta
* new feature:
* Beatrice: 話者マージの実装
* Beatrice: オートピッチシフト
* bugfix:
* サーバモードのデバイス選択時の不具合対応
* v.2.0.73-beta
* new feature:
* 編集したbeatrice modelのダウンロード
* bugfix:
* beatrice v2 のpitch, formantが反映されないバグを修正
* Applio のembedderを使用しているモデルのONNXができないバグを修正
- v.1.5.3.14
- Improve:
- onnx performance (need to be converted)
- Some fixes:
- change default f0 det to onnx_rmvpe
- disable unrecommended f0 det on DirectML
- Experimental
- Add 16k RVC Sample (experimental)
## ダウンロードと関連リンク
Windows版、 M1 Mac版はhugging faceのリポジトリからダウンロードできます。
* [VCClient のリポジトリ](https://huggingface.co/wok000/vcclient000/tree/main)
* [Light VCClient for Beatrice v2 のリポジトリ](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
# VC Client とは
*1 Linuxはリポジトリをcloneしてお使いください。
1. 各種音声変換 AI(VC, Voice Conversion)を用いてリアルタイム音声変換を行うためのクライアントソフトウェアです。サポートしている音声変換 AI は次のものになります。
### 関連リンク
- サポートする音声変換 AI (サポート VC)
- [MMVC](https://github.com/isletennos/MMVC_Trainer)
- [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc)
- [RVC(Retrieval-based-Voice-Conversion)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
- [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC)
- [Beatrice JVS Corpus Edition](https://prj-beatrice.com/) * experimental, (***NOT MIT License*** see [readme](https://github.com/w-okada/voice-changer/blob/master/server/voice_changer/Beatrice/))
* [Beatrice V2 トレーニングコードのリポジトリ](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Beatrice V2 トレーニングコード Colab版](https://github.com/w-okada/beatrice-trainer-colab)
1. 本ソフトウェアは、ネットワークを介した利用も可能であり、ゲームなどの高負荷なアプリケーションと同時に使用する場合などに音声変換処理の負荷を外部にオフロードすることができます。
### 関連ソフトウェア
* [リアルタイムボイスチェンジャ VCClient](https://github.com/w-okada/voice-changer)
* [読み上げソフトウェア TTSClient](https://github.com/w-okada/ttsclient)
* [リアルタイム音声認識ソフトウェア ASRClient](https://github.com/w-okada/asrclient)
## VC Clientの特徴
## 多様なAIモデルをサポート
| AIモデル | v.2 | v.1 | ライセンス |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | supported | supported | リポジトリを参照してください。 |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | supported (only win) | [独自](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | supported | n/a | [独自](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | supported | リポジトリを参照してください。 |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | supported | リポジトリを参照してください。 |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | supported | リポジトリを参照してください。 |
## スタンドアロン、ネットワーク経由の両構成をサポート
ローカルPCで完結した音声変換も、ネットワークを介した音声変換もサポートしています。
ネットワークを介した利用を行うことで、ゲームなどの高負荷なアプリケーションと同時に使用する場合に音声変換の負荷を外部にオフロードすることができます。
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
3. 複数のプラットフォームに対応しています。
## 複数プラットフォームに対応
- Windows, Mac(M1), Linux, Google Colab (MMVC のみ)
Windows, Mac(M1), Linux, Google Colab
# 使用方法
*1 Linuxはリポジトリをcloneしてお使いください。
大きく 2 つの方法でご利用できます。難易度順に次の通りです。
## REST APIを提供
- 事前ビルド済みの Binary での利用
- Docker や Anaconda など環境構築を行った上での利用
各種プログラミング言語でクライアントを作成することができます。
本ソフトウェアや MMVC になじみの薄い方は上から徐々に慣れていくとよいと思います。
また、curlなどのOSに組み込まれているHTTPクライアントを使って操作ができます。
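As a minimal sketch of the claim above, the server can be driven from any HTTP client, curl included. The example assumes a server already running locally on port 18888 with HTTPS disabled (matching the `MMVCServerSIO.py -p 18888` commands shown elsewhere in this document); the `/info` route is a hypothetical placeholder, not a documented endpoint.

```
import requests

# Sketch: call the server over plain HTTP, exactly as curl or any other REST client could.
# Assumption: server started locally with `-p 18888 --https false`.
# The "/info" path is a hypothetical placeholder used only for illustration.
BASE_URL = "http://127.0.0.1:18888"

response = requests.get(f"{BASE_URL}/info", timeout=5)
response.raise_for_status()
print(response.json())
```

The same request with curl would be `curl http://127.0.0.1:18888/info`.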
## (1) 事前ビルド済みの Binary での利用
## トラブルシュート
- 実行形式のバイナリをダウンロードして実行することができます。
[通信編](tutorials/trouble_shoot_communication_ja.md)
- チュートリアルは[こちら](tutorials/tutorial_rvc_ja_latest.md)をご覧ください。([ネットワークのトラブルシュート](https://github.com/w-okada/voice-changer/blob/master/tutorials/trouble_shoot_communication_ja.md))
- [Google Colaboratory](https://github.com/w-okada/voice-changer/blob/master/Realtime_Voice_Changer_on_Colab.ipynb) で簡単にお試しいただけるようになりました。左上の Open in Colab のボタンから起動できます。
<img src="https://github.com/w-okada/voice-changer/assets/48346627/3f092e2d-6834-42f6-bbfd-7d389111604e" width="400" height="150">
- Windows 版と Mac 版を提供しています。
- Windows かつ Nvidia の GPU をご使用の方は、ONNX(cpu,cuda), PyTorch(cpu,cuda)をダウンロードしてください。
- Windows かつ AMD/Intel の GPU をご使用の方は、ONNX(cpu,DirectML), PyTorch(cpu,cuda)をダウンロードしてください。AMD/Intel の GPU は onnx のモデルを使用する場合のみ有効になります。
- いずれの GPU のサポート状況についても、PyTorch、Onnxruntime がサポートしている場合のみ有効になります。
- Windows で GPU をご使用にならない方は、ONNX(cpu,cuda), PyTorch(cpu,cuda)をダウンロードしてください。
- Windows 版は、ダウンロードした zip ファイルを解凍して、`start_http.bat`を実行してください。
- Mac 版はダウンロードファイルを解凍したのちに、`startHttp.command`を実行してください。開発元を検証できない旨が示される場合は、再度コントロールキーを押してクリックして実行してください(or 右クリックから実行してください)。
- 初回起動時は各種データをダウンロードします。ダウンロードに時間がかかる可能性があります。ダウンロードが完了すると、ブラウザが立ち上がります。
- リモートから接続する場合は、`.bat`ファイル(win)、`.command`ファイル(mac)の http が https に置き換わっているものを使用してください。
- DDPS-SVC の encoder は hubert-soft のみ対応です。
- ダウンロードはこちらから。
| Version | OS | フレームワーク | link | サポート VC | サイズ |
| ---------- | --- | ------------------------------------- | ------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | ------ |
| v.1.5.3.16 | mac | ONNX(cpu), PyTorch(cpu,mps) | N/A | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 797MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC, Beatrice | 3240MB |
| | win | ONNX(cpu,DirectML), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC, Beatrice | 3125MB |
| v.1.5.3.15 | mac | ONNX(cpu), PyTorch(cpu,mps) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 797MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3240MB |
| | win | ONNX(cpu,DirectML), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3125MB |
| v.1.5.3.14 | mac | ONNX(cpu), PyTorch(cpu,mps) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 797MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3240MB |
| | win | ONNX(cpu,DirectML), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3125MB |
(\*1) Google Drive からダウンロードできない方は[hugging_face](https://huggingface.co/wok000/vcclient000/tree/main)からダウンロードしてみてください
(\*2) 開発者が AMD のグラフィックボードを持っていないので動作確認していません。onnxruntime-directml を同梱しただけのものです。
(\*3) 解凍や起動が遅い場合、ウィルス対策ソフトのチェックが走っている可能性があります。ファイルやフォルダを対象外にして実行してみてください。(自己責任です)
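The GPU notes above depend on which onnxruntime build is bundled (CUDA vs. DirectML). The following minimal sketch, assuming only that onnxruntime is installed, shows how to check at runtime which execution providers are available; `model.onnx` is a placeholder path used only for illustration.

```
import onnxruntime as ort

# List the execution providers available in this onnxruntime build.
# A CUDA build exposes CUDAExecutionProvider, a DirectML build DmlExecutionProvider;
# CPUExecutionProvider is always present.
providers = ort.get_available_providers()
print(providers)

# Prefer a GPU provider when creating an inference session, falling back to CPU.
preferred = [p for p in ("CUDAExecutionProvider", "DmlExecutionProvider") if p in providers]
session = ort.InferenceSession("model.onnx", providers=preferred + ["CPUExecutionProvider"])
```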
## (2) Docker や Anaconda など環境構築を行った上での利用
本リポジトリをクローンして利用します。Windows では WSL2 の環境構築が必須になります。また、WSL2 上で Docker もしくは Anaconda などの仮想環境の構築が必要となります。Mac では Anaconda などの Python の仮想環境の構築が必要となります。事前準備が必要となりますが、多くの環境においてこの方法が一番高速で動きます。**<font color="red"> GPU が無くてもそこそこ新しい CPU であれば十分動く可能性があります </font>(下記のリアルタイム性の節を参照)**。
[WSL2 と Docker のインストールの解説動画](https://youtu.be/POo_Cg0eFMU)
[WSL2 と Anaconda のインストールの解説動画](https://youtu.be/fba9Zhsukqw)
Docker での実行は、[Docker を使用する](docker_vcclient/README.md)を参考にサーバを起動してください。
Anaconda の仮想環境上での実行は、[サーバ開発者向けのページ](README_dev_ja.md)を参考にサーバを起動してください。
# トラブルシュート
- [通信編](tutorials/trouble_shoot_communication_ja.md)
# リアルタイム性MMVC
GPU を使用するとほとんどタイムラグなく変換可能です。
https://twitter.com/DannadoriYellow/status/1613483372579545088?s=20&t=7CLD79h1F3dfKiTb7M8RUQ
CPU でも最近のであればそれなりの速度で変換可能。
https://twitter.com/DannadoriYellow/status/1613553862773997569?s=20&t=7CLD79h1F3dfKiTb7M8RUQ
古い CPU( i7-4770)だと、1000msec くらいかかってしまう。
# 開発者の署名について
## 開発者の署名について
本ソフトウェアは開発元の署名しておりません。下記のように警告が出ますが、コントロールキーを押しながらアイコンをクリックすると実行できるようになります。これは Apple のセキュリティポリシーによるものです。実行は自己責任となります。
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
# Acknowledgments
## Acknowledgments
- [立ちずんだもん素材](https://seiga.nicovideo.jp/seiga/im10792934)
- [いらすとや](https://www.irasutoya.com/)
- [つくよみちゃん](https://tyc.rei-yumesaki.net/)
* [立ちずんだもん素材](https://seiga.nicovideo.jp/seiga/im10792934)
* [いらすとや](https://www.irasutoya.com/)
* [つくよみちゃん](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
@@ -143,12 +113,12 @@ https://twitter.com/DannadoriYellow/status/1613553862773997569?s=20&t=7CLD79h1F3
© Rei Yumesaki
```
- [あみたろの声素材工房](https://amitaro.net/)
- [れぷりかどーる](https://kikyohiroto1227.wixsite.com/kikoto-utau)
* [あみたろの声素材工房](https://amitaro.net/)
* [れぷりかどーる](https://kikyohiroto1227.wixsite.com/kikoto-utau)
# 利用規約
## 利用規約
- リアルタイムボイスチェンジャーつくよみちゃんについては、つくよみちゃんコーパスの利用規約に準じ、次の目的で変換後の音声を使用することを禁止します。
* リアルタイムボイスチェンジャーつくよみちゃんについては、つくよみちゃんコーパスの利用規約に準じ、次の目的で変換後の音声を使用することを禁止します。
```
@@ -162,7 +132,7 @@ https://twitter.com/DannadoriYellow/status/1613553862773997569?s=20&t=7CLD79h1F3
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
- リアルタイムボイスチェンジャーあみたろについては、あみたろの声素材工房様の次の利用規約に準じます。詳細は[こちら](https://amitaro.net/voice/faq/#index_id6)です。
* リアルタイムボイスチェンジャーあみたろについては、あみたろの声素材工房様の次の利用規約に準じます。詳細は[こちら](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
@@ -171,31 +141,8 @@ https://twitter.com/DannadoriYellow/status/1613553862773997569?s=20&t=7CLD79h1F3
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
- リアルタイムボイスチェンジャー黄琴まひろについては、れぷりかどーるの利用規約に準じます。詳細は[こちら](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)です。
* リアルタイムボイスチェンジャー黄琴まひろについては、れぷりかどーるの利用規約に準じます。詳細は[こちら](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
# 免責事項
## 免責事項
本ソフトウェアの使用または使用不能により生じたいかなる直接損害・間接損害・波及的損害・結果的損害 または特別損害についても、一切責任を負いません。
# (1) レコーダー(トレーニング用音声録音アプリ)
MMVC トレーニング用の音声を簡単に録音できるアプリです。
Github Pages 上で実行できるため、ブラウザのみあれば様々なプラットフォームからご利用可能です。
録音したデータは、ブラウザ上に保存されます。外部に漏れることはありません。
[録音アプリ on Github Pages](https://w-okada.github.io/voice-changer/)
[解説動画](https://youtu.be/s_GirFEGvaA)
# 過去バージョン
| Version | OS | フレームワーク | link | サポート VC | サイズ |
| ---------- | --- | --------------------------------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ------ |
| v.1.5.2.9e | mac | ONNX(cpu), PyTorch(cpu,mps) | [normal](https://drive.google.com/uc?id=1W0d7I7619PcO7kjb1SPXp6MmH5Unvd78&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 796MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [normal](https://drive.google.com/uc?id=1tmTMJRRggS2Sb4goU-eHlRvUBR88RZDl&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, so-vits-svc 4.0v2, RVC, DDSP-SVC | 2872MB |
| v.1.5.3.1 | mac | ONNX(cpu), PyTorch(cpu,mps) | [normal](https://drive.google.com/uc?id=1oswF72q_cQQeXhIn6W275qLnoBAmcrR_&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 796MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [normal](https://drive.google.com/uc?id=1AWjDhW4w2Uljp1-9P8YUJBZsIlnhkJX2&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, so-vits-svc 4.0v2, RVC, DDSP-SVC | 2872MB |
# For Contributor
このリポジトリは[CLA](https://raw.githubusercontent.com/w-okada/voice-changer/master/LICENSE-CLA)を設定しています。


@@ -1,6 +1,6 @@
## For Developer
[Japanese](/README_dev_ja.md)
[Japanese](/README_dev_ja.md) [Russian](/README_dev_ru.md)
## Prerequisite

README_dev_ko.md (new file, 122 lines)

@@ -0,0 +1,122 @@
## 개발자용
[English](/README_dev_en.md) [Korean](/README_dev_ko.md)
## 전제
- Linux(ubuntu, debian) or WSL2, (다른 리눅스 배포판과 Mac에서는 테스트하지 않았습니다)
- Anaconda
## 준비
1. Anaconda 가상 환경을 작성한다
```
$ conda create -n vcclient-dev python=3.10
$ conda activate vcclient-dev
```
2. 리포지토리를 클론한다
```
$ git clone https://github.com/w-okada/voice-changer.git
```
## 서버 개발자용
1. 모듈을 설치한다
```
$ cd voice-changer/server
$ pip install -r requirements.txt
```
2. 서버를 구동한다
다음 명령어로 구동합니다. 여러 가중치에 대한 경로는 환경에 맞게 변경하세요.
```
$ python3 MMVCServerSIO.py -p 18888 --https true \
--content_vec_500 pretrain/checkpoint_best_legacy_500.pt \
--content_vec_500_onnx pretrain/content_vec_500.onnx \
--content_vec_500_onnx_on true \
--hubert_base pretrain/hubert_base.pt \
--hubert_base_jp pretrain/rinna_hubert_base_jp.pt \
--hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \
--nsf_hifigan pretrain/nsf_hifigan/model \
--crepe_onnx_full pretrain/crepe_onnx_full.onnx \
--crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \
--rmvpe pretrain/rmvpe.pt \
--model_dir model_dir \
--samples samples.json
```
브라우저(Chrome에서만 지원)에서 접속하면 화면이 나옵니다.
2-1. 문제 해결법
(1) OSError: PortAudio library not found
다음과 같은 메시지가 나올 경우에는 추가 라이브러리를 설치해야 합니다.
```
OSError: PortAudio library not found
```
ubuntu(wsl2)인 경우에는 아래 명령어로 설치할 수 있습니다.
```
$ sudo apt-get install libportaudio2
$ sudo apt-get install libasound-dev
```
(2) 서버 구동이 안 되는데요?!
클라이언트는 자동으로 구동되지 않습니다. 브라우저를 실행하고 콘솔에 표시된 URL로 접속하세요.
(3) Could not load library libcudnn_cnn_infer.so.8
WSL를 사용 중이라면 `Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory`라는 메시지가 나오는 경우가 있습니다.
잘못된 경로가 원인인 경우가 많습니다. 아래와 같이 경로를 바꾸고 실행해 보세요.
.bashrc 등 구동 스크립트에 추가해 두면 편리합니다.
```
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
```
- 참고
- https://qiita.com/cacaoMath/items/811146342946cdde5b83
- https://github.com/microsoft/WSL/issues/8587
3. 개발하세요
### Appendix
1. Win + Anaconda일 때 (not supported)
pytorch를 conda가 없으면 gpu를 인식하지 않을 수 있습니다.
```
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
```
또한 추가로 아래 내용도 필요합니다.
```
pip install chardet
pip install numpy==1.24.0
```
## 클라이언트 개발자용
1. 모듈을 설치하고 한번 빌드합니다
```
cd client
cd lib
npm install
npm run build:dev
cd ../demo
npm install
npm run build:dev
```
2. 개발하세요

README_dev_ru.md (new file, 124 lines)

@@ -0,0 +1,124 @@
Here is a translation of `README_dev_en.md` into Russian:
## For developers
[Japanese](/README_dev_ja.md) [English](/README_dev_en.md)
## Prerequisites
- Linux (Ubuntu, Debian) or WSL2 (other Linux distributions and Mac have not been tested)
- Anaconda
## Preparation
1. Create an Anaconda virtual environment:
```
$ conda create -n vcclient-dev python=3.10
$ conda activate vcclient-dev
```
2. Clone the repository:
```
$ git clone https://github.com/w-okada/voice-changer.git
```
## For server developers
1. Install the required dependencies:
```
$ cd voice-changer/server
$ pip install -r requirements.txt
```
2. Start the server
Start the server with the following command. You can point the paths at your own model weights.
```
$ python3 MMVCServerSIO.py -p 18888 --https true \
--content_vec_500 pretrain/checkpoint_best_legacy_500.pt \
--content_vec_500_onnx pretrain/content_vec_500.onnx \
--content_vec_500_onnx_on true \
--hubert_base pretrain/hubert_base.pt \
--hubert_base_jp pretrain/rinna_hubert_base_jp.pt \
--hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \
--nsf_hifigan pretrain/nsf_hifigan/model \
--crepe_onnx_full pretrain/crepe_onnx_full.onnx \
--crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \
--rmvpe pretrain/rmvpe.pt \
--model_dir model_dir \
--samples samples.json
```
Open a browser (currently only Chrome is supported) and you will see the GUI.
2-1. Troubleshooting
(1) OSError: PortAudio library not found
If you get the message below, you need to install an additional library:
```
OSError: PortAudio library not found
```
You can install the library with:
```
$ sudo apt-get install libportaudio2
$ sudo apt-get install libasound-dev
```
(2) It won't start! Damn program!
The client does not start automatically. Please open a browser and go to the URL shown in the console. And mind your language.
(3) Could not load library libcudnn_cnn_infer.so.8
When using WSL, the error `Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory` may appear. This often happens because the library path is not set. Set the path with the command below. You can add it to a startup script such as .bashrc.
```
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
```
- References:
- https://qiita.com/cacaoMath/items/811146342946cdde5b83
- https://github.com/microsoft/WSL/issues/8587
3. Enjoy developing.
### Appendix
1. Windows + Anaconda (not supported)
Use conda to install PyTorch:
```
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
```
Also run these commands:
```
pip install chardet
pip install numpy==1.24.0
```
## For client developers
1. Install the modules and run an initial build:
```
cd client
cd lib
npm install
npm run build:dev
cd ../demo
npm install
npm run build:dev
```
2. Enjoy.

View File

@ -1,37 +1,41 @@
## VC Client
[Japanese](/README_ja.md)
[Japanese](/README_ja.md) [Korean](/README_ko.md) [Russian](/README_ru.md)
## What's New!
- v.1.5.3.16 (Only for Windows, CPU dependent)
- New Feature:
- Beatrice is supported(experimental)
- v.1.5.3.15
- Improve:
- new rmvpe checkpoint for rvc (torch, onnx)
- Mac: upgrade torch version to 2.1.0
- v.1.5.3.14
- Improve:
- onnx performance (models need to be converted)
- Some fixes:
- change default f0 det to onnx_rmvpe
- disable unrecommended f0 det on DirectML
- Experimental
- Add 16k RVC Sample (experimental)
- We have released a sister product, the Text To Speech client.
- You can enjoy voice generation with a simple interface.
- For more details, click [here](https://github.com/w-okada/ttsclient).
- Beatrice V2 Training Code Released!!!
- [Training Code Repository](https://huggingface.co/fierce-cats/beatrice-trainer)
- [Colab Version](https://github.com/w-okada/beatrice-trainer-colab)
- v.2.0.70-beta (only for m1 mac)
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- new feature:
- The M1 Mac version of VCClient now supports Beatrice v2 beta.1.
- v.2.0.69-beta (only for win)
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- bugfix:
- Fixed a bug where the start button would not be displayed in case of some exceptions
- Adjusted the output buffer for server device mode
- Fixed a bug where the sampling rate would change when settings were modified while using server device mode
- Fixed a bug when using Japanese hubert
- misc:
- Added host API filter (highlighted) for server device mode
- v.2.0.65-beta
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- new feature: We have supported Beatrice v2 beta.1, enabling even higher quality voice conversion.
# What is VC Client
1. This is client software for performing real-time voice conversion using various Voice Conversion (VC) AI models. The supported models are as follows.
- [MMVC](https://github.com/isletennos/MMVC_Trainer)
- [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc)
- [MMVC](https://github.com/isletennos/MMVC_Trainer) (only v1)
- [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) (only v1)
- [RVC(Retrieval-based-Voice-Conversion)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
- [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC)
- [Beatrice JVS Corpus Edition](https://prj-beatrice.com/) * experimental, (***NOT MIT Licnsence*** see [readme](https://github.com/w-okada/voice-changer/blob/master/server/voice_changer/Beatrice/))
- [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) (only v1)
- [Beatrice JVS Corpus Edition](https://prj-beatrice.com/) * experimental, (***NOT MIT License*** see [readme](https://github.com/w-okada/voice-changer/blob/master/server/voice_changer/Beatrice/)) * Only for Windows, CPU dependent (only v1)
- [Beatrice v2](https://prj-beatrice.com/) (only for v2)
1. Distribute the load by running Voice Changer on a different PC
The real-time voice changer of this application works on a server-client configuration. By running the MMVC server on a separate PC, you can run it while minimizing the impact on other resource-intensive processes such as gaming commentary.
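As a rough sketch of that configuration (the port and https option are taken from the server command in [README_dev_en.md](README_dev_en.md); the address is a placeholder for your server PC):
```
# On the server PC (the machine that absorbs the conversion load), start the server as in README_dev_en.md:
#   python3 MMVCServerSIO.py -p 18888 --https true ...   (full weight paths as shown in that guide)
# On the client PC, open the server's address in Chrome:
#   https://<server-pc-ip>:18888/
```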
@ -40,7 +44,10 @@
3. Cross-platform compatibility
Supports Windows, Mac (including Apple Silicon M1), Linux, and Google Colaboratory.
## Related Software
- [Real-time Voice Changer VCClient](https://github.com/w-okada/voice-changer)
- [Text-to-Speech Software TTSClient](https://github.com/w-okada/ttsclient)
- [Real-Time Speech Recognition Software ASRClient](https://github.com/w-okada/asrclient)
# usage
This is an app for performing voice conversion with MMVC and so-vits-svc.
@ -54,14 +61,19 @@ It can be used in two main ways, in order of difficulty:
- You can download and run executable binaries.
- Please see [here](tutorials/tutorial_rvc_en_latest.md) for the tutorial. ([troubule shoot](https://github.com/w-okada/voice-changer/blob/master/tutorials/trouble_shoot_communication_ja.md))
- Please see [here](tutorials/tutorial_rvc_en_latest.md) for the tutorial. ([trouble shoot](https://github.com/w-okada/voice-changer/blob/master/tutorials/trouble_shoot_communication_ja.md))
- It's now easy to try it out on [Google Colaboratory](https://github.com/w-okada/voice-changer/blob/master/Realtime_Voice_Changer_on_Colab.ipynb) (requires a ngrok account). You can launch it from the 'Open in Colab' button in the top left corner.
- It's now easy to try it out on [Google Colaboratory](https://github.com/w-okada/voice-changer/tree/v.2/w_okada's_Voice_Changer_version_2_x.ipynb) (requires a ngrok account). You can launch it from the 'Open in Colab' button in the top left corner.
<img src="https://github.com/w-okada/voice-changer/assets/48346627/3f092e2d-6834-42f6-bbfd-7d389111604e" width="400" height="150">
- We offer Windows and Mac versions.
- We offer Windows and Mac versions on [hugging face](https://huggingface.co/wok000/vcclient000/tree/main)
- v2 for Windows
- Please download and use `vcclient_win_std_xxx.zip`. You can perform voice conversion using a reasonably high-performance CPU without a GPU, or by utilizing DirectML to leverage GPUs (AMD, Nvidia). v2 supports both torch and onnx.
- If you have an Nvidia GPU, you can achieve faster voice conversion by using `vcclient_win_cuda_xxx.zip`.
- v2 for Mac (Apple Silicon)
- Please download and use `vcclient_mac_xxx.zip`.
- v1
- If you are using Windows and an Nvidia GPU, please download ONNX (cpu, cuda), PyTorch (cpu, cuda).
- If you are using Windows and an AMD/Intel GPU, please download ONNX (cpu, DirectML) and PyTorch (cpu, cuda). AMD/Intel GPUs are only enabled for ONNX models.
- In either case, the GPU is used only if PyTorch or ONNX Runtime supports it.
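One way to check which of these backends your environment can actually use is to ask ONNX Runtime for its execution providers; this is a generic onnxruntime check, not a VC Client command:
```
# CUDAExecutionProvider, DmlExecutionProvider, or CPUExecutionProvider indicate which devices onnxruntime can use.
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```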
@ -75,23 +87,7 @@ It can be used in two main ways, in order of difficulty:
- The encoder of DDSP-SVC only supports hubert-soft.
- Download (When you cannot download from google drive, try [hugging_face](https://huggingface.co/wok000/vcclient000/tree/main))
| Version | OS | Framework | Link | Supported VC | Size |
| ---------- | --- | ------------------------------------- | ------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | ------ |
| v.1.5.3.16 | mac | ONNX(cpu), PyTorch(cpu,mps) | N/A | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 797MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC, Beatrice | 3240MB |
| | win | ONNX(cpu,DirectML), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC, Beatrice | 3125MB |
| v.1.5.3.15 | mac | ONNX(cpu), PyTorch(cpu,mps) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 797MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3240MB |
| | win | ONNX(cpu,DirectML), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3125MB |
| v.1.5.3.14 | mac | ONNX(cpu), PyTorch(cpu,mps) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 797MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3240MB |
| | win | ONNX(cpu,DirectML), PyTorch(cpu,cuda) | [hugging face](https://huggingface.co/wok000/vcclient000/tree/main) | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC, DDSP-SVC, Diffusion-SVC | 3125MB |
(\*1) You can also download from [hugging_face](https://huggingface.co/wok000/vcclient000/tree/main)
(\*2) The developer does not have an AMD graphics card, so it has not been tested. This package only includes onnxruntime-directml.
(\*3) If unpacking or startup is slow, your antivirus software may be scanning the files. Try excluding the file or folder from scanning. (At your own risk)
- [Download from hugging face](https://huggingface.co/wok000/vcclient000/tree/main)
## (2) Usage after setting up the environment such as Docker or Anaconda
@ -105,17 +101,8 @@ To run docker, see [start docker](docker_vcclient/README_en.md).
To run on Anaconda venv, see [server developer's guide](README_dev_en.md)
# Real-time performance
To run on Linux using an AMD GPU, see [setup guide linux](tutorials/tutorial_anaconda_amd_rocm.md)
Conversion is almost instantaneous when using GPU.
https://twitter.com/DannadoriYellow/status/1613483372579545088?s=20&t=7CLD79h1F3dfKiTb7M8RUQ
Even on a CPU, recent processors can perform conversion at a reasonable speed.
https://twitter.com/DannadoriYellow/status/1613553862773997569?s=20&t=7CLD79h1F3dfKiTb7M8RUQ
With an old CPU (i7-4770), it takes about 1000 msec for conversion.
# Software Signing

185
README_ko.md Normal file
View File

@ -0,0 +1,185 @@
## VC Client
[English](/README_en.md) [Korean](/README_ko.md)
## What's New!
- We have released a sister product, the Text To Speech client.
- You can enjoy voice generation with a simple interface.
- For more details, see [here](https://github.com/w-okada/ttsclient).
- Beatrice V2 Training Code Released!!!
- [Training Code Repository](https://huggingface.co/fierce-cats/beatrice-trainer)
- [Colab Version](https://github.com/w-okada/beatrice-trainer-colab)
- v.2.0.70-beta (only for m1 mac)
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- new feature:
- The M1 Mac version of VCClient now also supports Beatrice v2 beta.1.
- v.2.0.69-beta (only for win)
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- bugfix:
- Fixed a bug where the start button would not be displayed in case of some exceptions
- Adjusted the output buffer for server device mode
- Fixed a bug where the sampling rate would change when settings were modified while using server device mode
- Fixed a bug when using Japanese hubert
- misc:
- Added a host API filter (highlighted) for server device mode
- v.2.0.65-beta
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- new feature: Beatrice v2 beta.1 is now supported, enabling even higher quality voice conversion.
# What is VC Client
1. This is client software for performing real-time voice conversion using various voice conversion (VC) AI models. The supported models are as follows.
- Supported voice conversion AI (supported VC)
- [MMVC](https://github.com/isletennos/MMVC_Trainer) (only v1)
- [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) (only v1)
- [RVC(Retrieval-based-Voice-Conversion)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
- [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) (only v1)
- [Beatrice JVS Corpus Edition](https://prj-beatrice.com/) * experimental, (***NOT MIT License*** see [readme](https://github.com/w-okada/voice-changer/blob/master/server/voice_changer/Beatrice/)) * Only for Windows, CPU dependent (only v1)
- [Beatrice v2](https://prj-beatrice.com/) (only for v2)
1. This software can also be used over a network; when used alongside resource-intensive applications such as games, the voice conversion load can be offloaded to another machine.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
3. Multiple platforms are supported.
- Windows, Mac (M1), Linux, Google Colab (MMVC only)
## Related Software
- [Real-time Voice Changer VCClient](https://github.com/w-okada/voice-changer)
- [Text-to-Speech Software TTSClient](https://github.com/w-okada/ttsclient)
- [Real-Time Speech Recognition Software ASRClient](https://github.com/w-okada/asrclient)
# Usage
It can be used in two main ways, in order of difficulty:
- Using a pre-built binary
- Using a development environment set up with Docker, Anaconda, etc.
If you are not familiar with this software or MMVC, we recommend starting from the top and getting used to it step by step.
## (1) Using a pre-built binary (file)
- You can download and run executable binaries.
- See [here](tutorials/tutorial_rvc_ko_latest.md) for the tutorial. ([network troubleshooting](https://github.com/w-okada/voice-changer/blob/master/tutorials/trouble_shoot_communication_ko.md))
- It's now easy to try it out on [Google Colaboratory](https://github.com/w-okada/voice-changer/tree/v.2/w_okada's_Voice_Changer_version_2_x.ipynb). You can launch it from the Open in Colab button in the top left corner.
<img src="https://github.com/w-okada/voice-changer/assets/48346627/3f092e2d-6834-42f6-bbfd-7d389111604e" width="400" height="150">
- Windows and Mac versions are available. They can be downloaded from [Hugging Face](https://huggingface.co/wok000/vcclient000/tree/main).
- v2 for Windows
- Download and use `vcclient_win_std_xxx.zip`. You can perform voice conversion using a (reasonably high-performance) CPU without a GPU, or use DirectML to leverage a GPU (AMD, Nvidia). v2 supports both torch and onnx.
- If you have an Nvidia GPU, you can achieve faster voice conversion with `vcclient_win_cuda_xxx.zip`.
- v2 for Mac (Apple Silicon)
- Download and use `vcclient_mac_xxx.zip`.
- v1
- If you are using Windows and an NVIDIA GPU, download ONNX (cpu, cuda), PyTorch (cpu, cuda).
- If you are using Windows and an AMD/Intel GPU, download ONNX (cpu, DirectML) and PyTorch (cpu, cuda). AMD/Intel GPUs are only used with ONNX models.
- Other GPUs are also used only if PyTorch or ONNX Runtime supports them.
- If you are not using a GPU on Windows, download ONNX (cpu, cuda), PyTorch (cpu, cuda).
- For the Windows version, unzip the downloaded zip file and run `start_http.bat`.
- For the Mac version, unzip the downloaded file and run `startHttp.command`. If a message about an unidentified developer appears, hold the control key and click it again to run (or right-click to run).
- On first launch, various data are downloaded from the internet. The download may take some time; once it finishes, the browser will start.
- When connecting remotely, run the https `.bat` file (win) / `.command` file (mac) instead of the http one.
- The encoder of DDSP-SVC only supports hubert-soft.
## (2) Using a development environment set up with Docker or Anaconda
Clone this repository to use it. On Windows, setting up WSL2 is required, and a virtual environment such as Docker or Anaconda must be set up on top of WSL2. On Mac, a Python virtual environment such as Anaconda is required. Although some preparation is needed, this is the fastest way to get things running in many environments. **<font color="red"> Even without a GPU, a reasonably recent CPU may well be enough </font>(see the real-time performance section below)**.
[Explanation video for installing WSL2 and Docker](https://youtu.be/POo_Cg0eFMU)
[Explanation video for installing WSL2 and Anaconda](https://youtu.be/fba9Zhsukqw)
To run with Docker, see [Using Docker](docker_vcclient/README_ko.md) to start the server.
To run in an Anaconda virtual environment, see the [server developer's guide](README_dev_ko.md) to start the server.
# Troubleshooting
- [Communication](tutorials/trouble_shoot_communication_ko.md)
# About developer signing
This software is not signed by the developer. A warning like the one below will appear, but you can run it by clicking the icon while holding the control key. This is due to Apple's security policy. Running it is at your own risk.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
(Translation of the image: click while holding ctrl)
# Acknowledgments
- [Tachi Zundamon materials](https://seiga.nicovideo.jp/seiga/im10792934)
- [Irasutoya](https://www.irasutoya.com/)
- [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
For its voice synthesis, this software uses voice data made freely available by the free-to-use character "Tsukuyomi-chan". ■ Tsukuyomi-chan Corpus (CV. Rei Yumesaki)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
- [Amitaro's voice material workshop](https://amitaro.net/)
- [Replica Doll](https://kikyohiroto1227.wixsite.com/kikoto-utau)
# Terms of Use
- For the real-time voice changer Tsukuyomi-chan, in accordance with the terms of use of the Tsukuyomi-chan Corpus, using the converted voice for the following purposes is prohibited.
```
■ Criticizing or attacking individuals. (The definition of "criticizing or attacking" follows the Tsukuyomi-chan character license.)
■ Advocating for or against specific political positions, religions, or ideologies.
■ Publishing strongly stimulating content indiscriminately.
■ Releasing it in a form that allows third parties to use it for secondary creations (as material).
* Distributing or selling it as a finished work for appreciation is not a problem.
```
- The real-time voice changer Amitaro follows the terms of use of Amitaro's voice material workshop below. Details are available [here](https://amitaro.net/voice/faq/#index_id6).
```
It is fine to make voice models from Amitaro's voice materials or corpus audio, or to convert your own voice into Amitaro's voice with a voice changer or speech-style converter.
However, in that case, always state clearly that the voice has been converted into Amitaro's (or Koharune Ami's) voice, so that no one mistakes it for Amitaro (or Koharune Ami) actually speaking.
Anything said in Amitaro's voice must also stay within the scope of the voice material terms of use, and sensitive statements must be avoided.
```
- The real-time voice changer Kikoto Mahiro follows the terms of use of Replica Doll. Details are available [here](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service).
# Disclaimer
We accept no responsibility for any direct, indirect, consequential, incidental, or special damages arising from the use or inability to use this software.
# (1) Recorder (voice recording app for training)
An app for easily recording audio for MMVC training.
It runs on Github Pages, so it can be used on many platforms with just a browser.
Recorded data is stored in the browser and never sent anywhere external.
[Recording app on Github Pages](https://w-okada.github.io/voice-changer/)
[Explanation video](https://youtu.be/s_GirFEGvaA)
# Older versions
| Version | OS | Framework | Link | Supported VC | Size |
| ---------- | --- | --------------------------------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | --------- |
| v.1.5.2.9e | mac | ONNX(cpu), PyTorch(cpu,mps) | [normal](https://drive.google.com/uc?id=1W0d7I7619PcO7kjb1SPXp6MmH5Unvd78&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 796MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [normal](https://drive.google.com/uc?id=1tmTMJRRggS2Sb4goU-eHlRvUBR88RZDl&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, so-vits-svc 4.0v2, RVC, DDSP-SVC | 2872MB |
| v.1.5.3.1 | mac | ONNX(cpu), PyTorch(cpu,mps) | [normal](https://drive.google.com/uc?id=1oswF72q_cQQeXhIn6W275qLnoBAmcrR_&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, RVC | 796MB |
| | win | ONNX(cpu,cuda), PyTorch(cpu,cuda) | [normal](https://drive.google.com/uc?id=1AWjDhW4w2Uljp1-9P8YUJBZsIlnhkJX2&export=download) \*1 | MMVC v.1.5.x, MMVC v.1.3.x, so-vits-svc 4.0, so-vits-svc 4.0v2, RVC, DDSP-SVC | 2872MB |
# For Contributor
This repository has a [CLA](https://raw.githubusercontent.com/w-okada/voice-changer/master/LICENSE-CLA) in place.

119
README_ru.md Normal file
View File

@ -0,0 +1,119 @@
[Japanese](/README_ja.md) [Korean](/README_ko.md) [English](/README_en.md)
## What's New!
- We have released a sister product, the Text To Speech client.
- You can enjoy voice generation with a simple interface.
- For more details, see [here](https://github.com/w-okada/ttsclient).
- Beatrice V2 Training Code is now available!
- [Training Code Repository](https://huggingface.co/fierce-cats/beatrice-trainer)
- [Colab Version](https://github.com/w-okada/beatrice-trainer-colab)
- v.2.0.70-beta (only for m1 mac)
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- new feature:
- The M1 Mac version of VCClient now supports Beatrice v2 beta.1.
- v.2.0.69-beta (only for win)
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- bugfix:
- Fixed a bug where the start button would not be displayed in case of some exceptions
- Adjusted the output buffer for server device mode
- Fixed a bug where the sampling rate would change when settings were modified in server device mode
- Fixed a bug when using Japanese hubert
- misc:
- Added a host API filter (highlighted) for server device mode
- v.2.0.65-beta
- [HERE](https://github.com/w-okada/voice-changer/tree/v.2)
- new feature: We have supported Beatrice v2 beta.1, enabling even higher quality voice conversion.
# What is VC Client
1. This is client software for performing real-time voice conversion using various voice conversion AIs. The supported AIs are:
- [MMVC](https://github.com/isletennos/MMVC_Trainer) (only v1)
- [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) (only v1)
- [RVC (Retrieval-based Voice Conversion)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
- [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) (only v1)
- [Beatrice JVS Corpus Edition](https://prj-beatrice.com/) * experimental * (not under the MIT license, see [readme](https://github.com/w-okada/voice-changer/blob/master/server/voice_changer/Beatrice/)), Windows only, CPU dependent (only v1)
- [Beatrice v2](https://prj-beatrice.com/) (only v2)
2. Distribute the load across different PCs
The real-time voice conversion works on a server-client configuration. You can run the MMVC server on a separate PC to minimize the impact on other resource-intensive processes such as streaming.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
3. Cross-platform compatibility
Supports Windows, Mac (including Apple Silicon M1), Linux, and Google Colaboratory.
# Usage
This is an app for performing voice conversion with MMVC and so-vits-svc.
It can be used in two main ways, in order of difficulty:
- Using a pre-built executable
- Setting up an environment with Docker or Anaconda
## (1) Using pre-built executables
- You can download and run the executable files.
- See [here](tutorials/tutorial_rvc_en_latest.md) for the tutorial. ([troubleshooting](https://github.com/w-okada/voice-changer/blob/master/tutorials/trouble_shoot_communication_ja.md))
- It's now easy to try it out on [Google Colaboratory](https://github.com/w-okada/voice-changer/tree/v.2/w_okada's_Voice_Changer_version_2_x.ipynb) (requires an ngrok account). You can launch it from the "Open in Colab" button in the top left corner.
<img src="https://github.com/w-okada/voice-changer/assets/48346627/3f092e2d-6834-42f6-bbfd-7d389111604e" width="400" height="150">
- We offer Windows and Mac versions on [hugging face](https://huggingface.co/wok000/vcclient000/tree/main)
- v2 for Windows
- Please download and use `vcclient_win_std_xxx.zip`. Voice conversion can be performed using a reasonably powerful CPU without a GPU, or using DirectML with a GPU (AMD, Nvidia). v2 supports both torch and onnx.
- If you have an Nvidia GPU, download `vcclient_win_cuda_xxx.zip` for faster conversion.
- v2 for Mac (Apple Silicon)
- Please download and use `vcclient_mac_xxx.zip`.
- v1
- For Windows with an Nvidia GPU, download ONNX (cpu, cuda), PyTorch (cpu, cuda).
- For Windows with an AMD/Intel GPU, download ONNX (cpu, DirectML) and PyTorch (cpu, cuda). AMD/Intel GPUs are supported only for ONNX models.
- For Windows users: after unpacking the zip file, run the corresponding `start_http.bat` file.
- For Mac: after unpacking the zip file, double-click `startHttp.command`. If a message about an unverified developer appears, hold Ctrl and run it again.
- If you connect remotely, use the `.command` (Mac) or `.bat` (Windows) file with https instead of http.
- The encoder of DDSP-SVC supports only hubert-soft.
- [Download from hugging face](https://huggingface.co/wok000/vcclient000/tree/main)
## (2) Usage after setting up an environment with Docker or Anaconda
Clone this repository and use it. On Windows, WSL2 setup is required. On Mac, you need to set up a Python virtual environment such as Anaconda. This method gives the best speed in most cases. **<font color="red"> Even without a GPU, sufficient performance can be achieved on a modern CPU </font>(see the real-time performance section below)**.
[Video guide for installing WSL2 and Docker](https://youtu.be/POo_Cg0eFMU)
[Video guide for installing WSL2 and Anaconda](https://youtu.be/fba9Zhsukqw)
To run with Docker, see [start docker](docker_vcclient/README_en.md).
To run in an Anaconda venv, see the [developer's guide](README_dev_ru.md).
To run on Linux with an AMD GPU, see the [setup guide](tutorials/tutorial_anaconda_amd_rocm.md).
# Software Signing
This software is not signed by the developer. A warning will appear, but you can run it by clicking the icon while holding the Ctrl key. This is due to Apple's security policy. Use at your own risk.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
https://user-images.githubusercontent.com/48346627/212569645-e30b7f4e-079d-4504-8cf8-7816c5f40b00.mp4
# Acknowledgments
- [Tachizunda-mon materials](https://seiga.nicovideo.jp/seiga/im10792934)
- [Irasutoya](https://www.irasutoya.com/)
- [Tsukuyomi-chan](https://tyc.rei-yumesaki.net)
> This software uses voice data of the free-to-use character "Tsukuyomi-chan" provided by CV. Rei Yumesaki.
>
> - Tsukuyomi-chan Corpus (CV. Rei Yumesaki)
>
> https://tyc.rei-yumesaki.net/material/corpus/
>
> Copyright Rei Yumesaki. All rights reserved.

11
client/.vscode/settings.json vendored Normal file
View File

@ -0,0 +1,11 @@
{
"workbench.colorCustomizations": {
"tab.activeBackground": "#65952acc"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"prettier.printWidth": 1024,
"prettier.tabWidth": 4,
"files.associations": {
"*.css": "postcss"
}
}

View File

@ -1,8 +1,11 @@
{
"files.associations": {
"*.css": "postcss"
},
"workbench.colorCustomizations": {
"tab.activeBackground": "#65952acc"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"prettier.printWidth": 1024,
"prettier.tabWidth": 4,
"files.associations": {
"*.css": "postcss"
}
}

View File

@ -4,7 +4,6 @@
# cp -r ~/git-work/voice-changer-js/lib/package.json node_modules/@dannadori/voice-changer-js/
# cp -r ~/git-work/voice-changer-js/lib/dist node_modules/@dannadori/voice-changer-js/
cd ~/git-work/voice-changer-js/lib/ ; npm run build:prod; cd -
rm -rf node_modules/@dannadori/voice-changer-js
mkdir -p node_modules/@dannadori/voice-changer-js/dist

View File

@ -0,0 +1,928 @@
<?xml version='1.0' encoding='utf-8'?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ns2="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xlink="http://www.w3.org/1999/xlink" width="100%" height="100%" viewBox="100 60 420 450" version="1.1">
<metadata>
<rdf:RDF>
<ns2:Work>
<dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:date>2023-11-19T11:21:56.358384</dc:date>
<dc:format>image/svg+xml</dc:format>
<dc:creator>
<ns2:Agent>
<dc:title>Matplotlib v3.7.1, https://matplotlib.org/</dc:title>
</ns2:Agent>
</dc:creator>
</ns2:Work>
</rdf:RDF>
</metadata>
<defs>
<style type="text/css">
* {
stroke-linejoin: round;
stroke-linecap: butt
}
</style>
<style type="text/css">
.beatrice-node-pointer {
cursor: pointer;
}
.beatrice-node-pointer:hover {
stroke: gray;
}
.beatrice-node-pointer-selected {
stroke: #ef6767c2;
stroke-width: 3
}
.beatrice-text-pointer {
cursor: pointer;
pointer-events: none
}
.beatrice-text-pointer:hover {
/* On hover, you can change specific attributes that differ from the styles already set. */
}
</style>
</defs>
<g id="figure_1">
<g id="patch_1">
<path d="M 0 576 L 576 576 L 576 0 L 0 0 z " style="fill: #ffffff" />
</g>
<g id="axes_1">
<g id="LineCollection_1">
<path d="M 403.96157 149.258085 L 366.630583 148.159991 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 396.547407 371.476481 L 372.120414 365.421971 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 396.547407 371.476481 L 416.760989 346.999139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 396.547407 371.476481 L 404.238335 402.754731 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 258.035169 326.134244 L 298.859694 332.465911 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 167.453327 366.897955 L 203.987537 347.931194 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 436.352807 416.173738 L 404.238335 402.754731 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 391.514336 242.048236 L 417.560846 259.464346 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 391.514336 242.048236 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 391.514336 242.048236 L 424.070309 219.021704 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 205.541044 459.711101 L 230.303076 436.148139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 160.44225 292.540336 L 167.396334 325.961848 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.679012 107.607273 L 366.630583 148.159991 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 325.345004 219.195921 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 325.345004 219.195921 L 297.530501 194.55124 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 363.075301 201.701937 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 363.075301 201.701937 L 341.462109 170.414842 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 366.630583 148.159991 L 341.462109 170.414842 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 167.396334 325.961848 L 203.987537 347.931194 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 262.111309 181.887977 L 297.530501 194.55124 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 189.293496 262.735141 L 222.122563 261.416721 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 277.95603 462.622539 L 293.230217 421.393405 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 333.932593 269.342364 L 301.9174 258.124913 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 333.932593 269.342364 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 334.666605 338.097578 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 203.987537 347.931194 L 242.811958 354.082183 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 276.198518 388.99868 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 298.859694 332.465911 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 242.811958 354.082183 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 276.198518 388.99868 L 293.230217 421.393405 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 293.230217 421.393405 L 309.530924 454.827332 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 293.230217 421.393405 L 260.004712 426.426278 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 423.853712 378.321354 L 404.238335 402.754731 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 205.214352 217.066163 L 222.122563 261.416721 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 154.047193 423.153273 L 193.933786 408.004355 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 298.859694 332.465911 L 277.79477 306.980241 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 277.79477 306.980241 L 282.261978 282.779534 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 260.004712 426.426278 L 230.303076 436.148139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 260.004712 426.426278 L 228.689744 409.959215 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 260.004712 426.426278 L 261.506403 474.152727 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 301.9174 258.124913 L 282.261978 282.779534 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 254.897329 263.033159 L 282.261978 282.779534 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 254.897329 263.033159 L 222.122563 261.416721 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 321.267463 403.021207 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.750267 380.131711 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.750267 380.131711 L 372.120414 365.421971 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.750267 380.131711 L 351.226176 419.342667 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 404.238335 402.754731 L 400.607869 434.730447 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 193.933786 408.004355 L 230.303076 436.148139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
</g>
<g id="PathCollection_1">
<defs>
<path id="C0_0_b0ffb3bf4a"
d="M 0 11.18034 C 2.965061 11.18034 5.80908 10.002309 7.905694 7.905694 C 10.002309 5.80908 11.18034 2.965061 11.18034 -0 C 11.18034 -2.965061 10.002309 -5.80908 7.905694 -7.905694 C 5.80908 -10.002309 2.965061 -11.18034 0 -11.18034 C -2.965061 -11.18034 -5.80908 -10.002309 -7.905694 -7.905694 C -10.002309 -5.80908 -11.18034 -2.965061 -11.18034 0 C -11.18034 2.965061 -10.002309 5.80908 -7.905694 7.905694 C -5.80908 10.002309 -2.965061 11.18034 0 11.18034 z " />
</defs>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-0"
onclick="(()=&gt;{console.log('node 0')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="403.96157" y="149.258085" style="fill: #e7f5d2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-1"
onclick="(()=&gt;{console.log('node 1')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="396.547407" y="371.476481" style="fill: #fbe8f2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-2"
onclick="(()=&gt;{console.log('node 2')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="258.035169" y="326.134244" style="fill: #cfebaa" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-3"
onclick="(()=&gt;{console.log('node 3')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="167.453327" y="366.897955" style="fill: #f1f6e8" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-4"
onclick="(()=&gt;{console.log('node 4')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="436.352807" y="416.173738" style="fill: #e89ac6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-5"
onclick="(()=&gt;{console.log('node 5')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="391.514336" y="242.048236" style="fill: #f3bcdd" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-6"
onclick="(()=&gt;{console.log('node 6')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="205.541044" y="459.711101" style="fill: #fbd9ec" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-7"
onclick="(()=&gt;{console.log('node 7')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="160.44225" y="292.540336" style="fill: #9ed067" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-8"
onclick="(()=&gt;{console.log('node 8')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="424.070309" y="219.021704" style="fill: #e1f3c7" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-9"
onclick="(()=&gt;{console.log('node 9')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="345.679012" y="107.607273" style="fill: #d0ecad" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-10"
onclick="(()=&gt;{console.log('node 10')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="325.345004" y="219.195921" style="fill: #eff6e4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-11"
onclick="(()=&gt;{console.log('node 11')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="363.075301" y="201.701937" style="fill: #f9f0f5" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-12"
onclick="(()=&gt;{console.log('node 12')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="366.630583" y="148.159991" style="fill: #ebf6dc" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-13"
onclick="(()=&gt;{console.log('node 13')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="341.462109" y="170.414842" style="fill: #fad6ea" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-14"
onclick="(()=&gt;{console.log('node 14')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="167.396334" y="325.961848" style="fill: #f5f7f3" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-15"
onclick="(()=&gt;{console.log('node 15')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="262.111309" y="181.887977" style="fill: #e9f5d6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-16"
onclick="(()=&gt;{console.log('node 16')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="189.293496" y="262.735141" style="fill: #fce5f1" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-17"
onclick="(()=&gt;{console.log('node 17')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="277.95603" y="462.622539" style="fill: #c4e699" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-18"
onclick="(()=&gt;{console.log('node 18')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="333.932593" y="269.342364" style="fill: #f8f4f6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-19"
onclick="(()=&gt;{console.log('node 19')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="416.760989" y="346.999139" style="fill: #eef6e2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-20"
onclick="(()=&gt;{console.log('node 20')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="334.666605" y="338.097578" style="fill: #f5f7f3" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-21"
onclick="(()=&gt;{console.log('node 21')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="203.987537" y="347.931194" style="fill: #edf6df" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-22"
onclick="(()=&gt;{console.log('node 22')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="288.059985" y="363.924972" style="fill: #ddf1c1" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-23"
onclick="(()=&gt;{console.log('node 23')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="276.198518" y="388.99868" style="fill: #f5f7f3" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-24"
onclick="(()=&gt;{console.log('node 24')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="293.230217" y="421.393405" style="fill: #f3f7ef" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-25"
onclick="(()=&gt;{console.log('node 25')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="423.853712" y="378.321354" style="fill: #edf6df" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-26"
onclick="(()=&gt;{console.log('node 26')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="205.214352" y="217.066163" style="fill: #e7f5d2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-27"
onclick="(()=&gt;{console.log('node 27')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="242.811958" y="354.082183" style="fill: #d2ecb0" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-28"
onclick="(()=&gt;{console.log('node 28')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="154.047193" y="423.153273" style="fill: #e6f5d0" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-29"
onclick="(()=&gt;{console.log('node 29')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="298.859694" y="332.465911" style="fill: #ecf6de" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-30"
onclick="(()=&gt;{console.log('node 30')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="277.79477" y="306.980241" style="fill: #eaf5d9" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-31"
onclick="(()=&gt;{console.log('node 31')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="260.004712" y="426.426278" style="fill: #f9f1f5" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-32"
onclick="(()=&gt;{console.log('node 32')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="301.9174" y="258.124913" style="fill: #dbf0bf" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-33"
onclick="(()=&gt;{console.log('node 33')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="254.897329" y="263.033159" style="fill: #eff6e4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-34"
onclick="(()=&gt;{console.log('node 34')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="321.267463" y="403.021207" style="fill: #d0ecad" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-35"
onclick="(()=&gt;{console.log('node 35')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="345.750267" y="380.131711" style="fill: #f9eef4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-36"
onclick="(()=&gt;{console.log('node 36')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="404.238335" y="402.754731" style="fill: #f9eef4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-37"
onclick="(()=&gt;{console.log('node 37')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="355.734145" y="235.68791" style="fill: #f9eef4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-38"
onclick="(()=&gt;{console.log('node 38')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="193.933786" y="408.004355" style="fill: #f0f6e7" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-39"
onclick="(()=&gt;{console.log('node 39')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="297.530501" y="194.55124" style="fill: #f3f6ed" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-40"
onclick="(()=&gt;{console.log('node 40')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="320.07566" y="368.578481" style="fill: #dbf0bf" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-41"
onclick="(()=&gt;{console.log('node 41')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="228.689744" y="409.959215" style="fill: #f9eff4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-42"
onclick="(()=&gt;{console.log('node 42')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="351.226176" y="419.342667" style="fill: #cfebaa" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-43"
onclick="(()=&gt;{console.log('node 43')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="372.120414" y="365.421971" style="fill: #f7f6f7" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-44"
onclick="(()=&gt;{console.log('node 44')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="230.303076" y="436.148139" style="fill: #f8cee6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-45"
onclick="(()=&gt;{console.log('node 45')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="261.506403" y="474.152727" style="fill: #e6f5d0" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-46"
onclick="(()=&gt;{console.log('node 46')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="417.560846" y="259.464346" style="fill: #b7e085" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-47"
onclick="(()=&gt;{console.log('node 47')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="400.607869" y="434.730447" style="fill: #f8cee6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-48"
onclick="(()=&gt;{console.log('node 48')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="282.261978" y="282.779534" style="fill: #d6eeb6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-49"
onclick="(()=&gt;{console.log('node 49')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="222.122563" y="261.416721" style="fill: #edf6df" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-50"
onclick="(()=&gt;{console.log('node 50')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="309.530924" y="454.827332" style="fill: #f9eef4" />
</g>
</g>
<g id="beatrice-text-female-0" onclick="(()=&gt;{console.log('text 0 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(399.786883 152.569335) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-32"
d="M 1844 884 L 3897 884 L 3897 0 L 506 0 L 506 884 L 2209 2388 Q 2438 2594 2547 2791 Q 2656 2988 2656 3200 Q 2656 3528 2436 3728 Q 2216 3928 1850 3928 Q 1569 3928 1234 3808 Q 900 3688 519 3450 L 519 4475 Q 925 4609 1322 4679 Q 1719 4750 2100 4750 Q 2938 4750 3402 4381 Q 3866 4013 3866 3353 Q 3866 2972 3669 2642 Q 3472 2313 2841 1759 L 1844 884 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-32" />
</g>
</g>
</g>
<g id="beatrice-text-female-1" onclick="(()=&gt;{console.log('text 1 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(392.37272 374.787731) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-34"
d="M 2356 3675 L 1038 1722 L 2356 1722 L 2356 3675 z M 2156 4666 L 3494 4666 L 3494 1722 L 4159 1722 L 4159 850 L 3494 850 L 3494 0 L 2356 0 L 2356 850 L 288 850 L 288 1881 L 2156 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-34" />
</g>
</g>
</g>
<g id="beatrice-text-female-2" onclick="(()=&gt;{console.log('text 2 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(253.860482 329.445494) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-37"
d="M 428 4666 L 3944 4666 L 3944 3988 L 2125 0 L 953 0 L 2675 3781 L 428 3781 L 428 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-37" />
</g>
</g>
</g>
<g id="beatrice-text-female-3" onclick="(()=&gt;{console.log('text 3 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(163.27864 370.209205) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-38"
d="M 2228 2088 Q 1891 2088 1709 1903 Q 1528 1719 1528 1375 Q 1528 1031 1709 848 Q 1891 666 2228 666 Q 2563 666 2741 848 Q 2919 1031 2919 1375 Q 2919 1722 2741 1905 Q 2563 2088 2228 2088 z M 1350 2484 Q 925 2613 709 2878 Q 494 3144 494 3541 Q 494 4131 934 4440 Q 1375 4750 2228 4750 Q 3075 4750 3515 4442 Q 3956 4134 3956 3541 Q 3956 3144 3739 2878 Q 3522 2613 3097 2484 Q 3572 2353 3814 2058 Q 4056 1763 4056 1313 Q 4056 619 3595 264 Q 3134 -91 2228 -91 Q 1319 -91 855 264 Q 391 619 391 1313 Q 391 1763 633 2058 Q 875 2353 1350 2484 z M 1631 3419 Q 1631 3141 1786 2991 Q 1941 2841 2228 2841 Q 2509 2841 2662 2991 Q 2816 3141 2816 3419 Q 2816 3697 2662 3845 Q 2509 3994 2228 3994 Q 1941 3994 1786 3844 Q 1631 3694 1631 3419 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-38" />
</g>
</g>
</g>
<g id="beatrice-text-female-4" onclick="(()=&gt;{console.log('text 4 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(428.003432 419.484988) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-31"
d="M 750 831 L 1813 831 L 1813 3847 L 722 3622 L 722 4441 L 1806 4666 L 2950 4666 L 2950 831 L 4013 831 L 4013 0 L 750 0 L 750 831 z "
transform="scale(0.015625)" />
<path id="DejaVuSans-Bold-30"
d="M 2944 2338 Q 2944 3213 2780 3570 Q 2616 3928 2228 3928 Q 1841 3928 1675 3570 Q 1509 3213 1509 2338 Q 1509 1453 1675 1090 Q 1841 728 2228 728 Q 2613 728 2778 1090 Q 2944 1453 2944 2338 z M 4147 2328 Q 4147 1169 3647 539 Q 3147 -91 2228 -91 Q 1306 -91 806 539 Q 306 1169 306 2328 Q 306 3491 806 4120 Q 1306 4750 2228 4750 Q 3147 4750 3647 4120 Q 4147 3491 4147 2328 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-5" onclick="(()=&gt;{console.log('text 5 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(383.164961 245.359486) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-6" onclick="(()=&gt;{console.log('text 6 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(197.191669 463.022351) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-35"
d="M 678 4666 L 3669 4666 L 3669 3781 L 1638 3781 L 1638 3059 Q 1775 3097 1914 3117 Q 2053 3138 2203 3138 Q 3056 3138 3531 2711 Q 4006 2284 4006 1522 Q 4006 766 3489 337 Q 2972 -91 2053 -91 Q 1656 -91 1267 -14 Q 878 63 494 219 L 494 1166 Q 875 947 1217 837 Q 1559 728 1863 728 Q 2300 728 2551 942 Q 2803 1156 2803 1522 Q 2803 1891 2551 2103 Q 2300 2316 1863 2316 Q 1603 2316 1309 2248 Q 1016 2181 678 2041 L 678 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-7" onclick="(()=&gt;{console.log('text 7 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(152.092875 295.851586) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-36"
d="M 2316 2303 Q 2000 2303 1842 2098 Q 1684 1894 1684 1484 Q 1684 1075 1842 870 Q 2000 666 2316 666 Q 2634 666 2792 870 Q 2950 1075 2950 1484 Q 2950 1894 2792 2098 Q 2634 2303 2316 2303 z M 3803 4544 L 3803 3681 Q 3506 3822 3243 3889 Q 2981 3956 2731 3956 Q 2194 3956 1894 3657 Q 1594 3359 1544 2772 Q 1750 2925 1990 3001 Q 2231 3078 2516 3078 Q 3231 3078 3670 2659 Q 4109 2241 4109 1563 Q 4109 813 3618 361 Q 3128 -91 2303 -91 Q 1394 -91 895 523 Q 397 1138 397 2266 Q 397 3422 980 4083 Q 1563 4744 2578 4744 Q 2900 4744 3203 4694 Q 3506 4644 3803 4544 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-8" onclick="(()=&gt;{console.log('text 8 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(415.720934 222.332954) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-9" onclick="(()=&gt;{console.log('text 9 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(337.329637 110.918523) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-10" onclick="(()=&gt;{console.log('text 10 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(316.995629 222.507171) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-39"
d="M 641 103 L 641 966 Q 928 831 1190 764 Q 1453 697 1709 697 Q 2247 697 2547 995 Q 2847 1294 2900 1881 Q 2688 1725 2447 1647 Q 2206 1569 1925 1569 Q 1209 1569 770 1986 Q 331 2403 331 3084 Q 331 3838 820 4291 Q 1309 4744 2131 4744 Q 3044 4744 3544 4128 Q 4044 3513 4044 2388 Q 4044 1231 3459 570 Q 2875 -91 1856 -91 Q 1528 -91 1228 -42 Q 928 6 641 103 z M 2125 2350 Q 2441 2350 2600 2554 Q 2759 2759 2759 3169 Q 2759 3575 2600 3781 Q 2441 3988 2125 3988 Q 1809 3988 1650 3781 Q 1491 3575 1491 3169 Q 1491 2759 1650 2554 Q 1809 2350 2125 2350 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-11" onclick="(()=&gt;{console.log('text 11 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(354.725926 205.013187) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-12" onclick="(()=&gt;{console.log('text 12 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(358.281208 151.471241) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-13" onclick="(()=&gt;{console.log('text 13 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(333.112734 173.726092) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-14" onclick="(()=&gt;{console.log('text 14 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(159.046959 329.273098) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-15" onclick="(()=&gt;{console.log('text 15 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(253.761934 185.199227) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-16" onclick="(()=&gt;{console.log('text 16 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(180.944121 266.046391) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-33"
d="M 2981 2516 Q 3453 2394 3698 2092 Q 3944 1791 3944 1325 Q 3944 631 3412 270 Q 2881 -91 1863 -91 Q 1503 -91 1142 -33 Q 781 25 428 141 L 428 1069 Q 766 900 1098 814 Q 1431 728 1753 728 Q 2231 728 2486 893 Q 2741 1059 2741 1369 Q 2741 1688 2480 1852 Q 2219 2016 1709 2016 L 1228 2016 L 1228 2791 L 1734 2791 Q 2188 2791 2409 2933 Q 2631 3075 2631 3366 Q 2631 3634 2415 3781 Q 2200 3928 1806 3928 Q 1516 3928 1219 3862 Q 922 3797 628 3669 L 628 4550 Q 984 4650 1334 4700 Q 1684 4750 2022 4750 Q 2931 4750 3382 4451 Q 3834 4153 3834 3553 Q 3834 3144 3618 2883 Q 3403 2622 2981 2516 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-17" onclick="(()=&gt;{console.log('text 17 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(269.606655 465.933789) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-18" onclick="(()=&gt;{console.log('text 18 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(325.583218 272.653614) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-19" onclick="(()=&gt;{console.log('text 19 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(408.411614 350.310389) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-20" onclick="(()=&gt;{console.log('text 20 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(326.31723 341.408828) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-21" onclick="(()=&gt;{console.log('text 21 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(195.638162 351.242444) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-22" onclick="(()=&gt;{console.log('text 22 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(279.71061 367.236222) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-23" onclick="(()=&gt;{console.log('text 23 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(267.849143 392.30993) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-24" onclick="(()=&gt;{console.log('text 24 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(284.880842 424.704655) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-25" onclick="(()=&gt;{console.log('text 25 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(415.504337 381.632604) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-26" onclick="(()=&gt;{console.log('text 26 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(196.864977 220.377413) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-27" onclick="(()=&gt;{console.log('text 27 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(234.462583 357.393433) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-28" onclick="(()=&gt;{console.log('text 28 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(145.697818 426.464523) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-29" onclick="(()=&gt;{console.log('text 29 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(290.510319 335.777161) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-30" onclick="(()=&gt;{console.log('text 30 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(269.445395 310.291491) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-31" onclick="(()=&gt;{console.log('text 31 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(251.655337 429.737528) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-32" onclick="(()=&gt;{console.log('text 32 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(293.568025 261.436163) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-33" onclick="(()=&gt;{console.log('text 33 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(246.547954 266.344409) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-34" onclick="(()=&gt;{console.log('text 34 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(312.918088 406.332457) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-35" onclick="(()=&gt;{console.log('text 35 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(337.400892 383.442961) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-36" onclick="(()=&gt;{console.log('text 36 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(395.88896 406.065981) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-37" onclick="(()=&gt;{console.log('text 37 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(347.38477 238.99916) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-38" onclick="(()=&gt;{console.log('text 38 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(185.584411 411.315605) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-39" onclick="(()=&gt;{console.log('text 39 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(289.181126 197.86249) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-40" onclick="(()=&gt;{console.log('text 40 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(311.726285 371.889731) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-41" onclick="(()=&gt;{console.log('text 41 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(220.340369 413.270465) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-42" onclick="(()=&gt;{console.log('text 42 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(342.876801 422.653917) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-43" onclick="(()=&gt;{console.log('text 43 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(363.771039 368.733221) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-44" onclick="(()=&gt;{console.log('text 44 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(221.953701 439.459389) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-45" onclick="(()=&gt;{console.log('text 45 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(253.157028 477.463977) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-46" onclick="(()=&gt;{console.log('text 46 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(409.211471 262.775596) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-47" onclick="(()=&gt;{console.log('text 47 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(392.258494 438.041697) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-48" onclick="(()=&gt;{console.log('text 48 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(273.912603 286.090784) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-49" onclick="(()=&gt;{console.log('text 49 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(213.773188 264.727971) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-50" onclick="(()=&gt;{console.log('text 50 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(301.181549 458.138582) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
</g>
</g>
<defs>
<clipPath id="pe3de578e26">
<rect x="124.405104" y="69.12" width="341.589792" height="443.52" />
</clipPath>
</defs>
</svg>


View File

@@ -0,0 +1,898 @@
<?xml version='1.0' encoding='utf-8'?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ns2="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xlink="http://www.w3.org/1999/xlink" width="100%" height="100%" viewBox="100 60 420 450" version="1.1">
<metadata>
<rdf:RDF>
<ns2:Work>
<dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:date>2023-11-19T11:21:55.705408</dc:date>
<dc:format>image/svg+xml</dc:format>
<dc:creator>
<ns2:Agent>
<dc:title>Matplotlib v3.7.1, https://matplotlib.org/</dc:title>
</ns2:Agent>
</dc:creator>
</ns2:Work>
</rdf:RDF>
</metadata>
<defs>
<style type="text/css">
* {
stroke-linejoin: round;
stroke-linecap: butt
}
</style>
<style type="text/css">
.beatrice-node-pointer {
cursor: pointer;
}
.beatrice-node-pointer:hover {
stroke: gray;
}
.beatrice-node-pointer-selected {
stroke: #ef6767c2;
stroke-width: 3
}
.beatrice-text-pointer {
cursor: pointer;
pointer-events: none
}
.beatrice-text-pointer:hover {
/* On hover, you can change specific attributes that differ from the styles that are already set. */
}
</style>
</defs>
<g id="figure_1">
<g id="patch_1">
<path d="M 0 576 L 576 576 L 576 0 L 0 0 z " style="fill: #ffffff" />
</g>
<g id="axes_1">
<g id="LineCollection_1">
<path d="M 383.475478 335.382791 L 350.123561 336.312105 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 383.475478 335.382791 L 393.562573 295.917472 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 383.475478 335.382791 L 396.396073 371.656412 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 395.592267 184.349842 L 344.302973 166.290216 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 166.614267 246.553188 L 214.405523 244.575019 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 389.299516 416.267064 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 367.134249 434.454954 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 396.396073 371.656412 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 321.091057 403.95329 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 291.699254 114.456198 L 287.429936 148.935339 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 309.72476 346.813492 L 326.464644 303.679747 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 396.396073 371.656412 L 422.276969 403.842356 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 396.396073 371.656412 L 419.504487 334.14189 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 311.713 188.802087 L 278.840744 190.572938 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 311.713 188.802087 L 287.429936 148.935339 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 311.713 188.802087 L 344.302973 166.290216 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 213.805036 285.720019 L 216.196468 317.113868 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 213.805036 285.720019 L 241.321249 255.242558 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 213.805036 285.720019 L 169.41455 268.66905 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 326.464644 303.679747 L 341.073251 272.287852 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 326.464644 303.679747 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 326.464644 303.679747 L 350.123561 336.312105 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 468.104517 290.196764 L 453.314054 329.209099 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 252.522958 107.607273 L 287.429936 148.935339 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 241.817 158.353487 L 278.840744 190.572938 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 278.840744 190.572938 L 264.569363 223.030096 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 190.32114 223.314542 L 214.405523 244.575019 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 233.41271 348.401671 L 216.196468 317.113868 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 122.295483 352.502553 L 162.445269 355.484449 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 158.624602 400.46174 L 162.445269 355.484449 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 214.405523 244.575019 L 241.321249 255.242558 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 214.405523 244.575019 L 207.563253 203.013335 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 264.569363 223.030096 L 298.808991 236.296491 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 264.569363 223.030096 L 241.321249 255.242558 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 264.569363 223.030096 L 236.649711 206.251683 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 220.79649 394.869471 L 213.931911 434.927829 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 220.79649 394.869471 L 208.453065 361.465389 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 241.321249 255.242558 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 257.120877 296.156882 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 453.314054 329.209099 L 419.504487 334.14189 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 287.180764 309.56402 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 287.429936 148.935339 L 321.071206 134.027026 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 300.632415 433.812968 L 321.091057 403.95329 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 216.196468 317.113868 L 181.944484 318.835753 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 216.196468 317.113868 L 208.453065 361.465389 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 419.504487 334.14189 L 436.061109 363.566053 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 366.514998 474.152727 L 367.134249 434.454954 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 208.453065 361.465389 L 162.445269 355.484449 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
</g>
<g id="PathCollection_1">
<defs>
<path id="C0_0_3858269516"
d="M 0 11.18034 C 2.965061 11.18034 5.80908 10.002309 7.905694 7.905694 C 10.002309 5.80908 11.18034 2.965061 11.18034 -0 C 11.18034 -2.965061 10.002309 -5.80908 7.905694 -7.905694 C 5.80908 -10.002309 2.965061 -11.18034 0 -11.18034 C -2.965061 -11.18034 -5.80908 -10.002309 -7.905694 -7.905694 C -10.002309 -5.80908 -11.18034 -2.965061 -11.18034 0 C -11.18034 2.965061 -10.002309 5.80908 -7.905694 7.905694 C -5.80908 10.002309 -2.965061 11.18034 0 11.18034 z " />
</defs>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-0" onclick="(()=&gt;{console.log('node 0')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="383.475478" y="335.382791" style="fill: #fde2bb" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-1" onclick="(()=&gt;{console.log('node 1')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="393.562573" y="295.917472" style="fill: #fdba68" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-2" onclick="(()=&gt;{console.log('node 2')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="395.592267" y="184.349842" style="fill: #fbe9cf" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-3" onclick="(()=&gt;{console.log('node 3')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="166.614267" y="246.553188" style="fill: #7e70ab" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-4" onclick="(()=&gt;{console.log('node 4')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="362.886037" y="395.352171" style="fill: #e8e9f1" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-5" onclick="(()=&gt;{console.log('node 5')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="291.699254" y="114.456198" style="fill: #f9b158" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-6" onclick="(()=&gt;{console.log('node 6')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="309.72476" y="346.813492" style="fill: #e4e5f0" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-7" onclick="(()=&gt;{console.log('node 7')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="396.396073" y="371.656412" style="fill: #fdcc8c" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-8" onclick="(()=&gt;{console.log('node 8')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="311.713" y="188.802087" style="fill: #fedeb3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-9" onclick="(()=&gt;{console.log('node 9')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="213.805036" y="285.720019" style="fill: #bab5d7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-10"
onclick="(()=&gt;{console.log('node 10')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="326.464644" y="303.679747" style="fill: #eaebf2" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-11"
onclick="(()=&gt;{console.log('node 11')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="468.104517" y="290.196764" style="fill: #f7f7f6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-12"
onclick="(()=&gt;{console.log('node 12')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="169.41455" y="268.66905" style="fill: #dfe1ee" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-13"
onclick="(()=&gt;{console.log('node 13')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="252.522958" y="107.607273" style="fill: #eff0f4" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-14"
onclick="(()=&gt;{console.log('node 14')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="241.817" y="158.353487" style="fill: #e58a20" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-15"
onclick="(()=&gt;{console.log('node 15')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="278.840744" y="190.572938" style="fill: #fedbac" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-16"
onclick="(()=&gt;{console.log('node 16')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="190.32114" y="223.314542" style="fill: #dfe1ee" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-17"
onclick="(()=&gt;{console.log('node 17')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="233.41271" y="348.401671" style="fill: #c3c0dd" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-18"
onclick="(()=&gt;{console.log('node 18')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="122.295483" y="352.502553" style="fill: #fed8a6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-19"
onclick="(()=&gt;{console.log('node 19')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="158.624602" y="400.46174" style="fill: #f7f6f3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-20"
onclick="(()=&gt;{console.log('node 20')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="214.405523" y="244.575019" style="fill: #f9f2e9" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-21"
onclick="(()=&gt;{console.log('node 21')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="264.569363" y="223.030096" style="fill: #faecd7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-22"
onclick="(()=&gt;{console.log('node 22')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="220.79649" y="394.869471" style="fill: #fbead2" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-23"
onclick="(()=&gt;{console.log('node 23')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="344.302973" y="166.290216" style="fill: #feddaf" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-24"
onclick="(()=&gt;{console.log('node 24')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="241.321249" y="255.242558" style="fill: #c3c0dd" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-25"
onclick="(()=&gt;{console.log('node 25')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="257.120877" y="296.156882" style="fill: #f9f0e4" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-26"
onclick="(()=&gt;{console.log('node 26')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="207.563253" y="203.013335" style="fill: #fbebd5" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-27"
onclick="(()=&gt;{console.log('node 27')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="453.314054" y="329.209099" style="fill: #fdc47b" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-28"
onclick="(()=&gt;{console.log('node 28')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="350.123561" y="336.312105" style="fill: #fbe9cf" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-29"
onclick="(()=&gt;{console.log('node 29')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="287.180764" y="309.56402" style="fill: #f7f6f3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-30"
onclick="(()=&gt;{console.log('node 30')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="287.429936" y="148.935339" style="fill: #fbebd5" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-31"
onclick="(()=&gt;{console.log('node 31')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="281.878613" y="277.846944" style="fill: #d1d1e6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-32"
onclick="(()=&gt;{console.log('node 32')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="300.632415" y="433.812968" style="fill: #fde2bb" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-33"
onclick="(()=&gt;{console.log('node 33')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="216.196468" y="317.113868" style="fill: #dddfed" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-34"
onclick="(()=&gt;{console.log('node 34')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="419.504487" y="334.14189" style="fill: #fdc57f" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-35"
onclick="(()=&gt;{console.log('node 35')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="321.071206" y="134.027026" style="fill: #fee0b6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-36"
onclick="(()=&gt;{console.log('node 36')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="366.514998" y="474.152727" style="fill: #fdbd6e" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-37"
onclick="(()=&gt;{console.log('node 37')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="208.453065" y="361.465389" style="fill: #cccbe3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-38"
onclick="(()=&gt;{console.log('node 38')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="236.649711" y="206.251683" style="fill: #faecd7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-39"
onclick="(()=&gt;{console.log('node 39')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="298.808991" y="236.296491" style="fill: #fdc57f" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-40"
onclick="(()=&gt;{console.log('node 40')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="181.944484" y="318.835753" style="fill: #f9f0e4" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-41"
onclick="(()=&gt;{console.log('node 41')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="367.134249" y="434.454954" style="fill: #f6f6f7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-42"
onclick="(()=&gt;{console.log('node 42')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="422.276969" y="403.842356" style="fill: #fdbf72" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-43"
onclick="(()=&gt;{console.log('node 43')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="321.091057" y="403.95329" style="fill: #f8f5f1" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-44"
onclick="(()=&gt;{console.log('node 44')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="162.445269" y="355.484449" style="fill: #eaebf2" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-45"
onclick="(()=&gt;{console.log('node 45')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="341.073251" y="272.287852" style="fill: #f6aa4f" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-46"
onclick="(()=&gt;{console.log('node 46')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="389.299516" y="416.267064" style="fill: #de8013" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-47"
onclick="(()=&gt;{console.log('node 47')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="213.931911" y="434.927829" style="fill: #fbb55e" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-48"
onclick="(()=&gt;{console.log('node 48')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="436.061109" y="363.566053" style="fill: #ebecf3" />
</g>
</g>
<g id="beatrice-text-male-0" onclick="(()=&gt;{console.log('text 0 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(379.30079 338.694041) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-31"
d="M 750 831 L 1813 831 L 1813 3847 L 722 3622 L 722 4441 L 1806 4666 L 2950 4666 L 2950 831 L 4013 831 L 4013 0 L 750 0 L 750 831 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
</g>
</g>
</g>
<g id="beatrice-text-male-1" onclick="(()=&gt;{console.log('text 1 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(389.387885 299.228722) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-33"
d="M 2981 2516 Q 3453 2394 3698 2092 Q 3944 1791 3944 1325 Q 3944 631 3412 270 Q 2881 -91 1863 -91 Q 1503 -91 1142 -33 Q 781 25 428 141 L 428 1069 Q 766 900 1098 814 Q 1431 728 1753 728 Q 2231 728 2486 893 Q 2741 1059 2741 1369 Q 2741 1688 2480 1852 Q 2219 2016 1709 2016 L 1228 2016 L 1228 2791 L 1734 2791 Q 2188 2791 2409 2933 Q 2631 3075 2631 3366 Q 2631 3634 2415 3781 Q 2200 3928 1806 3928 Q 1516 3928 1219 3862 Q 922 3797 628 3669 L 628 4550 Q 984 4650 1334 4700 Q 1684 4750 2022 4750 Q 2931 4750 3382 4451 Q 3834 4153 3834 3553 Q 3834 3144 3618 2883 Q 3403 2622 2981 2516 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
</g>
</g>
</g>
<g id="beatrice-text-male-2" onclick="(()=&gt;{console.log('text 2 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(391.41758 187.661092) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-35"
d="M 678 4666 L 3669 4666 L 3669 3781 L 1638 3781 L 1638 3059 Q 1775 3097 1914 3117 Q 2053 3138 2203 3138 Q 3056 3138 3531 2711 Q 4006 2284 4006 1522 Q 4006 766 3489 337 Q 2972 -91 2053 -91 Q 1656 -91 1267 -14 Q 878 63 494 219 L 494 1166 Q 875 947 1217 837 Q 1559 728 1863 728 Q 2300 728 2551 942 Q 2803 1156 2803 1522 Q 2803 1891 2551 2103 Q 2300 2316 1863 2316 Q 1603 2316 1309 2248 Q 1016 2181 678 2041 L 678 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-35" />
</g>
</g>
</g>
<g id="beatrice-text-male-3" onclick="(()=&gt;{console.log('text 3 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(162.43958 249.864438) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-36"
d="M 2316 2303 Q 2000 2303 1842 2098 Q 1684 1894 1684 1484 Q 1684 1075 1842 870 Q 2000 666 2316 666 Q 2634 666 2792 870 Q 2950 1075 2950 1484 Q 2950 1894 2792 2098 Q 2634 2303 2316 2303 z M 3803 4544 L 3803 3681 Q 3506 3822 3243 3889 Q 2981 3956 2731 3956 Q 2194 3956 1894 3657 Q 1594 3359 1544 2772 Q 1750 2925 1990 3001 Q 2231 3078 2516 3078 Q 3231 3078 3670 2659 Q 4109 2241 4109 1563 Q 4109 813 3618 361 Q 3128 -91 2303 -91 Q 1394 -91 895 523 Q 397 1138 397 2266 Q 397 3422 980 4083 Q 1563 4744 2578 4744 Q 2900 4744 3203 4694 Q 3506 4644 3803 4544 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-36" />
</g>
</g>
</g>
<g id="beatrice-text-male-4" onclick="(()=&gt;{console.log('text 4 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(358.71135 398.663421) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-39"
d="M 641 103 L 641 966 Q 928 831 1190 764 Q 1453 697 1709 697 Q 2247 697 2547 995 Q 2847 1294 2900 1881 Q 2688 1725 2447 1647 Q 2206 1569 1925 1569 Q 1209 1569 770 1986 Q 331 2403 331 3084 Q 331 3838 820 4291 Q 1309 4744 2131 4744 Q 3044 4744 3544 4128 Q 4044 3513 4044 2388 Q 4044 1231 3459 570 Q 2875 -91 1856 -91 Q 1528 -91 1228 -42 Q 928 6 641 103 z M 2125 2350 Q 2441 2350 2600 2554 Q 2759 2759 2759 3169 Q 2759 3575 2600 3781 Q 2441 3988 2125 3988 Q 1809 3988 1650 3781 Q 1491 3575 1491 3169 Q 1491 2759 1650 2554 Q 1809 2350 2125 2350 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-39" />
</g>
</g>
</g>
<g id="beatrice-text-male-5" onclick="(()=&gt;{console.log('text 5 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(283.349879 117.767448) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-6" onclick="(()=&gt;{console.log('text 6 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(301.375385 350.124742) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-32"
d="M 1844 884 L 3897 884 L 3897 0 L 506 0 L 506 884 L 2209 2388 Q 2438 2594 2547 2791 Q 2656 2988 2656 3200 Q 2656 3528 2436 3728 Q 2216 3928 1850 3928 Q 1569 3928 1234 3808 Q 900 3688 519 3450 L 519 4475 Q 925 4609 1322 4679 Q 1719 4750 2100 4750 Q 2938 4750 3402 4381 Q 3866 4013 3866 3353 Q 3866 2972 3669 2642 Q 3472 2313 2841 1759 L 1844 884 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-7" onclick="(()=&gt;{console.log('text 7 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(388.046698 374.967662) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-8" onclick="(()=&gt;{console.log('text 8 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(303.363625 192.113337) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-30"
d="M 2944 2338 Q 2944 3213 2780 3570 Q 2616 3928 2228 3928 Q 1841 3928 1675 3570 Q 1509 3213 1509 2338 Q 1509 1453 1675 1090 Q 1841 728 2228 728 Q 2613 728 2778 1090 Q 2944 1453 2944 2338 z M 4147 2328 Q 4147 1169 3647 539 Q 3147 -91 2228 -91 Q 1306 -91 806 539 Q 306 1169 306 2328 Q 306 3491 806 4120 Q 1306 4750 2228 4750 Q 3147 4750 3647 4120 Q 4147 3491 4147 2328 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-9" onclick="(()=&gt;{console.log('text 9 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(205.455661 289.031269) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-10" onclick="(()=&gt;{console.log('text 10 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(318.115269 306.990997) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-11" onclick="(()=&gt;{console.log('text 11 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(459.755142 293.508014) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-12" onclick="(()=&gt;{console.log('text 12 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(161.065175 271.9803) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-38"
d="M 2228 2088 Q 1891 2088 1709 1903 Q 1528 1719 1528 1375 Q 1528 1031 1709 848 Q 1891 666 2228 666 Q 2563 666 2741 848 Q 2919 1031 2919 1375 Q 2919 1722 2741 1905 Q 2563 2088 2228 2088 z M 1350 2484 Q 925 2613 709 2878 Q 494 3144 494 3541 Q 494 4131 934 4440 Q 1375 4750 2228 4750 Q 3075 4750 3515 4442 Q 3956 4134 3956 3541 Q 3956 3144 3739 2878 Q 3522 2613 3097 2484 Q 3572 2353 3814 2058 Q 4056 1763 4056 1313 Q 4056 619 3595 264 Q 3134 -91 2228 -91 Q 1319 -91 855 264 Q 391 619 391 1313 Q 391 1763 633 2058 Q 875 2353 1350 2484 z M 1631 3419 Q 1631 3141 1786 2991 Q 1941 2841 2228 2841 Q 2509 2841 2662 2991 Q 2816 3141 2816 3419 Q 2816 3697 2662 3845 Q 2509 3994 2228 3994 Q 1941 3994 1786 3844 Q 1631 3694 1631 3419 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-13" onclick="(()=&gt;{console.log('text 13 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(244.173583 110.918523) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-14" onclick="(()=&gt;{console.log('text 14 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(233.467625 161.664737) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-15" onclick="(()=&gt;{console.log('text 15 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(270.491369 193.884188) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-16" onclick="(()=&gt;{console.log('text 16 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(181.971765 226.625792) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-34"
d="M 2356 3675 L 1038 1722 L 2356 1722 L 2356 3675 z M 2156 4666 L 3494 4666 L 3494 1722 L 4159 1722 L 4159 850 L 3494 850 L 3494 0 L 2356 0 L 2356 850 L 288 850 L 288 1881 L 2156 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-17" onclick="(()=&gt;{console.log('text 17 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(225.063335 351.712921) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-37"
d="M 428 4666 L 3944 4666 L 3944 3988 L 2125 0 L 953 0 L 2675 3781 L 428 3781 L 428 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-18" onclick="(()=&gt;{console.log('text 18 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(113.946108 355.813803) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-19" onclick="(()=&gt;{console.log('text 19 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(150.275227 403.77299) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-20" onclick="(()=&gt;{console.log('text 20 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(206.056148 247.886269) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-21" onclick="(()=&gt;{console.log('text 21 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(256.219988 226.341346) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-22" onclick="(()=&gt;{console.log('text 22 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(212.447115 398.180721) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-23" onclick="(()=&gt;{console.log('text 23 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(335.953598 169.601466) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-24" onclick="(()=&gt;{console.log('text 24 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(232.971874 258.553808) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-25" onclick="(()=&gt;{console.log('text 25 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(248.771502 299.468132) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-26" onclick="(()=&gt;{console.log('text 26 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(199.213878 206.324585) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-27" onclick="(()=&gt;{console.log('text 27 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(444.964679 332.520349) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-28" onclick="(()=&gt;{console.log('text 28 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(341.774186 339.623355) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-29" onclick="(()=&gt;{console.log('text 29 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(278.831389 312.87527) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-30" onclick="(()=&gt;{console.log('text 30 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(279.080561 152.246589) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-31" onclick="(()=&gt;{console.log('text 31 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(273.529238 281.158194) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-32" onclick="(()=&gt;{console.log('text 32 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(292.28304 437.124218) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-33" onclick="(()=&gt;{console.log('text 33 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(207.847093 320.425118) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-34" onclick="(()=&gt;{console.log('text 34 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(411.155112 337.45314) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-35" onclick="(()=&gt;{console.log('text 35 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(312.721831 137.338276) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-36" onclick="(()=&gt;{console.log('text 36 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(358.165623 477.463977) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-37" onclick="(()=&gt;{console.log('text 37 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(200.10369 364.776639) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-38" onclick="(()=&gt;{console.log('text 38 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(228.300336 209.562933) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-39" onclick="(()=&gt;{console.log('text 39 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(290.459616 239.607741) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-40" onclick="(()=&gt;{console.log('text 40 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(173.595109 322.147003) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-41" onclick="(()=&gt;{console.log('text 41 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(358.784874 437.766204) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-42" onclick="(()=&gt;{console.log('text 42 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(413.927594 407.153606) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-43" onclick="(()=&gt;{console.log('text 43 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(312.741682 407.26454) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-44" onclick="(()=&gt;{console.log('text 44 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(154.095894 358.795699) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-45" onclick="(()=&gt;{console.log('text 45 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(332.723876 275.599102) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-46" onclick="(()=&gt;{console.log('text 46 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(380.950141 419.578314) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-47" onclick="(()=&gt;{console.log('text 47 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(205.582536 438.239079) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-48" onclick="(()=&gt;{console.log('text 48 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(423.537047 366.877303) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
<use xlink:href="#DejaVuSans-Bold-30" x="139.160156" />
</g>
</g>
</g>
</g>
</g>
<defs>
<clipPath id="pd42c8a995e">
<rect x="85.985534" y="69.12" width="418.428931" height="443.52" />
</clipPath>
</defs>
</svg>
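The speaker-map SVGs in this compare (the female map above and the male map that ends here) are Matplotlib-generated files: each node circle carries an inline onclick that only logs to the console, the number labels use the beatrice-text-pointer class with pointer-events: none so clicks fall through to the circle underneath, and the beatrice-node-pointer-selected class appears only in the embedded stylesheet of the file shown here. Below is a minimal TypeScript sketch of how a host page might wire selection onto an inlined copy of such an SVG; the function name, the onSelect callback, and the assumption that the numeric id suffix is the speaker index are illustrative guesses, not code from this repository.

```typescript
// Hedged sketch: bind selection handling to an inlined speaker-map SVG.
// Assumes node groups keep ids like "beatrice-node-male-12" as seen above;
// bindSpeakerMap and onSelect are hypothetical names.
function bindSpeakerMap(svg: SVGSVGElement, onSelect: (speaker: number) => void): void {
    const nodes = Array.from(svg.querySelectorAll<SVGGElement>('g[id^="beatrice-node-"]'));
    for (const node of nodes) {
        node.addEventListener("click", () => {
            // Highlight exactly one node, using the class declared in the SVG's own <style>.
            nodes.forEach((n) => n.classList.remove("beatrice-node-pointer-selected"));
            node.classList.add("beatrice-node-pointer-selected");
            // The trailing number of the group id matches the label drawn on the node.
            const index = Number(node.id.split("-").pop());
            if (!Number.isNaN(index)) onSelect(index);
        });
    }
}
```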


View File

@@ -21,8 +21,8 @@
{
"name": "configArea",
"options": {
"detectors": ["dio", "harvest", "crepe", "crepe_full", "crepe_tiny", "rmvpe", "rmvpe_onnx"],
"inputChunkNums": [1, 2, 8, 16, 24, 32, 40, 48, 64, 80, 96, 112, 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 2048, 4096, 8192, 16384]
"detectors": ["dio", "harvest", "crepe", "crepe_full", "crepe_tiny", "rmvpe", "rmvpe_onnx", "fcpe"],
"inputChunkNums": [1, 2, 4, 6, 8, 16, 24, 32, 40, 48, 64, 80, 96, 112, 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 2048, 4096, 8192, 16384]
}
}
]
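The hunk above extends a configArea entry in one of the GUI settings files: the detectors list gains "fcpe" and inputChunkNums gains the smaller 4 and 6 chunk sizes. Below is a minimal sketch of reading that entry from the settings file, assuming the file is an array of { name, options } objects as the closing bracket suggests; the type name, function name, and URL parameter are hypothetical.

```typescript
// Hedged sketch: fetch the GUI settings JSON and pull out the "configArea" options.
// ConfigAreaOptions only mirrors the two fields visible in the diff above.
type ConfigAreaOptions = {
    detectors: string[];      // pitch detectors offered in the GUI, now including "fcpe"
    inputChunkNums: number[]; // selectable chunk sizes, now including 4 and 6
};

async function loadConfigArea(url: string): Promise<ConfigAreaOptions | undefined> {
    const entries: { name: string; options: ConfigAreaOptions }[] = await (await fetch(url)).json();
    return entries.find((entry) => entry.name === "configArea")?.options;
}
```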

View File

@@ -0,0 +1 @@
web

File diff suppressed because one or more lines are too long

View File

@@ -1,5 +1,9 @@
/*! regenerator-runtime -- Copyright (c) 2014-present, Facebook, Inc. -- license (MIT): https://github.com/facebook/regenerator/blob/main/LICENSE */
/*!**********************!*\
!*** ./src/index.ts ***!
\**********************/
/**
* @license React
* react-dom.production.min.js

File diff suppressed because it is too large

View File

@@ -13,7 +13,14 @@
"build:mod": "cd ../lib && npm run build:dev && cd - && cp -r ../lib/dist/* node_modules/@dannadori/voice-changer-client-js/dist/",
"build:mod_dos": "cd ../lib && npm run build:dev && cd ../demo && npm-run-all build:mod_copy",
"build:mod_copy": "XCOPY ..\\lib\\dist\\* .\\node_modules\\@dannadori\\voice-changer-client-js\\dist\\* /s /e /h /y",
"test": "echo \"Error: no test specified\" && exit 1"
"test": "echo \"Error: no test specified\" && exit 1",
"____ comment ____": "ウェブバージョンのスクリプト",
"clean:web": "rimraf dist_web/",
"webpack:web:prod": "npx webpack --config webpack_web.prod.js && copy .\\public\\info_web .\\dist_web\\info && copy .\\public\\assets\\gui_settings\\edition_web.txt .\\dist_web\\assets\\gui_settings\\edition.txt",
"webpack:web:dev": "npx webpack --config webpack_web.dev.js && copy .\\public\\info_web .\\dist_web\\info && copy .\\public\\assets\\gui_settings\\edition_web.txt .\\dist_web\\assets\\gui_settings\\edition.txt",
"build:web:prod": "npm-run-all clean:web webpack:web:prod",
"build:web:dev": "npm-run-all clean:web webpack:web:dev",
"start:web": "webpack-dev-server --config webpack_web.dev.js"
},
"keywords": [
"voice conversion"
@@ -21,50 +28,51 @@
"author": "wataru.okada@flect.co.jp",
"license": "ISC",
"devDependencies": {
"@babel/core": "^7.23.2",
"@babel/plugin-transform-runtime": "^7.23.2",
"@babel/preset-env": "^7.23.2",
"@babel/preset-react": "^7.22.15",
"@babel/preset-typescript": "^7.23.2",
"@types/node": "^20.8.10",
"@types/react": "^18.2.34",
"@types/react-dom": "^18.2.14",
"autoprefixer": "^10.4.16",
"@babel/core": "^7.24.0",
"@babel/plugin-transform-runtime": "^7.24.0",
"@babel/preset-env": "^7.24.0",
"@babel/preset-react": "^7.23.3",
"@babel/preset-typescript": "^7.23.3",
"@types/node": "^20.11.21",
"@types/react": "^18.2.60",
"@types/react-dom": "^18.2.19",
"autoprefixer": "^10.4.17",
"babel-loader": "^9.1.3",
"copy-webpack-plugin": "^11.0.0",
"css-loader": "^6.8.1",
"eslint": "^8.53.0",
"eslint-config-prettier": "^9.0.0",
"eslint-plugin-prettier": "^5.0.1",
"copy-webpack-plugin": "^12.0.2",
"css-loader": "^6.10.0",
"eslint": "^8.57.0",
"eslint-config-prettier": "^9.1.0",
"eslint-plugin-prettier": "^5.1.3",
"eslint-plugin-react": "^7.33.2",
"eslint-webpack-plugin": "^4.0.1",
"html-loader": "^4.2.0",
"html-webpack-plugin": "^5.5.3",
"html-loader": "^5.0.0",
"html-webpack-plugin": "^5.6.0",
"npm-run-all": "^4.1.5",
"postcss-loader": "^7.3.3",
"postcss-loader": "^8.1.1",
"postcss-nested": "^6.0.1",
"prettier": "^3.0.3",
"prettier": "^3.2.5",
"rimraf": "^5.0.5",
"style-loader": "^3.3.3",
"ts-loader": "^9.5.0",
"style-loader": "^3.3.4",
"ts-loader": "^9.5.1",
"tsconfig-paths": "^4.2.0",
"typescript": "^5.2.2",
"webpack": "^5.89.0",
"typescript": "^5.3.3",
"webpack": "^5.90.3",
"webpack-cli": "^5.1.4",
"webpack-dev-server": "^4.15.1"
"webpack-dev-server": "^5.0.2"
},
"dependencies": {
"@alexanderolsen/libsamplerate-js": "^2.1.0",
"@dannadori/voice-changer-client-js": "^1.0.175",
"@dannadori/worker-manager": "^1.0.12",
"@fortawesome/fontawesome-svg-core": "^6.4.2",
"@fortawesome/free-brands-svg-icons": "^6.4.2",
"@fortawesome/free-regular-svg-icons": "^6.4.2",
"@fortawesome/free-solid-svg-icons": "^6.4.2",
"@alexanderolsen/libsamplerate-js": "^2.1.1",
"@dannadori/voice-changer-client-js": "^1.0.182",
"@dannadori/voice-changer-js": "^1.0.2",
"@dannadori/worker-manager": "^1.0.20",
"@fortawesome/fontawesome-svg-core": "^6.5.1",
"@fortawesome/free-brands-svg-icons": "^6.5.1",
"@fortawesome/free-regular-svg-icons": "^6.5.1",
"@fortawesome/free-solid-svg-icons": "^6.5.1",
"@fortawesome/react-fontawesome": "^0.2.0",
"@tensorflow/tfjs": "^4.12.0",
"onnxruntime-web": "^1.16.1",
"protobufjs": "^7.2.5",
"@tensorflow/tfjs": "^4.17.0",
"onnxruntime-web": "^1.17.1",
"protobufjs": "^7.2.6",
"react": "^18.2.0",
"react-dom": "^18.2.0"
}
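
For context, the "build:mod" scripts above differ only in the copy command they shell out to ("cp -r" on Unix, "XCOPY" on Windows); both rebuild the sibling ../lib package and drop its dist output straight into this demo's node_modules, so local changes to @dannadori/voice-changer-client-js can be tried without publishing. A rough, platform-neutral sketch of that copy step as a single Node/TypeScript script (the file name copy-lib.ts and the copyLibDist helper are illustrative only, not part of the repository):

// copy-lib.ts -- hypothetical cross-platform stand-in for build:mod / build:mod_copy.
// Rebuild ../lib first (npm run build:dev there), then run this from the demo directory.
import { cpSync, existsSync } from "node:fs";
import { resolve } from "node:path";

function copyLibDist(): void {
    const src = resolve(process.cwd(), "../lib/dist");
    const dst = resolve(process.cwd(), "node_modules/@dannadori/voice-changer-client-js/dist");
    if (!existsSync(src)) {
        throw new Error(`build ../lib first: ${src} does not exist`);
    }
    // fs.cpSync (Node >= 16.7) copies the directory tree, overwriting stale files.
    cpSync(src, dst, { recursive: true, force: true });
    console.log(`copied ${src} -> ${dst}`);
}

copyLibDist();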

View File

@@ -0,0 +1,928 @@
<?xml version='1.0' encoding='utf-8'?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ns2="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xlink="http://www.w3.org/1999/xlink" width="100%" height="100%" viewBox="100 60 420 450" version="1.1">
<metadata>
<rdf:RDF>
<ns2:Work>
<dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:date>2023-11-19T11:21:56.358384</dc:date>
<dc:format>image/svg+xml</dc:format>
<dc:creator>
<ns2:Agent>
<dc:title>Matplotlib v3.7.1, https://matplotlib.org/</dc:title>
</ns2:Agent>
</dc:creator>
</ns2:Work>
</rdf:RDF>
</metadata>
<defs>
<style type="text/css">
* {
stroke-linejoin: round;
stroke-linecap: butt
}
</style>
<style type="text/css">
.beatrice-node-pointer {
cursor: pointer;
}
.beatrice-node-pointer:hover {
stroke: gray;
}
.beatrice-node-pointer-selected {
stroke: #ef6767c2;
stroke-width: 3
}
.beatrice-text-pointer {
cursor: pointer;
pointer-events: none
}
.beatrice-text-pointer:hover {
/* On hover, specific attributes that differ from the already-applied styles can be changed here. */
}
</style>
</defs>
<g id="figure_1">
<g id="patch_1">
<path d="M 0 576 L 576 576 L 576 0 L 0 0 z " style="fill: #ffffff" />
</g>
<g id="axes_1">
<g id="LineCollection_1">
<path d="M 403.96157 149.258085 L 366.630583 148.159991 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 396.547407 371.476481 L 372.120414 365.421971 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 396.547407 371.476481 L 416.760989 346.999139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 396.547407 371.476481 L 404.238335 402.754731 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 258.035169 326.134244 L 298.859694 332.465911 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 167.453327 366.897955 L 203.987537 347.931194 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 436.352807 416.173738 L 404.238335 402.754731 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 391.514336 242.048236 L 417.560846 259.464346 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 391.514336 242.048236 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 391.514336 242.048236 L 424.070309 219.021704 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 205.541044 459.711101 L 230.303076 436.148139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 160.44225 292.540336 L 167.396334 325.961848 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.679012 107.607273 L 366.630583 148.159991 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 325.345004 219.195921 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 325.345004 219.195921 L 297.530501 194.55124 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 363.075301 201.701937 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 363.075301 201.701937 L 341.462109 170.414842 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 366.630583 148.159991 L 341.462109 170.414842 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 167.396334 325.961848 L 203.987537 347.931194 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 262.111309 181.887977 L 297.530501 194.55124 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 189.293496 262.735141 L 222.122563 261.416721 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 277.95603 462.622539 L 293.230217 421.393405 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 333.932593 269.342364 L 301.9174 258.124913 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 333.932593 269.342364 L 355.734145 235.68791 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 334.666605 338.097578 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 203.987537 347.931194 L 242.811958 354.082183 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 276.198518 388.99868 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 298.859694 332.465911 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 288.059985 363.924972 L 242.811958 354.082183 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 276.198518 388.99868 L 293.230217 421.393405 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 293.230217 421.393405 L 309.530924 454.827332 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 293.230217 421.393405 L 260.004712 426.426278 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 423.853712 378.321354 L 404.238335 402.754731 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 205.214352 217.066163 L 222.122563 261.416721 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 154.047193 423.153273 L 193.933786 408.004355 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 298.859694 332.465911 L 277.79477 306.980241 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 277.79477 306.980241 L 282.261978 282.779534 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 260.004712 426.426278 L 230.303076 436.148139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 260.004712 426.426278 L 228.689744 409.959215 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 260.004712 426.426278 L 261.506403 474.152727 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 301.9174 258.124913 L 282.261978 282.779534 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 254.897329 263.033159 L 282.261978 282.779534 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 254.897329 263.033159 L 222.122563 261.416721 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 321.267463 403.021207 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.750267 380.131711 L 320.07566 368.578481 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.750267 380.131711 L 372.120414 365.421971 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 345.750267 380.131711 L 351.226176 419.342667 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 404.238335 402.754731 L 400.607869 434.730447 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
<path d="M 193.933786 408.004355 L 230.303076 436.148139 " clip-path="url(#pe3de578e26)"
style="fill: none; stroke: #808080" />
</g>
<g id="PathCollection_1">
<defs>
<path id="C0_0_b0ffb3bf4a"
d="M 0 11.18034 C 2.965061 11.18034 5.80908 10.002309 7.905694 7.905694 C 10.002309 5.80908 11.18034 2.965061 11.18034 -0 C 11.18034 -2.965061 10.002309 -5.80908 7.905694 -7.905694 C 5.80908 -10.002309 2.965061 -11.18034 0 -11.18034 C -2.965061 -11.18034 -5.80908 -10.002309 -7.905694 -7.905694 C -10.002309 -5.80908 -11.18034 -2.965061 -11.18034 0 C -11.18034 2.965061 -10.002309 5.80908 -7.905694 7.905694 C -5.80908 10.002309 -2.965061 11.18034 0 11.18034 z " />
</defs>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-0"
onclick="(()=&gt;{console.log('node 0')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="403.96157" y="149.258085" style="fill: #e7f5d2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-1"
onclick="(()=&gt;{console.log('node 1')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="396.547407" y="371.476481" style="fill: #fbe8f2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-2"
onclick="(()=&gt;{console.log('node 2')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="258.035169" y="326.134244" style="fill: #cfebaa" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-3"
onclick="(()=&gt;{console.log('node 3')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="167.453327" y="366.897955" style="fill: #f1f6e8" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-4"
onclick="(()=&gt;{console.log('node 4')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="436.352807" y="416.173738" style="fill: #e89ac6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-5"
onclick="(()=&gt;{console.log('node 5')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="391.514336" y="242.048236" style="fill: #f3bcdd" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-6"
onclick="(()=&gt;{console.log('node 6')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="205.541044" y="459.711101" style="fill: #fbd9ec" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-7"
onclick="(()=&gt;{console.log('node 7')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="160.44225" y="292.540336" style="fill: #9ed067" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-8"
onclick="(()=&gt;{console.log('node 8')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="424.070309" y="219.021704" style="fill: #e1f3c7" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-9"
onclick="(()=&gt;{console.log('node 9')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="345.679012" y="107.607273" style="fill: #d0ecad" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-10"
onclick="(()=&gt;{console.log('node 10')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="325.345004" y="219.195921" style="fill: #eff6e4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-11"
onclick="(()=&gt;{console.log('node 11')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="363.075301" y="201.701937" style="fill: #f9f0f5" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-12"
onclick="(()=&gt;{console.log('node 12')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="366.630583" y="148.159991" style="fill: #ebf6dc" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-13"
onclick="(()=&gt;{console.log('node 13')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="341.462109" y="170.414842" style="fill: #fad6ea" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-14"
onclick="(()=&gt;{console.log('node 14')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="167.396334" y="325.961848" style="fill: #f5f7f3" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-15"
onclick="(()=&gt;{console.log('node 15')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="262.111309" y="181.887977" style="fill: #e9f5d6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-16"
onclick="(()=&gt;{console.log('node 16')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="189.293496" y="262.735141" style="fill: #fce5f1" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-17"
onclick="(()=&gt;{console.log('node 17')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="277.95603" y="462.622539" style="fill: #c4e699" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-18"
onclick="(()=&gt;{console.log('node 18')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="333.932593" y="269.342364" style="fill: #f8f4f6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-19"
onclick="(()=&gt;{console.log('node 19')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="416.760989" y="346.999139" style="fill: #eef6e2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-20"
onclick="(()=&gt;{console.log('node 20')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="334.666605" y="338.097578" style="fill: #f5f7f3" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-21"
onclick="(()=&gt;{console.log('node 21')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="203.987537" y="347.931194" style="fill: #edf6df" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-22"
onclick="(()=&gt;{console.log('node 22')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="288.059985" y="363.924972" style="fill: #ddf1c1" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-23"
onclick="(()=&gt;{console.log('node 23')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="276.198518" y="388.99868" style="fill: #f5f7f3" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-24"
onclick="(()=&gt;{console.log('node 24')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="293.230217" y="421.393405" style="fill: #f3f7ef" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-25"
onclick="(()=&gt;{console.log('node 25')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="423.853712" y="378.321354" style="fill: #edf6df" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-26"
onclick="(()=&gt;{console.log('node 26')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="205.214352" y="217.066163" style="fill: #e7f5d2" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-27"
onclick="(()=&gt;{console.log('node 27')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="242.811958" y="354.082183" style="fill: #d2ecb0" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-28"
onclick="(()=&gt;{console.log('node 28')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="154.047193" y="423.153273" style="fill: #e6f5d0" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-29"
onclick="(()=&gt;{console.log('node 29')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="298.859694" y="332.465911" style="fill: #ecf6de" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-30"
onclick="(()=&gt;{console.log('node 30')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="277.79477" y="306.980241" style="fill: #eaf5d9" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-31"
onclick="(()=&gt;{console.log('node 31')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="260.004712" y="426.426278" style="fill: #f9f1f5" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-32"
onclick="(()=&gt;{console.log('node 32')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="301.9174" y="258.124913" style="fill: #dbf0bf" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-33"
onclick="(()=&gt;{console.log('node 33')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="254.897329" y="263.033159" style="fill: #eff6e4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-34"
onclick="(()=&gt;{console.log('node 34')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="321.267463" y="403.021207" style="fill: #d0ecad" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-35"
onclick="(()=&gt;{console.log('node 35')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="345.750267" y="380.131711" style="fill: #f9eef4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-36"
onclick="(()=&gt;{console.log('node 36')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="404.238335" y="402.754731" style="fill: #f9eef4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-37"
onclick="(()=&gt;{console.log('node 37')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="355.734145" y="235.68791" style="fill: #f9eef4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-38"
onclick="(()=&gt;{console.log('node 38')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="193.933786" y="408.004355" style="fill: #f0f6e7" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-39"
onclick="(()=&gt;{console.log('node 39')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="297.530501" y="194.55124" style="fill: #f3f6ed" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-40"
onclick="(()=&gt;{console.log('node 40')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="320.07566" y="368.578481" style="fill: #dbf0bf" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-41"
onclick="(()=&gt;{console.log('node 41')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="228.689744" y="409.959215" style="fill: #f9eff4" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-42"
onclick="(()=&gt;{console.log('node 42')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="351.226176" y="419.342667" style="fill: #cfebaa" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-43"
onclick="(()=&gt;{console.log('node 43')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="372.120414" y="365.421971" style="fill: #f7f6f7" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-44"
onclick="(()=&gt;{console.log('node 44')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="230.303076" y="436.148139" style="fill: #f8cee6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-45"
onclick="(()=&gt;{console.log('node 45')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="261.506403" y="474.152727" style="fill: #e6f5d0" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-46"
onclick="(()=&gt;{console.log('node 46')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="417.560846" y="259.464346" style="fill: #b7e085" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-47"
onclick="(()=&gt;{console.log('node 47')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="400.607869" y="434.730447" style="fill: #f8cee6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-48"
onclick="(()=&gt;{console.log('node 48')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="282.261978" y="282.779534" style="fill: #d6eeb6" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-49"
onclick="(()=&gt;{console.log('node 49')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="222.122563" y="261.416721" style="fill: #edf6df" />
</g>
<g clip-path="url(#pe3de578e26)" id="beatrice-node-female-50"
onclick="(()=&gt;{console.log('node 50')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_b0ffb3bf4a" x="309.530924" y="454.827332" style="fill: #f9eef4" />
</g>
</g>
<g id="beatrice-text-female-0" onclick="(()=&gt;{console.log('text 0 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(399.786883 152.569335) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-32"
d="M 1844 884 L 3897 884 L 3897 0 L 506 0 L 506 884 L 2209 2388 Q 2438 2594 2547 2791 Q 2656 2988 2656 3200 Q 2656 3528 2436 3728 Q 2216 3928 1850 3928 Q 1569 3928 1234 3808 Q 900 3688 519 3450 L 519 4475 Q 925 4609 1322 4679 Q 1719 4750 2100 4750 Q 2938 4750 3402 4381 Q 3866 4013 3866 3353 Q 3866 2972 3669 2642 Q 3472 2313 2841 1759 L 1844 884 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-32" />
</g>
</g>
</g>
<g id="beatrice-text-female-1" onclick="(()=&gt;{console.log('text 1 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(392.37272 374.787731) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-34"
d="M 2356 3675 L 1038 1722 L 2356 1722 L 2356 3675 z M 2156 4666 L 3494 4666 L 3494 1722 L 4159 1722 L 4159 850 L 3494 850 L 3494 0 L 2356 0 L 2356 850 L 288 850 L 288 1881 L 2156 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-34" />
</g>
</g>
</g>
<g id="beatrice-text-female-2" onclick="(()=&gt;{console.log('text 2 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(253.860482 329.445494) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-37"
d="M 428 4666 L 3944 4666 L 3944 3988 L 2125 0 L 953 0 L 2675 3781 L 428 3781 L 428 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-37" />
</g>
</g>
</g>
<g id="beatrice-text-female-3" onclick="(()=&gt;{console.log('text 3 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(163.27864 370.209205) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-38"
d="M 2228 2088 Q 1891 2088 1709 1903 Q 1528 1719 1528 1375 Q 1528 1031 1709 848 Q 1891 666 2228 666 Q 2563 666 2741 848 Q 2919 1031 2919 1375 Q 2919 1722 2741 1905 Q 2563 2088 2228 2088 z M 1350 2484 Q 925 2613 709 2878 Q 494 3144 494 3541 Q 494 4131 934 4440 Q 1375 4750 2228 4750 Q 3075 4750 3515 4442 Q 3956 4134 3956 3541 Q 3956 3144 3739 2878 Q 3522 2613 3097 2484 Q 3572 2353 3814 2058 Q 4056 1763 4056 1313 Q 4056 619 3595 264 Q 3134 -91 2228 -91 Q 1319 -91 855 264 Q 391 619 391 1313 Q 391 1763 633 2058 Q 875 2353 1350 2484 z M 1631 3419 Q 1631 3141 1786 2991 Q 1941 2841 2228 2841 Q 2509 2841 2662 2991 Q 2816 3141 2816 3419 Q 2816 3697 2662 3845 Q 2509 3994 2228 3994 Q 1941 3994 1786 3844 Q 1631 3694 1631 3419 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-38" />
</g>
</g>
</g>
<g id="beatrice-text-female-4" onclick="(()=&gt;{console.log('text 4 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(428.003432 419.484988) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-31"
d="M 750 831 L 1813 831 L 1813 3847 L 722 3622 L 722 4441 L 1806 4666 L 2950 4666 L 2950 831 L 4013 831 L 4013 0 L 750 0 L 750 831 z "
transform="scale(0.015625)" />
<path id="DejaVuSans-Bold-30"
d="M 2944 2338 Q 2944 3213 2780 3570 Q 2616 3928 2228 3928 Q 1841 3928 1675 3570 Q 1509 3213 1509 2338 Q 1509 1453 1675 1090 Q 1841 728 2228 728 Q 2613 728 2778 1090 Q 2944 1453 2944 2338 z M 4147 2328 Q 4147 1169 3647 539 Q 3147 -91 2228 -91 Q 1306 -91 806 539 Q 306 1169 306 2328 Q 306 3491 806 4120 Q 1306 4750 2228 4750 Q 3147 4750 3647 4120 Q 4147 3491 4147 2328 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-5" onclick="(()=&gt;{console.log('text 5 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(383.164961 245.359486) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-6" onclick="(()=&gt;{console.log('text 6 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(197.191669 463.022351) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-35"
d="M 678 4666 L 3669 4666 L 3669 3781 L 1638 3781 L 1638 3059 Q 1775 3097 1914 3117 Q 2053 3138 2203 3138 Q 3056 3138 3531 2711 Q 4006 2284 4006 1522 Q 4006 766 3489 337 Q 2972 -91 2053 -91 Q 1656 -91 1267 -14 Q 878 63 494 219 L 494 1166 Q 875 947 1217 837 Q 1559 728 1863 728 Q 2300 728 2551 942 Q 2803 1156 2803 1522 Q 2803 1891 2551 2103 Q 2300 2316 1863 2316 Q 1603 2316 1309 2248 Q 1016 2181 678 2041 L 678 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-7" onclick="(()=&gt;{console.log('text 7 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(152.092875 295.851586) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-36"
d="M 2316 2303 Q 2000 2303 1842 2098 Q 1684 1894 1684 1484 Q 1684 1075 1842 870 Q 2000 666 2316 666 Q 2634 666 2792 870 Q 2950 1075 2950 1484 Q 2950 1894 2792 2098 Q 2634 2303 2316 2303 z M 3803 4544 L 3803 3681 Q 3506 3822 3243 3889 Q 2981 3956 2731 3956 Q 2194 3956 1894 3657 Q 1594 3359 1544 2772 Q 1750 2925 1990 3001 Q 2231 3078 2516 3078 Q 3231 3078 3670 2659 Q 4109 2241 4109 1563 Q 4109 813 3618 361 Q 3128 -91 2303 -91 Q 1394 -91 895 523 Q 397 1138 397 2266 Q 397 3422 980 4083 Q 1563 4744 2578 4744 Q 2900 4744 3203 4694 Q 3506 4644 3803 4544 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-8" onclick="(()=&gt;{console.log('text 8 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(415.720934 222.332954) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-9" onclick="(()=&gt;{console.log('text 9 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(337.329637 110.918523) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-10" onclick="(()=&gt;{console.log('text 10 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(316.995629 222.507171) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-39"
d="M 641 103 L 641 966 Q 928 831 1190 764 Q 1453 697 1709 697 Q 2247 697 2547 995 Q 2847 1294 2900 1881 Q 2688 1725 2447 1647 Q 2206 1569 1925 1569 Q 1209 1569 770 1986 Q 331 2403 331 3084 Q 331 3838 820 4291 Q 1309 4744 2131 4744 Q 3044 4744 3544 4128 Q 4044 3513 4044 2388 Q 4044 1231 3459 570 Q 2875 -91 1856 -91 Q 1528 -91 1228 -42 Q 928 6 641 103 z M 2125 2350 Q 2441 2350 2600 2554 Q 2759 2759 2759 3169 Q 2759 3575 2600 3781 Q 2441 3988 2125 3988 Q 1809 3988 1650 3781 Q 1491 3575 1491 3169 Q 1491 2759 1650 2554 Q 1809 2350 2125 2350 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-11" onclick="(()=&gt;{console.log('text 11 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(354.725926 205.013187) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-12" onclick="(()=&gt;{console.log('text 12 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(358.281208 151.471241) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-13" onclick="(()=&gt;{console.log('text 13 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(333.112734 173.726092) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-14" onclick="(()=&gt;{console.log('text 14 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(159.046959 329.273098) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-15" onclick="(()=&gt;{console.log('text 15 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(253.761934 185.199227) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-16" onclick="(()=&gt;{console.log('text 16 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(180.944121 266.046391) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-33"
d="M 2981 2516 Q 3453 2394 3698 2092 Q 3944 1791 3944 1325 Q 3944 631 3412 270 Q 2881 -91 1863 -91 Q 1503 -91 1142 -33 Q 781 25 428 141 L 428 1069 Q 766 900 1098 814 Q 1431 728 1753 728 Q 2231 728 2486 893 Q 2741 1059 2741 1369 Q 2741 1688 2480 1852 Q 2219 2016 1709 2016 L 1228 2016 L 1228 2791 L 1734 2791 Q 2188 2791 2409 2933 Q 2631 3075 2631 3366 Q 2631 3634 2415 3781 Q 2200 3928 1806 3928 Q 1516 3928 1219 3862 Q 922 3797 628 3669 L 628 4550 Q 984 4650 1334 4700 Q 1684 4750 2022 4750 Q 2931 4750 3382 4451 Q 3834 4153 3834 3553 Q 3834 3144 3618 2883 Q 3403 2622 2981 2516 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-17" onclick="(()=&gt;{console.log('text 17 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(269.606655 465.933789) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-18" onclick="(()=&gt;{console.log('text 18 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(325.583218 272.653614) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-19" onclick="(()=&gt;{console.log('text 19 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(408.411614 350.310389) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-20" onclick="(()=&gt;{console.log('text 20 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(326.31723 341.408828) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-21" onclick="(()=&gt;{console.log('text 21 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(195.638162 351.242444) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-22" onclick="(()=&gt;{console.log('text 22 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(279.71061 367.236222) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-23" onclick="(()=&gt;{console.log('text 23 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(267.849143 392.30993) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-24" onclick="(()=&gt;{console.log('text 24 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(284.880842 424.704655) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-25" onclick="(()=&gt;{console.log('text 25 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(415.504337 381.632604) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-26" onclick="(()=&gt;{console.log('text 26 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(196.864977 220.377413) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-27" onclick="(()=&gt;{console.log('text 27 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(234.462583 357.393433) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-28" onclick="(()=&gt;{console.log('text 28 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(145.697818 426.464523) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-29" onclick="(()=&gt;{console.log('text 29 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(290.510319 335.777161) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-30" onclick="(()=&gt;{console.log('text 30 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(269.445395 310.291491) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-31" onclick="(()=&gt;{console.log('text 31 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(251.655337 429.737528) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-32" onclick="(()=&gt;{console.log('text 32 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(293.568025 261.436163) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-33" onclick="(()=&gt;{console.log('text 33 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(246.547954 266.344409) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-34" onclick="(()=&gt;{console.log('text 34 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(312.918088 406.332457) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-35" onclick="(()=&gt;{console.log('text 35 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(337.400892 383.442961) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-36" onclick="(()=&gt;{console.log('text 36 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(395.88896 406.065981) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-37" onclick="(()=&gt;{console.log('text 37 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(347.38477 238.99916) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-38" onclick="(()=&gt;{console.log('text 38 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(185.584411 411.315605) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-39" onclick="(()=&gt;{console.log('text 39 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(289.181126 197.86249) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-40" onclick="(()=&gt;{console.log('text 40 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(311.726285 371.889731) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-41" onclick="(()=&gt;{console.log('text 41 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(220.340369 413.270465) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-42" onclick="(()=&gt;{console.log('text 42 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(342.876801 422.653917) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-43" onclick="(()=&gt;{console.log('text 43 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(363.771039 368.733221) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-44" onclick="(()=&gt;{console.log('text 44 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(221.953701 439.459389) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-45" onclick="(()=&gt;{console.log('text 45 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(253.157028 477.463977) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-46" onclick="(()=&gt;{console.log('text 46 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(409.211471 262.775596) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-47" onclick="(()=&gt;{console.log('text 47 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(392.258494 438.041697) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-48" onclick="(()=&gt;{console.log('text 48 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(273.912603 286.090784) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-49" onclick="(()=&gt;{console.log('text 49 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(213.773188 264.727971) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-female-50" onclick="(()=&gt;{console.log('text 50 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pe3de578e26)">
<g transform="translate(301.181549 458.138582) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
</g>
</g>
<defs>
<clipPath id="pe3de578e26">
<rect x="124.405104" y="69.12" width="341.589792" height="443.52" />
</clipPath>
</defs>
</svg>
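
The SVG above (and the second map that follows) is the Beatrice speaker selector: each speaker is drawn as a circle whose group carries the beatrice-node-pointer class and an inline onclick handler, and the chosen speaker is highlighted by adding the beatrice-node-pointer-selected class defined in the embedded stylesheet. A minimal sketch, assuming the SVG is inlined into the GUI's DOM, of how a host page could drive that selection from TypeScript (the onSpeakerSelected callback is a hypothetical stand-in, not an API of this repository):

// speaker-map.ts -- hypothetical wiring for an inlined speaker-map SVG.
const SELECTED = "beatrice-node-pointer-selected";

function wireSpeakerMap(onSpeakerSelected: (id: number) => void): void {
    const nodes = document.querySelectorAll<SVGGElement>(".beatrice-node-pointer");
    nodes.forEach((node) => {
        node.addEventListener("click", () => {
            // Only one node carries the selected stroke at a time.
            nodes.forEach((n) => n.classList.remove(SELECTED));
            node.classList.add(SELECTED);
            // Group ids look like "beatrice-node-female-12" / "beatrice-node-male-3".
            const id = Number(node.id.split("-").pop());
            if (!Number.isNaN(id)) {
                onSpeakerSelected(id);
            }
        });
    });
}

// Example: log the chosen speaker index.
wireSpeakerMap((id) => console.log(`speaker ${id} selected`));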


View File

@@ -0,0 +1,898 @@
<?xml version='1.0' encoding='utf-8'?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ns2="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xlink="http://www.w3.org/1999/xlink" width="100%" height="100%" viewBox="100 60 420 450" version="1.1">
<metadata>
<rdf:RDF>
<ns2:Work>
<dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:date>2023-11-19T11:21:55.705408</dc:date>
<dc:format>image/svg+xml</dc:format>
<dc:creator>
<ns2:Agent>
<dc:title>Matplotlib v3.7.1, https://matplotlib.org/</dc:title>
</ns2:Agent>
</dc:creator>
</ns2:Work>
</rdf:RDF>
</metadata>
<defs>
<style type="text/css">
* {
stroke-linejoin: round;
stroke-linecap: butt
}
</style>
<style type="text/css">
.beatrice-node-pointer {
cursor: pointer;
}
.beatrice-node-pointer:hover {
stroke: gray;
}
.beatrice-node-pointer-selected {
stroke: #ef6767c2;
stroke-width: 3
}
.beatrice-text-pointer {
cursor: pointer;
pointer-events: none
}
.beatrice-text-pointer:hover {
/* On hover, specific attributes that differ from the already-applied styles can be changed here. */
}
</style>
</defs>
<g id="figure_1">
<g id="patch_1">
<path d="M 0 576 L 576 576 L 576 0 L 0 0 z " style="fill: #ffffff" />
</g>
<g id="axes_1">
<g id="LineCollection_1">
<path d="M 383.475478 335.382791 L 350.123561 336.312105 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 383.475478 335.382791 L 393.562573 295.917472 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 383.475478 335.382791 L 396.396073 371.656412 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 395.592267 184.349842 L 344.302973 166.290216 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 166.614267 246.553188 L 214.405523 244.575019 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 389.299516 416.267064 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 367.134249 434.454954 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 396.396073 371.656412 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 362.886037 395.352171 L 321.091057 403.95329 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 291.699254 114.456198 L 287.429936 148.935339 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 309.72476 346.813492 L 326.464644 303.679747 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 396.396073 371.656412 L 422.276969 403.842356 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 396.396073 371.656412 L 419.504487 334.14189 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 311.713 188.802087 L 278.840744 190.572938 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 311.713 188.802087 L 287.429936 148.935339 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 311.713 188.802087 L 344.302973 166.290216 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 213.805036 285.720019 L 216.196468 317.113868 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 213.805036 285.720019 L 241.321249 255.242558 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 213.805036 285.720019 L 169.41455 268.66905 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 326.464644 303.679747 L 341.073251 272.287852 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 326.464644 303.679747 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 326.464644 303.679747 L 350.123561 336.312105 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 468.104517 290.196764 L 453.314054 329.209099 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 252.522958 107.607273 L 287.429936 148.935339 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 241.817 158.353487 L 278.840744 190.572938 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 278.840744 190.572938 L 264.569363 223.030096 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 190.32114 223.314542 L 214.405523 244.575019 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 233.41271 348.401671 L 216.196468 317.113868 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 122.295483 352.502553 L 162.445269 355.484449 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 158.624602 400.46174 L 162.445269 355.484449 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 214.405523 244.575019 L 241.321249 255.242558 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 214.405523 244.575019 L 207.563253 203.013335 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 264.569363 223.030096 L 298.808991 236.296491 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 264.569363 223.030096 L 241.321249 255.242558 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 264.569363 223.030096 L 236.649711 206.251683 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 220.79649 394.869471 L 213.931911 434.927829 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 220.79649 394.869471 L 208.453065 361.465389 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 241.321249 255.242558 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 257.120877 296.156882 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 453.314054 329.209099 L 419.504487 334.14189 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 287.180764 309.56402 L 281.878613 277.846944 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 287.429936 148.935339 L 321.071206 134.027026 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 300.632415 433.812968 L 321.091057 403.95329 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 216.196468 317.113868 L 181.944484 318.835753 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 216.196468 317.113868 L 208.453065 361.465389 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 419.504487 334.14189 L 436.061109 363.566053 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 366.514998 474.152727 L 367.134249 434.454954 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
<path d="M 208.453065 361.465389 L 162.445269 355.484449 " clip-path="url(#pd42c8a995e)"
style="fill: none; stroke: #808080" />
</g>
<g id="PathCollection_1">
<defs>
<path id="C0_0_3858269516"
d="M 0 11.18034 C 2.965061 11.18034 5.80908 10.002309 7.905694 7.905694 C 10.002309 5.80908 11.18034 2.965061 11.18034 -0 C 11.18034 -2.965061 10.002309 -5.80908 7.905694 -7.905694 C 5.80908 -10.002309 2.965061 -11.18034 0 -11.18034 C -2.965061 -11.18034 -5.80908 -10.002309 -7.905694 -7.905694 C -10.002309 -5.80908 -11.18034 -2.965061 -11.18034 0 C -11.18034 2.965061 -10.002309 5.80908 -7.905694 7.905694 C -5.80908 10.002309 -2.965061 11.18034 0 11.18034 z " />
</defs>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-0" onclick="(()=&gt;{console.log('node 0')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="383.475478" y="335.382791" style="fill: #fde2bb" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-1" onclick="(()=&gt;{console.log('node 1')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="393.562573" y="295.917472" style="fill: #fdba68" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-2" onclick="(()=&gt;{console.log('node 2')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="395.592267" y="184.349842" style="fill: #fbe9cf" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-3" onclick="(()=&gt;{console.log('node 3')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="166.614267" y="246.553188" style="fill: #7e70ab" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-4" onclick="(()=&gt;{console.log('node 4')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="362.886037" y="395.352171" style="fill: #e8e9f1" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-5" onclick="(()=&gt;{console.log('node 5')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="291.699254" y="114.456198" style="fill: #f9b158" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-6" onclick="(()=&gt;{console.log('node 6')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="309.72476" y="346.813492" style="fill: #e4e5f0" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-7" onclick="(()=&gt;{console.log('node 7')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="396.396073" y="371.656412" style="fill: #fdcc8c" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-8" onclick="(()=&gt;{console.log('node 8')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="311.713" y="188.802087" style="fill: #fedeb3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-9" onclick="(()=&gt;{console.log('node 9')})()"
class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="213.805036" y="285.720019" style="fill: #bab5d7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-10"
onclick="(()=&gt;{console.log('node 10')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="326.464644" y="303.679747" style="fill: #eaebf2" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-11"
onclick="(()=&gt;{console.log('node 11')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="468.104517" y="290.196764" style="fill: #f7f7f6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-12"
onclick="(()=&gt;{console.log('node 12')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="169.41455" y="268.66905" style="fill: #dfe1ee" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-13"
onclick="(()=&gt;{console.log('node 13')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="252.522958" y="107.607273" style="fill: #eff0f4" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-14"
onclick="(()=&gt;{console.log('node 14')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="241.817" y="158.353487" style="fill: #e58a20" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-15"
onclick="(()=&gt;{console.log('node 15')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="278.840744" y="190.572938" style="fill: #fedbac" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-16"
onclick="(()=&gt;{console.log('node 16')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="190.32114" y="223.314542" style="fill: #dfe1ee" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-17"
onclick="(()=&gt;{console.log('node 17')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="233.41271" y="348.401671" style="fill: #c3c0dd" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-18"
onclick="(()=&gt;{console.log('node 18')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="122.295483" y="352.502553" style="fill: #fed8a6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-19"
onclick="(()=&gt;{console.log('node 19')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="158.624602" y="400.46174" style="fill: #f7f6f3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-20"
onclick="(()=&gt;{console.log('node 20')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="214.405523" y="244.575019" style="fill: #f9f2e9" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-21"
onclick="(()=&gt;{console.log('node 21')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="264.569363" y="223.030096" style="fill: #faecd7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-22"
onclick="(()=&gt;{console.log('node 22')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="220.79649" y="394.869471" style="fill: #fbead2" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-23"
onclick="(()=&gt;{console.log('node 23')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="344.302973" y="166.290216" style="fill: #feddaf" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-24"
onclick="(()=&gt;{console.log('node 24')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="241.321249" y="255.242558" style="fill: #c3c0dd" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-25"
onclick="(()=&gt;{console.log('node 25')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="257.120877" y="296.156882" style="fill: #f9f0e4" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-26"
onclick="(()=&gt;{console.log('node 26')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="207.563253" y="203.013335" style="fill: #fbebd5" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-27"
onclick="(()=&gt;{console.log('node 27')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="453.314054" y="329.209099" style="fill: #fdc47b" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-28"
onclick="(()=&gt;{console.log('node 28')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="350.123561" y="336.312105" style="fill: #fbe9cf" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-29"
onclick="(()=&gt;{console.log('node 29')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="287.180764" y="309.56402" style="fill: #f7f6f3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-30"
onclick="(()=&gt;{console.log('node 30')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="287.429936" y="148.935339" style="fill: #fbebd5" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-31"
onclick="(()=&gt;{console.log('node 31')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="281.878613" y="277.846944" style="fill: #d1d1e6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-32"
onclick="(()=&gt;{console.log('node 32')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="300.632415" y="433.812968" style="fill: #fde2bb" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-33"
onclick="(()=&gt;{console.log('node 33')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="216.196468" y="317.113868" style="fill: #dddfed" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-34"
onclick="(()=&gt;{console.log('node 34')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="419.504487" y="334.14189" style="fill: #fdc57f" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-35"
onclick="(()=&gt;{console.log('node 35')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="321.071206" y="134.027026" style="fill: #fee0b6" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-36"
onclick="(()=&gt;{console.log('node 36')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="366.514998" y="474.152727" style="fill: #fdbd6e" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-37"
onclick="(()=&gt;{console.log('node 37')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="208.453065" y="361.465389" style="fill: #cccbe3" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-38"
onclick="(()=&gt;{console.log('node 38')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="236.649711" y="206.251683" style="fill: #faecd7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-39"
onclick="(()=&gt;{console.log('node 39')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="298.808991" y="236.296491" style="fill: #fdc57f" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-40"
onclick="(()=&gt;{console.log('node 40')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="181.944484" y="318.835753" style="fill: #f9f0e4" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-41"
onclick="(()=&gt;{console.log('node 41')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="367.134249" y="434.454954" style="fill: #f6f6f7" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-42"
onclick="(()=&gt;{console.log('node 42')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="422.276969" y="403.842356" style="fill: #fdbf72" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-43"
onclick="(()=&gt;{console.log('node 43')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="321.091057" y="403.95329" style="fill: #f8f5f1" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-44"
onclick="(()=&gt;{console.log('node 44')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="162.445269" y="355.484449" style="fill: #eaebf2" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-45"
onclick="(()=&gt;{console.log('node 45')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="341.073251" y="272.287852" style="fill: #f6aa4f" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-46"
onclick="(()=&gt;{console.log('node 46')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="389.299516" y="416.267064" style="fill: #de8013" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-47"
onclick="(()=&gt;{console.log('node 47')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="213.931911" y="434.927829" style="fill: #fbb55e" />
</g>
<g clip-path="url(#pd42c8a995e)" id="beatrice-node-male-48"
onclick="(()=&gt;{console.log('node 48')})()" class="beatrice-node-pointer">
<use xlink:href="#C0_0_3858269516" x="436.061109" y="363.566053" style="fill: #ebecf3" />
</g>
</g>
<g id="beatrice-text-male-0" onclick="(()=&gt;{console.log('text 0 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(379.30079 338.694041) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-31"
d="M 750 831 L 1813 831 L 1813 3847 L 722 3622 L 722 4441 L 1806 4666 L 2950 4666 L 2950 831 L 4013 831 L 4013 0 L 750 0 L 750 831 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
</g>
</g>
</g>
<g id="beatrice-text-male-1" onclick="(()=&gt;{console.log('text 1 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(389.387885 299.228722) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-33"
d="M 2981 2516 Q 3453 2394 3698 2092 Q 3944 1791 3944 1325 Q 3944 631 3412 270 Q 2881 -91 1863 -91 Q 1503 -91 1142 -33 Q 781 25 428 141 L 428 1069 Q 766 900 1098 814 Q 1431 728 1753 728 Q 2231 728 2486 893 Q 2741 1059 2741 1369 Q 2741 1688 2480 1852 Q 2219 2016 1709 2016 L 1228 2016 L 1228 2791 L 1734 2791 Q 2188 2791 2409 2933 Q 2631 3075 2631 3366 Q 2631 3634 2415 3781 Q 2200 3928 1806 3928 Q 1516 3928 1219 3862 Q 922 3797 628 3669 L 628 4550 Q 984 4650 1334 4700 Q 1684 4750 2022 4750 Q 2931 4750 3382 4451 Q 3834 4153 3834 3553 Q 3834 3144 3618 2883 Q 3403 2622 2981 2516 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
</g>
</g>
</g>
<g id="beatrice-text-male-2" onclick="(()=&gt;{console.log('text 2 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(391.41758 187.661092) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-35"
d="M 678 4666 L 3669 4666 L 3669 3781 L 1638 3781 L 1638 3059 Q 1775 3097 1914 3117 Q 2053 3138 2203 3138 Q 3056 3138 3531 2711 Q 4006 2284 4006 1522 Q 4006 766 3489 337 Q 2972 -91 2053 -91 Q 1656 -91 1267 -14 Q 878 63 494 219 L 494 1166 Q 875 947 1217 837 Q 1559 728 1863 728 Q 2300 728 2551 942 Q 2803 1156 2803 1522 Q 2803 1891 2551 2103 Q 2300 2316 1863 2316 Q 1603 2316 1309 2248 Q 1016 2181 678 2041 L 678 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-35" />
</g>
</g>
</g>
<g id="beatrice-text-male-3" onclick="(()=&gt;{console.log('text 3 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(162.43958 249.864438) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-36"
d="M 2316 2303 Q 2000 2303 1842 2098 Q 1684 1894 1684 1484 Q 1684 1075 1842 870 Q 2000 666 2316 666 Q 2634 666 2792 870 Q 2950 1075 2950 1484 Q 2950 1894 2792 2098 Q 2634 2303 2316 2303 z M 3803 4544 L 3803 3681 Q 3506 3822 3243 3889 Q 2981 3956 2731 3956 Q 2194 3956 1894 3657 Q 1594 3359 1544 2772 Q 1750 2925 1990 3001 Q 2231 3078 2516 3078 Q 3231 3078 3670 2659 Q 4109 2241 4109 1563 Q 4109 813 3618 361 Q 3128 -91 2303 -91 Q 1394 -91 895 523 Q 397 1138 397 2266 Q 397 3422 980 4083 Q 1563 4744 2578 4744 Q 2900 4744 3203 4694 Q 3506 4644 3803 4544 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-36" />
</g>
</g>
</g>
<g id="beatrice-text-male-4" onclick="(()=&gt;{console.log('text 4 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(358.71135 398.663421) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-39"
d="M 641 103 L 641 966 Q 928 831 1190 764 Q 1453 697 1709 697 Q 2247 697 2547 995 Q 2847 1294 2900 1881 Q 2688 1725 2447 1647 Q 2206 1569 1925 1569 Q 1209 1569 770 1986 Q 331 2403 331 3084 Q 331 3838 820 4291 Q 1309 4744 2131 4744 Q 3044 4744 3544 4128 Q 4044 3513 4044 2388 Q 4044 1231 3459 570 Q 2875 -91 1856 -91 Q 1528 -91 1228 -42 Q 928 6 641 103 z M 2125 2350 Q 2441 2350 2600 2554 Q 2759 2759 2759 3169 Q 2759 3575 2600 3781 Q 2441 3988 2125 3988 Q 1809 3988 1650 3781 Q 1491 3575 1491 3169 Q 1491 2759 1650 2554 Q 1809 2350 2125 2350 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-39" />
</g>
</g>
</g>
<g id="beatrice-text-male-5" onclick="(()=&gt;{console.log('text 5 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(283.349879 117.767448) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-6" onclick="(()=&gt;{console.log('text 6 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(301.375385 350.124742) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-32"
d="M 1844 884 L 3897 884 L 3897 0 L 506 0 L 506 884 L 2209 2388 Q 2438 2594 2547 2791 Q 2656 2988 2656 3200 Q 2656 3528 2436 3728 Q 2216 3928 1850 3928 Q 1569 3928 1234 3808 Q 900 3688 519 3450 L 519 4475 Q 925 4609 1322 4679 Q 1719 4750 2100 4750 Q 2938 4750 3402 4381 Q 3866 4013 3866 3353 Q 3866 2972 3669 2642 Q 3472 2313 2841 1759 L 1844 884 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-7" onclick="(()=&gt;{console.log('text 7 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(388.046698 374.967662) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-8" onclick="(()=&gt;{console.log('text 8 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(303.363625 192.113337) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-30"
d="M 2944 2338 Q 2944 3213 2780 3570 Q 2616 3928 2228 3928 Q 1841 3928 1675 3570 Q 1509 3213 1509 2338 Q 1509 1453 1675 1090 Q 1841 728 2228 728 Q 2613 728 2778 1090 Q 2944 1453 2944 2338 z M 4147 2328 Q 4147 1169 3647 539 Q 3147 -91 2228 -91 Q 1306 -91 806 539 Q 306 1169 306 2328 Q 306 3491 806 4120 Q 1306 4750 2228 4750 Q 3147 4750 3647 4120 Q 4147 3491 4147 2328 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-9" onclick="(()=&gt;{console.log('text 9 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(205.455661 289.031269) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-10" onclick="(()=&gt;{console.log('text 10 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(318.115269 306.990997) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-11" onclick="(()=&gt;{console.log('text 11 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(459.755142 293.508014) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-12" onclick="(()=&gt;{console.log('text 12 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(161.065175 271.9803) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-38"
d="M 2228 2088 Q 1891 2088 1709 1903 Q 1528 1719 1528 1375 Q 1528 1031 1709 848 Q 1891 666 2228 666 Q 2563 666 2741 848 Q 2919 1031 2919 1375 Q 2919 1722 2741 1905 Q 2563 2088 2228 2088 z M 1350 2484 Q 925 2613 709 2878 Q 494 3144 494 3541 Q 494 4131 934 4440 Q 1375 4750 2228 4750 Q 3075 4750 3515 4442 Q 3956 4134 3956 3541 Q 3956 3144 3739 2878 Q 3522 2613 3097 2484 Q 3572 2353 3814 2058 Q 4056 1763 4056 1313 Q 4056 619 3595 264 Q 3134 -91 2228 -91 Q 1319 -91 855 264 Q 391 619 391 1313 Q 391 1763 633 2058 Q 875 2353 1350 2484 z M 1631 3419 Q 1631 3141 1786 2991 Q 1941 2841 2228 2841 Q 2509 2841 2662 2991 Q 2816 3141 2816 3419 Q 2816 3697 2662 3845 Q 2509 3994 2228 3994 Q 1941 3994 1786 3844 Q 1631 3694 1631 3419 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-32" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-13" onclick="(()=&gt;{console.log('text 13 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(244.173583 110.918523) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-14" onclick="(()=&gt;{console.log('text 14 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(233.467625 161.664737) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-15" onclick="(()=&gt;{console.log('text 15 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(270.491369 193.884188) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-16" onclick="(()=&gt;{console.log('text 16 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(181.971765 226.625792) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-34"
d="M 2356 3675 L 1038 1722 L 2356 1722 L 2356 3675 z M 2156 4666 L 3494 4666 L 3494 1722 L 4159 1722 L 4159 850 L 3494 850 L 3494 0 L 2356 0 L 2356 850 L 288 850 L 288 1881 L 2156 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-17" onclick="(()=&gt;{console.log('text 17 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(225.063335 351.712921) scale(0.12 -0.12)">
<defs>
<path id="DejaVuSans-Bold-37"
d="M 428 4666 L 3944 4666 L 3944 3988 L 2125 0 L 953 0 L 2675 3781 L 428 3781 L 428 4666 z "
transform="scale(0.015625)" />
</defs>
<use xlink:href="#DejaVuSans-Bold-33" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-18" onclick="(()=&gt;{console.log('text 18 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(113.946108 355.813803) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-19" onclick="(()=&gt;{console.log('text 19 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(150.275227 403.77299) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-20" onclick="(()=&gt;{console.log('text 20 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(206.056148 247.886269) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-21" onclick="(()=&gt;{console.log('text 21 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(256.219988 226.341346) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-22" onclick="(()=&gt;{console.log('text 22 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(212.447115 398.180721) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-23" onclick="(()=&gt;{console.log('text 23 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(335.953598 169.601466) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-24" onclick="(()=&gt;{console.log('text 24 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(232.971874 258.553808) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-25" onclick="(()=&gt;{console.log('text 25 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(248.771502 299.468132) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-34" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-26" onclick="(()=&gt;{console.log('text 26 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(199.213878 206.324585) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-27" onclick="(()=&gt;{console.log('text 27 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(444.964679 332.520349) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-32" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-28" onclick="(()=&gt;{console.log('text 28 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(341.774186 339.623355) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-35" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-29" onclick="(()=&gt;{console.log('text 29 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(278.831389 312.87527) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-36" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-30" onclick="(()=&gt;{console.log('text 30 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(279.080561 152.246589) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-31" onclick="(()=&gt;{console.log('text 31 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(273.529238 281.158194) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-32" onclick="(()=&gt;{console.log('text 32 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(292.28304 437.124218) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-33" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-33" onclick="(()=&gt;{console.log('text 33 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(207.847093 320.425118) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-34" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-34" onclick="(()=&gt;{console.log('text 34 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(411.155112 337.45314) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-35" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-35" onclick="(()=&gt;{console.log('text 35 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(312.721831 137.338276) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-36" onclick="(()=&gt;{console.log('text 36 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(358.165623 477.463977) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-37" onclick="(()=&gt;{console.log('text 37 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(200.10369 364.776639) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-38" onclick="(()=&gt;{console.log('text 38 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(228.300336 209.562933) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-37" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-39" onclick="(()=&gt;{console.log('text 39 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(290.459616 239.607741) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-40" onclick="(()=&gt;{console.log('text 40 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(173.595109 322.147003) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-31" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-41" onclick="(()=&gt;{console.log('text 41 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(358.784874 437.766204) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-36" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-42" onclick="(()=&gt;{console.log('text 42 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(413.927594 407.153606) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-43" onclick="(()=&gt;{console.log('text 43 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(312.741682 407.26454) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-44" onclick="(()=&gt;{console.log('text 44 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(154.095894 358.795699) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-38" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-45" onclick="(()=&gt;{console.log('text 45 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(332.723876 275.599102) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-37" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-46" onclick="(()=&gt;{console.log('text 46 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(380.950141 419.578314) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-38" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-47" onclick="(()=&gt;{console.log('text 47 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(205.582536 438.239079) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-39" />
<use xlink:href="#DejaVuSans-Bold-39" x="69.580078" />
</g>
</g>
</g>
<g id="beatrice-text-male-48" onclick="(()=&gt;{console.log('text 48 clicked')})()"
class="beatrice-text-pointer">
<g clip-path="url(#pd42c8a995e)">
<g transform="translate(423.537047 366.877303) scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-Bold-31" />
<use xlink:href="#DejaVuSans-Bold-30" x="69.580078" />
<use xlink:href="#DejaVuSans-Bold-30" x="139.160156" />
</g>
</g>
</g>
</g>
</g>
<defs>
<clipPath id="pd42c8a995e">
<rect x="85.985534" y="69.12" width="418.428931" height="443.52" />
</clipPath>
</defs>
</svg>


View File

@ -21,8 +21,8 @@
{
"name": "configArea",
"options": {
"detectors": ["dio", "harvest", "crepe", "crepe_full", "crepe_tiny", "rmvpe", "rmvpe_onnx"],
"inputChunkNums": [1, 2, 8, 16, 24, 32, 40, 48, 64, 80, 96, 112, 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 2048, 4096, 8192, 16384]
"detectors": ["dio", "harvest", "crepe", "crepe_full", "crepe_tiny", "rmvpe", "rmvpe_onnx", "fcpe"],
"inputChunkNums": [1, 2, 4, 6, 8, 16, 24, 32, 40, 48, 64, 80, 96, 112, 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 2048, 4096, 8192, 16384]
}
}
]
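The hunk above adds the "fcpe" pitch detector and the 4 and 6 input chunk sizes to the web GUI's configArea options. A rough sketch of how such an options block might be typed and consumed on the client side follows; the type and helper names are illustrative assumptions, not code from the repository.

// Hypothetical typing for the configArea options shown in the diff above.
type ConfigAreaOptions = {
    detectors: string[];      // "dio", "harvest", ..., "rmvpe_onnx", "fcpe"
    inputChunkNums: number[]; // selectable chunk counts, now including 4 and 6
};

// Sketch: choose a pitch detector, falling back to the first configured one
// when the requested detector is not offered by this GUI build.
const pickDetector = (options: ConfigAreaOptions, requested: string): string => {
    return options.detectors.includes(requested) ? requested : options.detectors[0];
};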

View File

@ -0,0 +1 @@
web

View File

@ -0,0 +1 @@
{}

View File

@ -13,6 +13,7 @@ import { AppRootProvider, useAppRoot } from "./001_provider/001_AppRootProvider"
import { useIndexedDB } from "@dannadori/voice-changer-client-js";
import { Demo } from "./components/demo/010_Demo";
import { useMessageBuilder } from "./hooks/useMessageBuilder";
import { removeDB as webDBRemove } from "@dannadori/voice-changer-js";
library.add(fas, far, fab);
@ -57,6 +58,7 @@ const AppStateWrapper = () => {
const onClearCacheClicked = async () => {
await removeDB();
await webDBRemove();
location.reload();
};
const onReloadClicked = () => {

View File

@ -0,0 +1,299 @@
import { ClientState, WebModelSlot } from "@dannadori/voice-changer-client-js";
import { VoiceChangerJSClientConfig, VoiceChangerJSClient, ProgressUpdateType, ProgreeeUpdateCallbcckInfo, VoiceChangerType, InputLengthKey, ResponseTimeInfo } from "@dannadori/voice-changer-js";
import { useEffect, useMemo, useRef, useState } from "react";
export type UseWebInfoProps = {
clientState: ClientState | null;
webEdition: boolean;
};
export const WebModelLoadingState = {
none: "none",
loading: "loading",
warmup: "warmup",
ready: "ready",
} as const;
export type WebModelLoadingState = (typeof WebModelLoadingState)[keyof typeof WebModelLoadingState];
export type VoiceChangerConfig = {
config: VoiceChangerJSClientConfig;
modelUrl: string;
portrait: string;
name: string;
termOfUse: string;
sampleRate: ModelSampleRateStr;
useF0: boolean;
inputLength: InputLengthKey;
progressCallback?: ((data: any) => void) | null;
};
export type WebInfoState = {
voiceChangerConfig: VoiceChangerConfig;
webModelLoadingState: WebModelLoadingState;
progressLoadPreprocess: number;
progressLoadVCModel: number;
progressWarmup: number;
webModelslot: WebModelSlot;
upkey: number;
responseTimeInfo: ResponseTimeInfo;
};
export type WebInfoStateAndMethod = WebInfoState & {
loadVoiceChanagerModel: () => Promise<void>;
setUpkey: (upkey: number) => void;
setVoiceChangerConfig: (voiceChangerType: VoiceChangerType, sampleRate: ModelSampleRateStr, useF0: boolean, inputLength: InputLengthKey) => void;
};
const ModelSampleRateStr = {
"40k": "40k",
"32k": "32k",
"16k": "16k",
} as const;
type ModelSampleRateStr = (typeof ModelSampleRateStr)[keyof typeof ModelSampleRateStr];
const noF0ModelUrl: { [modelType in VoiceChangerType]: { [inputLength in InputLengthKey]: { [sampleRate in ModelSampleRateStr]: string } } } = {
rvcv1: {
"24000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_nof0_24000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_nof0_24000.bin",
},
"16000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_nof0_16000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_nof0_16000.bin",
},
"12000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_nof0_12000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_nof0_12000.bin",
},
"8000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_nof0_8000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_nof0_8000.bin",
},
},
rvcv2: {
"24000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_nof0_24000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_nof0_24000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_nof0_24000.bin",
},
"16000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_nof0_16000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_nof0_16000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_nof0_16000.bin",
},
"12000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_nof0_12000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_nof0_12000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_nof0_12000.bin",
},
"8000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_nof0_8000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_nof0_8000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_nof0_8000.bin",
},
},
};
const f0ModelUrl: { [modelType in VoiceChangerType]: { [inputLength in InputLengthKey]: { [sampleRate in ModelSampleRateStr]: string } } } = {
rvcv1: {
"24000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_f0_24000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_f0_24000.bin",
},
"16000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_f0_16000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_f0_16000.bin",
},
"12000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_f0_12000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_f0_12000.bin",
},
"8000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_40k_f0_8000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv1_amitaro_v1_32k_f0_8000.bin",
},
},
rvcv2: {
"24000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_f0_24000.bin",
// "32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_f0_24000.bin",
"32k": "https://192.168.0.247:8080/models/rvcv2_exp_v2_32k_f0_24000.bin",
// "16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_f0_24000.bin",
// "16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/vctk/rvcv2_vctk_v2_16k_f0_24000.bin",
"16k": "https://192.168.0.247:8080/models/rvcv2_vctk_v2_16k_f0_24000.bin",
},
"16000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_f0_16000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_f0_16000.bin",
// "16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_f0_16000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/vctk/rvcv2_vctk_v2_16k_f0_16000.bin",
},
"12000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_f0_12000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_f0_12000.bin",
// "16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_f0_12000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/vctk/rvcv2_vctk_v2_16k_f0_16000.bin",
},
"8000": {
"40k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_40k_f0_8000.bin",
"32k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_32k_f0_8000.bin",
// "16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/rvcv2_amitaro_v2_16k_f0_8000.bin",
"16k": "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/vctk/rvcv2_vctk_v2_16k_f0_8000.bin",
},
},
};
export const useWebInfo = (props: UseWebInfoProps): WebInfoStateAndMethod => {
const initVoiceChangerType: VoiceChangerType = "rvcv2";
const initInputLength: InputLengthKey = "24000";
const initUseF0 = true;
const initSampleRate: ModelSampleRateStr = "32k";
const progressCallback = (data: ProgreeeUpdateCallbcckInfo) => {
if (data.progressUpdateType === ProgressUpdateType.loadPreprocessModel) {
setProgressLoadPreprocess(data.progress);
} else if (data.progressUpdateType === ProgressUpdateType.loadVCModel) {
setProgressLoadVCModel(data.progress);
} else if (data.progressUpdateType === ProgressUpdateType.checkResponseTime) {
setProgressWarmup(data.progress);
}
};
const generateVoiceChangerConfig = (voiceChangerType: VoiceChangerType, sampleRate: ModelSampleRateStr, useF0: boolean, inputLength: InputLengthKey) => {
let modelUrl;
if (useF0) {
modelUrl = f0ModelUrl[voiceChangerType][inputLength][sampleRate];
} else {
modelUrl = noF0ModelUrl[voiceChangerType][inputLength][sampleRate];
}
const config: VoiceChangerConfig = {
config: {
voiceChangerType: voiceChangerType,
inputLength: inputLength,
baseUrl: window.location.origin,
inputSamplingRate: 48000,
outputSamplingRate: 48000,
},
modelUrl: modelUrl,
portrait: "https://huggingface.co/wok000/vcclient_model/resolve/main/web_model/v_01_alpha/amitaro/amitaro.png",
name: "あみたろ",
termOfUse: "https://huggingface.co/wok000/vcclient_model/raw/main/rvc/amitaro_contentvec_256/term_of_use.txt",
sampleRate: sampleRate,
useF0: useF0,
inputLength: inputLength,
progressCallback,
};
return config;
};
const [voiceChangerConfig, _setVoiceChangerConfig] = useState<VoiceChangerConfig>(generateVoiceChangerConfig(initVoiceChangerType, initSampleRate, initUseF0, initInputLength));
const [webModelLoadingState, setWebModelLoadingState] = useState<WebModelLoadingState>(WebModelLoadingState.none);
const [progressLoadPreprocess, setProgressLoadPreprocess] = useState<number>(0);
const [progressLoadVCModel, setProgressLoadVCModel] = useState<number>(0);
const [progressWarmup, setProgressWarmup] = useState<number>(0);
const [upkey, setUpkey] = useState<number>(0);
const [responseTimeInfo, setResponseTimeInfo] = useState<ResponseTimeInfo>({
responseTime: 0,
realDuration: 0,
rtf: 0,
});
const voiceChangerJSClient = useRef<VoiceChangerJSClient>();
const webModelslot: WebModelSlot = useMemo(() => {
return {
slotIndex: -1,
voiceChangerType: "WebModel",
name: voiceChangerConfig.name,
description: "",
credit: "",
termsOfUseUrl: voiceChangerConfig.termOfUse,
iconFile: voiceChangerConfig.portrait,
speakers: {},
defaultTune: 0,
modelType: "pyTorchRVCNono",
f0: voiceChangerConfig.useF0,
samplingRate: 0,
modelFile: "",
};
}, []);
const setVoiceChangerConfig = (voiceChangerType: VoiceChangerType, sampleRate: ModelSampleRateStr, useF0: boolean, inputLength: InputLengthKey) => {
const config = generateVoiceChangerConfig(voiceChangerType, sampleRate, useF0, inputLength);
_setVoiceChangerConfig(config);
};
// useEffect(() => {
// setVoiceChangerConfig({ ...voiceChangerConfig, progressCallback });
// }, []);
const loadVoiceChanagerModel = async () => {
if (!props.clientState) {
throw new Error("[useWebInfo] clientState is null");
}
if (!props.clientState.initialized) {
console.warn("[useWebInfo] clientState is not initialized yet");
return;
}
if (!props.webEdition) {
console.warn("[useWebInfo] this is not web edition");
return;
}
console.log("loadVoiceChanagerModel1", voiceChangerConfig);
setWebModelLoadingState("loading");
voiceChangerJSClient.current = new VoiceChangerJSClient();
await voiceChangerJSClient.current.initialize(voiceChangerConfig.config, voiceChangerConfig.modelUrl, voiceChangerConfig.progressCallback);
console.log("loadVoiceChanagerModel2");
// warm up
setWebModelLoadingState("warmup");
const warmupResult = await voiceChangerJSClient.current.checkResponseTime();
console.log("warmup result", warmupResult);
// check time
const responseTimeInfo = await voiceChangerJSClient.current.checkResponseTime();
console.log("responseTimeInfo", responseTimeInfo);
setResponseTimeInfo(responseTimeInfo);
props.clientState?.setInternalAudioProcessCallback({
processAudio: async (data: Uint8Array) => {
const audioF32 = new Float32Array(data.buffer);
const res = await voiceChangerJSClient.current!.convert(audioF32);
const audio = new Uint8Array(res[0].buffer);
if (res[1]) {
console.log("RESPONSE!", res[1]);
setResponseTimeInfo(res[1]);
}
return audio;
},
});
setWebModelLoadingState("ready");
};
useEffect(() => {
if (!voiceChangerJSClient.current) {
console.log("setupkey", voiceChangerJSClient.current);
return;
}
voiceChangerJSClient.current.setUpkey(upkey);
}, [upkey]);
useEffect(() => {
console.log("change voice ", voiceChangerConfig);
setProgressLoadPreprocess(0);
setProgressLoadVCModel(0);
setProgressWarmup(0);
loadVoiceChanagerModel();
}, [voiceChangerConfig, props.clientState?.initialized]);
return {
voiceChangerConfig,
webModelLoadingState,
progressLoadPreprocess,
progressLoadVCModel,
progressWarmup,
webModelslot,
upkey,
responseTimeInfo,
loadVoiceChanagerModel,
setUpkey,
setVoiceChangerConfig,
};
};
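The new useWebInfo hook above drives the web edition's model download and warm-up. A minimal usage sketch follows; the component and its wiring are illustrative assumptions (in the actual app the hook is consumed inside AppStateProvider, as a later diff in this change set shows).

// Hypothetical consumer of the hook. clientState would come from the existing
// ClientState provider; webEdition is true only for the web build.
import React from "react";
import { ClientState } from "@dannadori/voice-changer-client-js";
import { useWebInfo } from "../001_globalHooks/100_useWebInfo";

export const WebModelStatus = ({ clientState }: { clientState: ClientState | null }) => {
    const webInfo = useWebInfo({ clientState, webEdition: true });
    // webModelLoadingState moves none -> loading -> warmup -> ready while the
    // model binary is fetched and checkResponseTime() completes.
    return (
        <div>
            <div>state: {webInfo.webModelLoadingState}</div>
            <div>model load progress: {webInfo.progressLoadVCModel}</div>
            <div>warmup progress: {webInfo.progressWarmup}</div>
            <div>RTF: {webInfo.responseTimeInfo.rtf}</div>
        </div>
    );
};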

View File

@ -8,10 +8,10 @@ type Props = {
};
type AppRootValue = {
audioContextState: AudioConfigState
appGuiSettingState: AppGuiSettingStateAndMethod
getGUISetting: () => Promise<void>
}
audioContextState: AudioConfigState;
appGuiSettingState: AppGuiSettingStateAndMethod;
getGUISetting: () => Promise<void>;
};
const AppRootContext = React.createContext<AppRootValue | null>(null);
export const useAppRoot = (): AppRootValue => {
@ -23,17 +23,16 @@ export const useAppRoot = (): AppRootValue => {
};
export const AppRootProvider = ({ children }: Props) => {
const audioContextState = useAudioConfig()
const appGuiSettingState = useAppGuiSetting()
const audioContextState = useAudioConfig();
const appGuiSettingState = useAppGuiSetting();
const getGUISetting = async () => {
await appGuiSettingState.getAppGuiSetting(`/assets/gui_settings/GUI.json`)
}
await appGuiSettingState.getAppGuiSetting(`/assets/gui_settings/GUI.json`);
};
const providerValue: AppRootValue = {
audioContextState,
appGuiSettingState,
getGUISetting
getGUISetting,
};
return <AppRootContext.Provider value={providerValue}>{children}</AppRootContext.Provider>;
};

View File

@ -1,11 +1,11 @@
import { ClientState } from "@dannadori/voice-changer-client-js";
import { VoiceChangerJSClient } from "@dannadori/voice-changer-js";
import React, { useContext, useEffect, useRef } from "react";
import { ReactNode } from "react";
import { useVCClient } from "../001_globalHooks/001_useVCClient";
import { useAppRoot } from "./001_AppRootProvider";
import { useMessageBuilder } from "../hooks/useMessageBuilder";
import { VoiceChangerJSClient } from "./VoiceChangerJSClient";
import { WebInfoStateAndMethod, useWebInfo } from "../001_globalHooks/100_useWebInfo";
type Props = {
children: ReactNode;
@ -14,6 +14,8 @@ type Props = {
type AppStateValue = ClientState & {
audioContext: AudioContext;
initializedRef: React.MutableRefObject<boolean>;
webInfoState: WebInfoStateAndMethod;
webEdition: boolean;
};
const AppStateContext = React.createContext<AppStateValue | null>(null);
@ -27,9 +29,11 @@ export const useAppState = (): AppStateValue => {
export const AppStateProvider = ({ children }: Props) => {
const appRoot = useAppRoot();
const webEdition = appRoot.appGuiSettingState.edition.indexOf("web") >= 0;
const clientState = useVCClient({ audioContext: appRoot.audioContextState.audioContext });
const messageBuilderState = useMessageBuilder();
const voiceChangerJSClient = useRef<VoiceChangerJSClient>();
const webInfoState = useWebInfo({ clientState: clientState.clientState, webEdition: webEdition });
// const voiceChangerJSClient = useRef<VoiceChangerJSClient>();
useEffect(() => {
messageBuilderState.setMessage(__filename, "ioError", {
@ -56,34 +60,19 @@ export const AppStateProvider = ({ children }: Props) => {
}
}, [clientState.clientState.ioErrorCount]);
// useEffect(() => {
// if (clientState.clientState.initialized) {
// voiceChangerJSClient.current = new VoiceChangerJSClient();
// voiceChangerJSClient.current.initialize();
// clientState.clientState.setInternalAudioProcessCallback({
// processAudio: async (data: Uint8Array) => {
// console.log("[CLIENTJS] start --------------------------------------");
// const audioF32 = new Float32Array(data.buffer);
// const converted = await voiceChangerJSClient.current!.convert(audioF32);
// let audio_int16_out = new Int16Array(converted.length);
// for (let i = 0; i < converted.length; i++) {
// audio_int16_out[i] = converted[i] * 32768.0;
// }
// const res = new Uint8Array(audio_int16_out.buffer);
// console.log("AUDIO::::audio_int16_out", audio_int16_out);
// console.log("[CLIENTJS] end --------------------------------------");
// return res;
// },
// });
// }
// }, [clientState.clientState.initialized]);
useEffect(() => {
if (appRoot.appGuiSettingState.edition.indexOf("web") >= 0 && clientState.clientState.initialized) {
clientState.clientState.setWorkletNodeSetting({ ...clientState.clientState.setting.workletNodeSetting, protocol: "internal" });
// webInfoState.loadVoiceChanagerModel(); // invoked via useEffect inside the hook
}
}, [clientState.clientState.initialized]);
const providerValue: AppStateValue = {
audioContext: appRoot.audioContextState.audioContext!,
...clientState.clientState,
initializedRef,
webInfoState,
webEdition,
};
return <AppStateContext.Provider value={providerValue}>{children}</AppStateContext.Provider>;

View File

@ -1,149 +0,0 @@
import { create, ConverterType } from "@alexanderolsen/libsamplerate-js";
import { BlockingQueue } from "./_BlockingQueue";
import { WorkerManager, generateConfig, VoiceChangerProcessorInitializeParams, VoiceChangerProcessorConvertParams, FunctionType, VoiceChangerProcessorResult } from "@dannadori/voice-changer-js";
export class VoiceChangerJSClient {
private wm = new WorkerManager();
private audioBuffer: Float32Array = new Float32Array(0);
private audioInputLength = 24000;
private inputSamplingRate = 48000;
private outputSamplingRate = 48000;
private modelInputSamplingRate = 16000;
private modelOutputSamplingRate = 40000;
private sem = new BlockingQueue<number>();
private crossfadeChunks = 1;
private solaChunks = 0.5;
constructor() {
this.sem.enqueue(0);
}
private lock = async () => {
const num = await this.sem.dequeue();
return num;
};
private unlock = (num: number) => {
this.sem.enqueue(num + 1);
};
initialize = async () => {
console.log("Voice Changer Initializing,,,");
const baseUrl = "http://127.0.0.1:18888";
this.wm = new WorkerManager();
const config = generateConfig();
config.processorURL = `${baseUrl}/process.js`;
config.onnxWasmPaths = `${baseUrl}/`;
await this.wm.init(config);
const initializeParams: VoiceChangerProcessorInitializeParams = {
type: FunctionType.initialize,
inputLength: 24000,
f0_min: 50,
f0_max: 1100,
embPitchUrl: "http://127.0.0.1:18888/models/emb_pit_24000.bin",
rvcv2InputLength: 148,
// rvcv2Url: "http://127.0.0.1:18888/models/rvc2v_24000.bin",
rvcv2Url: "http://127.0.0.1:18888/models/rvc2vnof0_24000.bin",
transfer: [],
};
const res = (await this.wm.execute(initializeParams)) as VoiceChangerProcessorResult;
console.log("Voice Changer Initialized..", res);
};
convert = async (audio: Float32Array): Promise<Float32Array> => {
console.log("convert start....", audio);
const lockNum = await this.lock();
//resample
const audio_16k = await this.resample(audio, this.inputSamplingRate, this.modelInputSamplingRate);
//store data and get target data
//// store
const newAudioBuffer = new Float32Array(this.audioBuffer.length + audio_16k.length);
newAudioBuffer.set(this.audioBuffer);
newAudioBuffer.set(audio_16k, this.audioBuffer.length);
this.audioBuffer = newAudioBuffer;
//// Buffering.....
if (this.audioBuffer.length < this.audioInputLength * 1) {
console.log(`skip convert length:${this.audioBuffer.length}, audio_16k:${audio_16k.length}`);
await this.unlock(lockNum);
return new Float32Array(1);
} else {
console.log(`--------------- convert start... length:${this.audioBuffer.length}, audio_16k:${audio_16k.length}`);
}
//// get chunks
let chunkIndex = 0;
const audioChunks: Float32Array[] = [];
while (true) {
const chunkOffset = chunkIndex * this.audioInputLength - (this.crossfadeChunks + this.solaChunks) * 320 * chunkIndex;
const chunkEnd = chunkOffset + this.audioInputLength;
if (chunkEnd > this.audioBuffer.length) {
this.audioBuffer = this.audioBuffer.slice(chunkOffset);
break;
} else {
const chunk = this.audioBuffer.slice(chunkOffset, chunkEnd);
audioChunks.push(chunk);
}
chunkIndex++;
}
if (audioChunks.length == 0) {
await this.unlock(lockNum);
console.log(`skip convert length:${this.audioBuffer.length}, audio_16k:${audio_16k.length}`);
return new Float32Array(1);
}
//convert (each)
const convetedAudioChunks: Float32Array[] = [];
for (let i = 0; i < audioChunks.length; i++) {
const convertParams: VoiceChangerProcessorConvertParams = {
type: FunctionType.convert,
transfer: [audioChunks[i].buffer],
};
const res = (await this.wm.execute(convertParams)) as VoiceChangerProcessorResult;
const converted = new Float32Array(res.transfer[0] as ArrayBuffer);
console.log(`converted.length:::${i}:${converted.length}`);
convetedAudioChunks.push(converted);
}
//concat
let totalLength = convetedAudioChunks.reduce((prev, cur) => prev + cur.length, 0);
let convetedAudio = new Float32Array(totalLength);
let offset = 0;
for (let chunk of convetedAudioChunks) {
convetedAudio.set(chunk, offset);
offset += chunk.length;
}
console.log(`converted.length:::convetedAudio:${convetedAudio.length}`);
//resample
// const response = await this.resample(convetedAudio, this.params.modelOutputSamplingRate, this.params.outputSamplingRate);
const outputDuration = (this.audioInputLength * audioChunks.length - this.crossfadeChunks * 320) / 16000;
const outputSamples = outputDuration * this.outputSamplingRate;
const convertedOutputRatio = outputSamples / convetedAudio.length;
const realOutputSamplingRate = this.modelOutputSamplingRate * convertedOutputRatio;
console.log(`realOutputSamplingRate:${realOutputSamplingRate}, `, this.modelOutputSamplingRate, convertedOutputRatio);
// const response2 = await this.resample(convetedAudio, this.params.modelOutputSamplingRate, realOutputSamplingRate);
const response2 = await this.resample(convetedAudio, this.modelOutputSamplingRate, this.outputSamplingRate);
console.log(`converted from :${audioChunks.length * this.audioInputLength} to:${convetedAudio.length} to:${response2.length}`);
console.log(`outputDuration :${outputDuration} outputSamples:${outputSamples}, convertedOutputRatio:${convertedOutputRatio}, realOutputSamplingRate:${realOutputSamplingRate}`);
await this.unlock(lockNum);
return response2;
};
// Utility
resample = async (data: Float32Array, srcSampleRate: number, dstSampleRate: number) => {
const converterType = ConverterType.SRC_SINC_BEST_QUALITY;
const nChannels = 1;
const converter = await create(nChannels, srcSampleRate, dstSampleRate, {
converterType: converterType, // default SRC_SINC_FASTEST. see API for more
});
const res = converter.simple(data);
return res;
};
}
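The file above (the hand-rolled worker-based client) is deleted; its role is apparently taken over by the published @dannadori/voice-changer-js package imported elsewhere in this change set. For reference, the chunking it performed sliced the 16 kHz buffer into overlapping windows based on 320-sample frames. A small worked sketch of that offset arithmetic, using the constants from the removed file:

// Reproduces the chunkOffset formula from the removed convert() method:
// chunkOffset = chunkIndex * audioInputLength - (crossfadeChunks + solaChunks) * 320 * chunkIndex
const audioInputLength = 24000;  // samples per chunk at 16 kHz (1.5 s)
const crossfadeChunks = 1;
const solaChunks = 0.5;
const overlap = (crossfadeChunks + solaChunks) * 320; // 480 samples shared between chunks

for (let chunkIndex = 0; chunkIndex < 3; chunkIndex++) {
    const chunkOffset = chunkIndex * audioInputLength - overlap * chunkIndex;
    console.log(`chunk ${chunkIndex}: [${chunkOffset}, ${chunkOffset + audioInputLength})`);
}
// chunk 0: [0, 24000), chunk 1: [23520, 47520), chunk 2: [47040, 71040)
// i.e. each successive chunk re-reads the last 480 samples of the previous one.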

View File

@ -1,7 +1,8 @@
import React, { useContext, useEffect, useState } from "react";
import React, { useContext, useEffect, useState, useRef } from "react";
import { ReactNode } from "react";
import { useAppRoot } from "../../001_provider/001_AppRootProvider";
import { StateControlCheckbox, useStateControlCheckbox } from "../../hooks/useStateControlCheckbox";
import { useAppState } from "../../001_provider/001_AppStateProvider";
export const OpenServerControlCheckbox = "open-server-control-checkbox";
export const OpenModelSettingCheckbox = "open-model-setting-checkbox";
@ -61,6 +62,7 @@ type GuiStateAndMethod = {
setIsAnalyzing: (val: boolean) => void;
setShowPyTorchModelUpload: (val: boolean) => void;
reloadDeviceInfo: () => Promise<void>;
inputAudioDeviceInfo: MediaDeviceInfo[];
outputAudioDeviceInfo: MediaDeviceInfo[];
audioInputForGUI: string;
@ -83,6 +85,12 @@ type GuiStateAndMethod = {
textInputResolve: TextInputResolveType | null;
setTextInputResolve: (val: TextInputResolveType | null) => void;
// for Beatrice
beatriceJVSSpeakerId: number;
beatriceJVSSpeakerPitch: number;
setBeatriceJVSSpeakerId: (id: number) => void;
setBeatriceJVSSpeakerPitch: (pitch: number) => void;
};
const GuiStateContext = React.createContext<GuiStateAndMethod | null>(null);
@ -100,6 +108,7 @@ type TextInputResolveType = {
export const GuiStateProvider = ({ children }: Props) => {
const { appGuiSettingState } = useAppRoot();
const { serverSetting } = useAppState();
const [isConverting, setIsConverting] = useState<boolean>(false);
const [isAnalyzing, setIsAnalyzing] = useState<boolean>(false);
const [modelSlotNum, setModelSlotNum] = useState<number>(0);
@ -117,14 +126,23 @@ export const GuiStateProvider = ({ children }: Props) => {
const [textInputResolve, setTextInputResolve] = useState<TextInputResolveType | null>(null);
const reloadDeviceInfo = async () => {
try {
const ms = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
ms.getTracks().forEach((x) => {
x.stop();
});
} catch (e) {
console.warn("Enumerate device error::", e);
const [beatriceJVSSpeakerId, setBeatriceJVSSpeakerId] = useState<number>(1);
const [beatriceJVSSpeakerPitch, setBeatriceJVSSpeakerPitch] = useState<number>(0);
const checkDeviceAvailable = useRef<boolean>(false);
const _reloadDeviceInfo = async () => {
// Dummy device check: call getUserMedia once so that enumerateDevices returns labeled devices
if (checkDeviceAvailable.current == false) {
try {
const ms = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
ms.getTracks().forEach((x) => {
x.stop();
});
checkDeviceAvailable.current = true;
} catch (e) {
console.warn("Enumerate device error::", e);
}
}
const mediaDeviceInfos = await navigator.mediaDevices.enumerateDevices();
@ -171,14 +189,66 @@ export const GuiStateProvider = ({ children }: Props) => {
// })
return [audioInputs, audioOutputs];
};
const reloadDeviceInfo = async () => {
const audioInfo = await _reloadDeviceInfo();
setInputAudioDeviceInfo(audioInfo[0]);
setOutputAudioDeviceInfo(audioInfo[1]);
};
// useEffect(() => {
// const audioInitialize = async () => {
// await reloadDeviceInfo();
// };
// audioInitialize();
// }, []);
useEffect(() => {
const audioInitialize = async () => {
const audioInfo = await reloadDeviceInfo();
setInputAudioDeviceInfo(audioInfo[0]);
setOutputAudioDeviceInfo(audioInfo[1]);
let isMounted = true;
// Recursively polls the audio devices (rescheduled via setTimeout below)
const pollDevices = async () => {
const checkDeviceDiff = (knownDeviceIds: Set<string>, newDeviceIds: Set<string>) => {
const deleted = new Set([...knownDeviceIds].filter((x) => !newDeviceIds.has(x)));
const added = new Set([...newDeviceIds].filter((x) => !knownDeviceIds.has(x)));
return { deleted, added };
};
try {
const audioInfo = await _reloadDeviceInfo();
const knownAudioinputIds = new Set(inputAudioDeviceInfo.map((x) => x.deviceId));
const newAudioinputIds = new Set(audioInfo[0].map((x) => x.deviceId));
const knownAudiooutputIds = new Set(outputAudioDeviceInfo.map((x) => x.deviceId));
const newAudiooutputIds = new Set(audioInfo[1].map((x) => x.deviceId));
const audioInputDiff = checkDeviceDiff(knownAudioinputIds, newAudioinputIds);
const audioOutputDiff = checkDeviceDiff(knownAudiooutputIds, newAudiooutputIds);
if (audioInputDiff.deleted.size > 0 || audioInputDiff.added.size > 0) {
console.log(`deleted input device: ${[...audioInputDiff.deleted]}`);
console.log(`added input device: ${[...audioInputDiff.added]}`);
setInputAudioDeviceInfo(audioInfo[0]);
}
if (audioOutputDiff.deleted.size > 0 || audioOutputDiff.added.size > 0) {
console.log(`deleted output device: ${[...audioOutputDiff.deleted]}`);
console.log(`added output device: ${[...audioOutputDiff.added]}`);
setOutputAudioDeviceInfo(audioInfo[1]);
}
if (isMounted) {
setTimeout(pollDevices, 1000 * 3);
}
} catch (err) {
console.error("An error occurred during enumeration of devices:", err);
}
};
audioInitialize();
}, []);
pollDevices();
return () => {
isMounted = false;
};
}, [inputAudioDeviceInfo, outputAudioDeviceInfo]);
// (1) Controller Switch
const openServerControlCheckbox = useStateControlCheckbox(OpenServerControlCheckbox);
@ -242,7 +312,25 @@ export const GuiStateProvider = ({ children }: Props) => {
setTimeout(show);
}, [appGuiSettingState.edition]);
const providerValue = {
useEffect(() => {
let dstId;
if (beatriceJVSSpeakerPitch == 0) {
dstId = (beatriceJVSSpeakerId - 1) * 5;
} else if (beatriceJVSSpeakerPitch == 1) {
dstId = (beatriceJVSSpeakerId - 1) * 5 + 1;
} else if (beatriceJVSSpeakerPitch == 2) {
dstId = (beatriceJVSSpeakerId - 1) * 5 + 2;
} else if (beatriceJVSSpeakerPitch == -1) {
dstId = (beatriceJVSSpeakerId - 1) * 5 + 3;
} else if (beatriceJVSSpeakerPitch == -2) {
dstId = (beatriceJVSSpeakerId - 1) * 5 + 4;
} else {
throw new Error(`invalid beatriceJVSSpeakerPitch speaker:${beatriceJVSSpeakerId} pitch:${beatriceJVSSpeakerPitch}`);
}
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, dstId: dstId });
}, [beatriceJVSSpeakerId, beatriceJVSSpeakerPitch]);
const providerValue: GuiStateAndMethod = {
stateControls: {
openServerControlCheckbox,
openModelSettingCheckbox,
@ -296,6 +384,12 @@ export const GuiStateProvider = ({ children }: Props) => {
textInputResolve,
setTextInputResolve,
// For Beatrice
beatriceJVSSpeakerId,
beatriceJVSSpeakerPitch,
setBeatriceJVSSpeakerId,
setBeatriceJVSSpeakerPitch,
};
return <GuiStateContext.Provider value={providerValue}>{children}</GuiStateContext.Provider>;
};
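The new useEffect in the hunk above maps a JVS speaker number together with a pitch step in {-2, -1, 0, +1, +2} onto a flat dstId, reserving five consecutive slots per speaker. An equivalent compact lookup is sketched below for clarity; the repository keeps the if/else chain shown in the diff, so this helper is illustrative only.

// Same mapping as the useEffect above: five dstId slots per speaker,
// ordered pitch 0, +1, +2, -1, -2.
const toDstId = (speakerId: number, pitch: number): number => {
    const pitchOffset: { [pitch: number]: number } = { 0: 0, 1: 1, 2: 2, [-1]: 3, [-2]: 4 };
    if (!(pitch in pitchOffset)) {
        throw new Error(`invalid pitch: ${pitch}`);
    }
    return (speakerId - 1) * 5 + pitchOffset[pitch];
};

// e.g. speaker 3 at pitch -1 -> (3 - 1) * 5 + 3 = 13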

View File

@ -1,4 +1,4 @@
import React from "react"
import React from "react";
import { GuiStateProvider } from "./001_GuiStateProvider";
import { Dialogs } from "./900_Dialogs";
import { ModelSlotControl } from "./b00_ModelSlotControl";
@ -13,5 +13,5 @@ export const Demo = () => {
<ModelSlotControl></ModelSlotControl>
</div>
</GuiStateProvider>
)
}
);
};

View File

@ -19,6 +19,17 @@ export const StartingNoticeDialog = () => {
ja: "(1) 一部の設定変更を行うとgpuを使用していても変換処理が遅くなることが発生します。もしこの現象が発生したらGPUの値を-1にしてから再度0に戻してください。",
en: "(1) When some settings are changed, conversion process becomes slow even when using GPU. If this occurs, reset the GPU value to -1 and then back to 0.",
});
messageBuilderState.setMessage(__filename, "web_edditon_1", { ja: "このWebエディションは実験的バージョンです。", en: "This edition(web) is an experimental Edition." });
messageBuilderState.setMessage(__filename, "web_edditon_2", {
ja: "より高機能・高性能なFullエディションは、",
en: "The more advanced and high-performance Full Edition can be obtained for free from the following GitHub repository.",
});
messageBuilderState.setMessage(__filename, "web_edditon_3", {
ja: "次のgithubリポジトリから無料で取得できます。",
en: "",
});
messageBuilderState.setMessage(__filename, "github", { ja: "github", en: "github" });
messageBuilderState.setMessage(__filename, "click_to_start", { ja: "スタートボタンを押してください。", en: "Click to start" });
messageBuilderState.setMessage(__filename, "start", { ja: "スタート", en: "start" });
}, []);
@ -44,7 +55,6 @@ export const StartingNoticeDialog = () => {
const licenseNoticeLink = useMemo(() => {
return isDesktopApp() ? (
// @ts-ignore
<span
className="link"
onClick={() => {
@ -95,6 +105,34 @@ export const StartingNoticeDialog = () => {
const licenseInfo = <div className="dialog-content-part">{licenseNoticeLink}</div>;
const webEdtionMessage = (
<div className="dialog-content-part">
<div>{messageBuilderState.getMessage(__filename, "web_edditon_1")}</div>
<div>{messageBuilderState.getMessage(__filename, "web_edditon_2")}</div>
<div>{messageBuilderState.getMessage(__filename, "web_edditon_3")}</div>
</div>
);
const githubLink = isDesktopApp() ? (
<span
className="link tooltip"
onClick={() => {
// @ts-ignore
window.electronAPI.openBrowser("https://github.com/w-okada/voice-changer");
}}
>
<img src="./assets/icons/github.svg" />
<div className="tooltip-text">{messageBuilderState.getMessage(__filename, "github")}</div>
<div>github</div>
</span>
) : (
<a className="link tooltip" href="https://github.com/w-okada/voice-changer" target="_blank" rel="noopener noreferrer">
<img src="./assets/icons/github.svg" />
<span>github</span>
<div className="tooltip-text">{messageBuilderState.getMessage(__filename, "github")}</div>
</a>
);
const clickToStartMessage = (
<div className="dialog-content-part">
<div>{messageBuilderState.getMessage(__filename, "click_to_start")}</div>
@ -110,12 +148,19 @@ export const StartingNoticeDialog = () => {
{clickToStartMessage}
</div>
);
const contentForWeb = (
<div className="body-row">
{webEdtionMessage}
{githubLink}
{clickToStartMessage}
</div>
);
return (
<div className="dialog-frame">
<div className="dialog-title">Message</div>
<div className="dialog-content">
{content}
{edition.indexOf("web") >= 0 ? contentForWeb : content}
{closeButtonRow}
</div>
</div>

View File

@ -116,6 +116,20 @@ export const FileUploaderScreen = (props: FileUploaderScreenProps) => {
return x.kind == "beatriceModel";
});
return enough;
} else if (setting.voiceChangerType == "LLVC") {
const enough =
!!setting.files.find((x) => {
return x.kind == "llvcModel";
}) &&
!!setting.files.find((x) => {
return x.kind == "llvcConfig";
});
return enough;
} else if (setting.voiceChangerType == "EasyVC") {
const enough = !!setting.files.find((x) => {
return x.kind == "easyVCModel";
});
return enough;
}
return false;
};
@ -177,6 +191,11 @@ export const FileUploaderScreen = (props: FileUploaderScreenProps) => {
rows.push(generateFileRow(uploadSetting!, "Model(combo)", "diffusionSVCModel", ["ptc"]));
} else if (vcType == "Beatrice") {
rows.push(generateFileRow(uploadSetting!, "Beatrice", "beatriceModel", ["bin"]));
} else if (vcType == "LLVC") {
rows.push(generateFileRow(uploadSetting!, "Model", "llvcModel", ["pth"]));
rows.push(generateFileRow(uploadSetting!, "Config", "llvcConfig", ["json"]));
} else if (vcType == "EasyVC") {
rows.push(generateFileRow(uploadSetting!, "Model", "easyVCModel", ["onnx"]));
}
return rows;
};

View File

@ -5,122 +5,123 @@ import { useAppState } from "../../../001_provider/001_AppStateProvider";
import { useIndexedDB } from "@dannadori/voice-changer-client-js";
import { useMessageBuilder } from "../../../hooks/useMessageBuilder";
export type HeaderAreaProps = {
mainTitle: string
subTitle: string
}
mainTitle: string;
subTitle: string;
};
export const HeaderArea = (props: HeaderAreaProps) => {
const { appGuiSettingState } = useAppRoot()
const messageBuilderState = useMessageBuilder()
const { clearSetting } = useAppState()
const { appGuiSettingState } = useAppRoot();
const messageBuilderState = useMessageBuilder();
const { clearSetting, webInfoState } = useAppState();
const { removeItem } = useIndexedDB({ clientType: null })
const { removeItem, removeDB } = useIndexedDB({ clientType: null });
useMemo(() => {
messageBuilderState.setMessage(__filename, "github", { "ja": "github", "en": "github" })
messageBuilderState.setMessage(__filename, "manual", { "ja": "マニュアル", "en": "manual" })
messageBuilderState.setMessage(__filename, "screenCapture", { "ja": "録画ツール", "en": "Record Screen" })
messageBuilderState.setMessage(__filename, "support", { "ja": "支援", "en": "Donation" })
}, [])
messageBuilderState.setMessage(__filename, "github", { ja: "github", en: "github" });
messageBuilderState.setMessage(__filename, "manual", { ja: "マニュアル", en: "manual" });
messageBuilderState.setMessage(__filename, "screenCapture", { ja: "録画ツール", en: "Record Screen" });
messageBuilderState.setMessage(__filename, "support", { ja: "支援", en: "Donation" });
}, []);
const githubLink = useMemo(() => {
return isDesktopApp() ?
(
// @ts-ignore
<span className="link tooltip" onClick={() => { window.electronAPI.openBrowser("https://github.com/w-okada/voice-changer") }}>
<img src="./assets/icons/github.svg" />
<div className="tooltip-text">{messageBuilderState.getMessage(__filename, "github")}</div>
</span>
)
:
(
<a className="link tooltip" href="https://github.com/w-okada/voice-changer" target="_blank" rel="noopener noreferrer">
<img src="./assets/icons/github.svg" />
<div className="tooltip-text">{messageBuilderState.getMessage(__filename, "github")}</div>
</a>
)
}, [])
return isDesktopApp() ? (
<span
className="link tooltip"
onClick={() => {
// @ts-ignore
window.electronAPI.openBrowser("https://github.com/w-okada/voice-changer");
}}
>
<img src="./assets/icons/github.svg" />
<div className="tooltip-text">{messageBuilderState.getMessage(__filename, "github")}</div>
</span>
) : (
<a className="link tooltip" href="https://github.com/w-okada/voice-changer" target="_blank" rel="noopener noreferrer">
<img src="./assets/icons/github.svg" />
<div className="tooltip-text">{messageBuilderState.getMessage(__filename, "github")}</div>
</a>
);
}, []);
const manualLink = useMemo(() => {
return isDesktopApp() ?
(
// @ts-ignore
<span className="link tooltip" onClick={() => { window.electronAPI.openBrowser("https://github.com/w-okada/voice-changer/blob/master/tutorials/tutorial_rvc_ja_latest.md") }}>
<img src="./assets/icons/help-circle.svg" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "manual")}</div>
</span>
)
:
(
<a className="link tooltip" href="https://github.com/w-okada/voice-changer/blob/master/tutorials/tutorial_rvc_ja_latest.md" target="_blank" rel="noopener noreferrer">
<img src="./assets/icons/help-circle.svg" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "manual")}</div>
</a>
)
}, [])
return isDesktopApp() ? (
<span
className="link tooltip"
onClick={() => {
// @ts-ignore
window.electronAPI.openBrowser("https://github.com/w-okada/voice-changer/blob/master/tutorials/tutorial_rvc_ja_latest.md");
}}
>
<img src="./assets/icons/help-circle.svg" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "manual")}</div>
</span>
) : (
<a className="link tooltip" href="https://github.com/w-okada/voice-changer/blob/master/tutorials/tutorial_rvc_ja_latest.md" target="_blank" rel="noopener noreferrer">
<img src="./assets/icons/help-circle.svg" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "manual")}</div>
</a>
);
}, []);
const toolLink = useMemo(() => {
return isDesktopApp() ?
(
<div className="link tooltip">
<img src="./assets/icons/tool.svg" />
<div className="tooltip-text tooltip-text-100px">
<p onClick={() => {
return isDesktopApp() ? (
<div className="link tooltip">
<img src="./assets/icons/tool.svg" />
<div className="tooltip-text tooltip-text-100px">
<p
onClick={() => {
// @ts-ignore
window.electronAPI.openBrowser("https://w-okada.github.io/screen-recorder-ts/")
}}>
{messageBuilderState.getMessage(__filename, "screenCapture")}
</p>
</div>
window.electronAPI.openBrowser("https://w-okada.github.io/screen-recorder-ts/");
}}
>
{messageBuilderState.getMessage(__filename, "screenCapture")}
</p>
</div>
)
:
(
<div className="link tooltip">
<img src="./assets/icons/tool.svg" />
<div className="tooltip-text tooltip-text-100px">
<p onClick={() => {
window.open("https://w-okada.github.io/screen-recorder-ts/", '_blank', "noreferrer")
}}>
{messageBuilderState.getMessage(__filename, "screenCapture")}
</p>
</div>
</div>
) : (
<div className="link tooltip">
<img src="./assets/icons/tool.svg" />
<div className="tooltip-text tooltip-text-100px">
<p
onClick={() => {
window.open("https://w-okada.github.io/screen-recorder-ts/", "_blank", "noreferrer");
}}
>
{messageBuilderState.getMessage(__filename, "screenCapture")}
</p>
</div>
)
}, [])
</div>
);
}, []);
const coffeeLink = useMemo(() => {
return isDesktopApp() ?
(
// @ts-ignore
<span className="link tooltip" onClick={() => { window.electronAPI.openBrowser("https://www.buymeacoffee.com/wokad") }}>
<img className="donate-img" src="./assets/buymeacoffee.png" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "support")}</div>
</span>
)
:
(
<a className="link tooltip" href="https://www.buymeacoffee.com/wokad" target="_blank" rel="noopener noreferrer">
<img className="donate-img" src="./assets/buymeacoffee.png" />
<div className="tooltip-text tooltip-text-100px">
{messageBuilderState.getMessage(__filename, "support")}
</div>
</a>
)
}, [])
return isDesktopApp() ? (
<span
className="link tooltip"
onClick={() => {
// @ts-ignore
window.electronAPI.openBrowser("https://www.buymeacoffee.com/wokad");
}}
>
<img className="donate-img" src="./assets/buymeacoffee.png" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "support")}</div>
</span>
) : (
<a className="link tooltip" href="https://www.buymeacoffee.com/wokad" target="_blank" rel="noopener noreferrer">
<img className="donate-img" src="./assets/buymeacoffee.png" />
<div className="tooltip-text tooltip-text-100px">{messageBuilderState.getMessage(__filename, "support")}</div>
</a>
);
}, []);
const headerArea = useMemo(() => {
const onClearSettingClicked = async () => {
await clearSetting()
await removeItem(INDEXEDDB_KEY_AUDIO_OUTPUT)
location.reload()
}
await clearSetting();
await removeItem(INDEXEDDB_KEY_AUDIO_OUTPUT);
await removeDB();
location.reload();
};
return (
<div className="headerArea">
@ -139,15 +140,16 @@ export const HeaderArea = (props: HeaderAreaProps) => {
{/* {licenseButton} */}
</span>
<span className="belongings">
<div className="belongings-button" onClick={onClearSettingClicked}>clear setting</div>
<div className="belongings-button" onClick={onClearSettingClicked}>
clear setting
</div>
{/* <div className="belongings-button" onClick={onReloadClicked}>reload</div>
<div className="belongings-button" onClick={onReselectVCClicked}>select vc</div> */}
</span>
</div>
</div>
)
}, [props.subTitle, props.mainTitle, appGuiSettingState.version, appGuiSettingState.edition])
);
}, [props.subTitle, props.mainTitle, appGuiSettingState.version, appGuiSettingState.edition]);
return headerArea
return headerArea;
};

View File

@ -13,7 +13,7 @@ const SortTypes = {
export type SortTypes = (typeof SortTypes)[keyof typeof SortTypes];
export const ModelSlotArea = (_props: ModelSlotAreaProps) => {
const { serverSetting, getInfo } = useAppState();
const { serverSetting, getInfo, webEdition } = useAppState();
const guiState = useGuiState();
const messageBuilderState = useMessageBuilder();
const [sortType, setSortType] = useState<SortTypes>("slot");
@ -116,5 +116,9 @@ export const ModelSlotArea = (_props: ModelSlotAreaProps) => {
);
}, [modelTiles, sortType]);
if (webEdition) {
return <></>;
}
return modelSlotArea;
};

View File

@ -0,0 +1,211 @@
import React, { useEffect, useMemo, useState } from "react";
import { useAppState } from "../../../001_provider/001_AppStateProvider";
import { useMessageBuilder } from "../../../hooks/useMessageBuilder";
export type PortraitProps = {};
const BeatriceSpeakerType = {
male: "male",
female: "female",
} as const;
type BeatriceSpeakerType = (typeof BeatriceSpeakerType)[keyof typeof BeatriceSpeakerType];
// @ts-ignore
import MyIcon from "./female-clickable.svg";
import { useGuiState } from "../001_GuiStateProvider";
export const Portrait = (_props: PortraitProps) => {
const { serverSetting, volume, bufferingTime, performance, webInfoState, webEdition } = useAppState();
const messageBuilderState = useMessageBuilder();
const [beatriceSpeakerType, setBeatriceSpeakerType] = useState<BeatriceSpeakerType>(BeatriceSpeakerType.male);
const [beatriceSpeakerIndexInGender, setBeatriceSpeakerIndexInGender] = useState<string>("");
const { setBeatriceJVSSpeakerId } = useGuiState();
const beatriceMaleSpeakersList = [1, 3, 5, 6, 9, 11, 12, 13, 20, 21, 22, 23, 28, 31, 32, 33, 34, 37, 41, 42, 44, 45, 46, 47, 48, 49, 50, 52, 54, 68, 70, 71, 73, 74, 75, 76, 77, 78, 79, 80, 81, 86, 87, 88, 89, 97, 98, 99, 100];
const beatriceFemaleSpeakersList = [2, 4, 7, 8, 10, 14, 15, 16, 17, 18, 19, 24, 25, 26, 27, 29, 30, 35, 36, 38, 39, 40, 43, 51, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 69, 72, 82, 83, 84, 85, 90, 91, 92, 93, 94, 95, 96];
useMemo(() => {
messageBuilderState.setMessage(__filename, "terms_of_use", { ja: "利用規約", en: "terms of use" });
}, []);
const selected = useMemo(() => {
if (webEdition) {
return webInfoState.webModelslot;
}
if (serverSetting.serverSetting.modelSlotIndex == undefined) {
return;
} else if (serverSetting.serverSetting.modelSlotIndex == "Beatrice-JVS") {
const beatriceJVS = serverSetting.serverSetting.modelSlots.find((v) => v.slotIndex == "Beatrice-JVS");
return beatriceJVS;
} else {
return serverSetting.serverSetting.modelSlots[serverSetting.serverSetting.modelSlotIndex];
}
}, [serverSetting.serverSetting.modelSlotIndex, serverSetting.serverSetting.modelSlots, webEdition]);
useEffect(() => {
const vol = document.getElementById("status-vol") as HTMLSpanElement;
const buf = document.getElementById("status-buf") as HTMLSpanElement;
const res = document.getElementById("status-res") as HTMLSpanElement;
const rtf = document.getElementById("status-rtf") as HTMLSpanElement;
if (!vol || !buf || !res) {
return;
}
vol.innerText = volume.toFixed(4);
if (webEdition) {
buf.innerText = bufferingTime.toString();
res.innerText = webInfoState.responseTimeInfo.responseTime.toString() ?? "0";
rtf.innerText = webInfoState.responseTimeInfo.rtf.toString() ?? "0";
} else {
buf.innerText = bufferingTime.toString();
res.innerText = performance.responseTime.toString();
}
}, [volume, bufferingTime, performance, webInfoState.responseTimeInfo]);
const setSelectedClass = () => {
const iframe = document.querySelector(".beatrice-speaker-graph-container");
if (!iframe) {
return;
}
// @ts-ignore
const svgDoc = iframe.contentDocument;
const gElements = svgDoc.getElementsByClassName("beatrice-node-pointer");
for (const gElement of gElements) {
gElement.classList.remove("beatrice-node-pointer-selected");
}
const keys = beatriceSpeakerIndexInGender.split("-");
const id = keys.pop();
const gender = keys.pop();
if (beatriceSpeakerType == gender) {
const selected = svgDoc.getElementById(`beatrice-node-${gender}-${id}`);
selected?.classList.add("beatrice-node-pointer-selected");
}
};
const setBeatriceSpeakerIndex = async (elementId: string) => {
setBeatriceSpeakerIndexInGender(elementId);
const keys = elementId.split("-");
const id = Number(keys.pop());
const gender = keys.pop();
let beatriceSpeakerIndex;
if (gender == "male") {
beatriceSpeakerIndex = beatriceMaleSpeakersList[id];
} else {
beatriceSpeakerIndex = beatriceFemaleSpeakersList[id];
}
setBeatriceJVSSpeakerId(beatriceSpeakerIndex);
};
useEffect(() => {
const iframe = document.querySelector(".beatrice-speaker-graph-container");
if (!iframe) {
return;
}
const setOnClick = () => {
// @ts-ignore
const svgDoc = iframe.contentDocument;
const gElements = svgDoc.getElementsByClassName("beatrice-node-pointer");
const textElements = svgDoc.getElementsByClassName("beatrice-text-pointer");
for (const gElement of gElements) {
gElement.onclick = () => {
setBeatriceSpeakerIndex(gElement.id);
};
}
for (const textElement of textElements) {
textElement.onclick = () => {
setBeatriceSpeakerIndex(textElement.id);
};
}
setSelectedClass();
};
iframe.addEventListener("load", setOnClick);
return () => {
iframe.removeEventListener("load", setOnClick);
};
}, [selected, beatriceSpeakerType]);
useEffect(() => {
setSelectedClass();
}, [selected, beatriceSpeakerType, beatriceSpeakerIndexInGender]);
const portrait = useMemo(() => {
if (!selected) {
return <></>;
}
let portrait;
if (webEdition) {
const icon = selected.iconFile;
portrait = <img className="portrait" src={icon} alt={selected.name} />;
} else if (selected.slotIndex == "Beatrice-JVS") {
const maleButtonClass = beatriceSpeakerType == "male" ? "button-selected" : "button";
const femaleButtonClass = beatriceSpeakerType == "male" ? "button" : "button-selected";
const svgURL = beatriceSpeakerType == "male" ? "./assets/beatrice/male-clickable.svg" : "./assets/beatrice/female-clickable.svg";
portrait = (
<>
<div className="beatrice-portrait-title">
Beatrice <span className="edition">JVS Corpus</span>
</div>
<div className="beatrice-portrait-select">
<div
className={maleButtonClass}
onClick={() => {
setBeatriceSpeakerType(BeatriceSpeakerType.male);
}}
>
male
</div>
<div
className={femaleButtonClass}
onClick={() => {
setBeatriceSpeakerType(BeatriceSpeakerType.female);
}}
>
female
</div>
</div>
{/* <iframe className="beatrice-speaker-graph-container" style={{ width: "20rem", height: "20rem", border: "none" }} src="./assets/beatrice/female-clickable.svg" title="terms_of_use" sandbox="allow-same-origin allow-scripts allow-popups allow-forms"></iframe> */}
<iframe className="beatrice-speaker-graph-container" src={svgURL} title="beatrice JVS Corpus speakers" sandbox="allow-same-origin allow-scripts allow-popups allow-forms"></iframe>
</>
);
} else {
const modelDir = serverSetting.serverSetting.modelSlotIndex == "Beatrice-JVS" ? "model_dir_static" : serverSetting.serverSetting.voiceChangerParams.model_dir;
const icon = selected.iconFile.length > 0 ? modelDir + "/" + selected.slotIndex + "/" + selected.iconFile.split(/[\/\\]/).pop() : "./assets/icons/human.png";
portrait = <img className="portrait" src={icon} alt={selected.name} />;
}
const selectedTermOfUseUrlLink = selected.termsOfUseUrl ? (
<a href={selected.termsOfUseUrl} target="_blank" rel="noopener noreferrer" className="portrait-area-terms-of-use-link">
[{messageBuilderState.getMessage(__filename, "terms_of_use")}]
</a>
) : (
<></>
);
return (
<div className="portrait-area">
<div className="portrait-container">
{portrait}
<div className="portrait-area-status">
<p>
<span className="portrait-area-status-vctype">{selected.voiceChangerType}</span>
</p>
<p>
vol: <span id="status-vol">0</span>
</p>
<p>
buf: <span id="status-buf">0</span> ms
</p>
<p>
res: <span id="status-res">0</span> ms
</p>
<p>
rtf: <span id="status-rtf">0</span>
</p>
</div>
<div className="portrait-area-terms-of-use">{selectedTermOfUseUrlLink}</div>
</div>
</div>
);
}, [selected, beatriceSpeakerType]);
return portrait;
};

View File

@ -1,12 +1,17 @@
import React, { useMemo } from "react";
import { useAppState } from "../../../001_provider/001_AppStateProvider";
import { useGuiState } from "../001_GuiStateProvider";
export type TuningAreaProps = {};
export const TuningArea = (_props: TuningAreaProps) => {
const { serverSetting } = useAppState();
const { serverSetting, webInfoState, webEdition } = useAppState();
const { setBeatriceJVSSpeakerPitch, beatriceJVSSpeakerPitch } = useGuiState();
const selected = useMemo(() => {
if (webEdition) {
return webInfoState.webModelslot;
}
if (serverSetting.serverSetting.modelSlotIndex == undefined) {
return;
} else if (serverSetting.serverSetting.modelSlotIndex == "Beatrice-JVS") {
@ -15,7 +20,7 @@ export const TuningArea = (_props: TuningAreaProps) => {
} else {
return serverSetting.serverSetting.modelSlots[serverSetting.serverSetting.modelSlotIndex];
}
}, [serverSetting.serverSetting.modelSlotIndex, serverSetting.serverSetting.modelSlots]);
}, [serverSetting.serverSetting.modelSlotIndex, serverSetting.serverSetting.modelSlots, webEdition]);
const tuningArea = useMemo(() => {
if (!selected) {
@ -25,9 +30,48 @@ export const TuningArea = (_props: TuningAreaProps) => {
return <></>;
}
const currentTuning = serverSetting.serverSetting.tran;
// For Beatrice
if (selected.slotIndex == "Beatrice-JVS") {
const updateBeatriceJVSSpeakerPitch = async (pitch: number) => {
setBeatriceJVSSpeakerPitch(pitch);
};
return (
<div className="character-area-control">
<div className="character-area-control-title">TUNE:</div>
<div className="character-area-control-field">
<div className="character-area-slider-control">
<span className="character-area-slider-control-kind"></span>
<span className="character-area-slider-control-slider">
<input
type="range"
min="-2"
max="2"
step="1"
value={beatriceJVSSpeakerPitch}
onChange={(e) => {
updateBeatriceJVSSpeakerPitch(Number(e.target.value));
}}
></input>
</span>
<span className="character-area-slider-control-val">{beatriceJVSSpeakerPitch}</span>
</div>
</div>
</div>
);
}
let currentTuning;
if (webEdition) {
currentTuning = webInfoState.upkey;
} else {
currentTuning = serverSetting.serverSetting.tran;
}
const tranValueUpdatedAction = async (val: number) => {
await serverSetting.updateServerSettings({ ...serverSetting.serverSetting, tran: val });
if (webEdition) {
webInfoState.setUpkey(val);
} else {
await serverSetting.updateServerSettings({ ...serverSetting.serverSetting, tran: val });
}
};
return (
@ -53,7 +97,7 @@ export const TuningArea = (_props: TuningAreaProps) => {
</div>
</div>
);
}, [serverSetting.serverSetting, serverSetting.updateServerSettings, selected]);
}, [serverSetting.serverSetting, serverSetting.updateServerSettings, selected, webEdition, webInfoState.upkey]);
return tuningArea;
};

View File

@ -64,6 +64,9 @@ export const SpeakerArea = (_props: SpeakerAreaProps) => {
if (!selected) {
return <></>;
}
if (selected.slotIndex == "Beatrice-JVS") {
return; // beatrice JVS selects the destination speaker from the graph, so it is not shown here
}
const options = Object.keys(selected.speakers).map((key) => {
const val = selected.speakers[Number(key)];
@ -80,7 +83,7 @@ export const SpeakerArea = (_props: SpeakerAreaProps) => {
return (
<div className="character-area-control">
<div className="character-area-control-title">{selected.voiceChangerType == "DDSP-SVC" || selected.voiceChangerType == "so-vits-svc-40" || selected.voiceChangerType == "RVC" || selected.voiceChangerType == "Beatrice" ? "Voice:" : ""}</div>
<div className="character-area-control-title">{selected.voiceChangerType == "DDSP-SVC" || selected.voiceChangerType == "so-vits-svc-40" || selected.voiceChangerType == "RVC" ? "Voice:" : ""}</div>
<div className="character-area-control-field">
<div className="character-area-slider-control">
<span className="character-area-slider-control-kind">{selected.voiceChangerType == "MMVCv13" || selected.voiceChangerType == "MMVCv15" ? "dst" : ""}</span>

View File

@ -0,0 +1,199 @@
import React, { useMemo } from "react";
import { useAppState } from "../../../001_provider/001_AppStateProvider";
import { useGuiState } from "../001_GuiStateProvider";
export type WebEditionSettingAreaProps = {};
export const WebEditionSettingArea = (_props: WebEditionSettingAreaProps) => {
const { serverSetting, webInfoState, webEdition } = useAppState();
const guiState = useGuiState();
const selected = useMemo(() => {
if (webEdition) {
return webInfoState.webModelslot;
}
return null;
}, [webEdition]);
const settingArea = useMemo(() => {
if (!selected) {
return <></>;
}
const readyForConfig = guiState.isConverting == false && webInfoState.webModelLoadingState == "ready";
const versionV1ClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.config.voiceChangerType == "rvcv1" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const versionV2ClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.config.voiceChangerType == "rvcv2" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const verison = (
<div className="character-area-control">
<div className="character-area-control-title">Version</div>
<div className="character-area-control-field">
<div className="character-area-slider-control">
<span className="character-area-slider-control-kind"></span>
<span className="character-area-control-buttons">
<span
className={!readyForConfig ? "character-area-control-button-disable" : versionV1ClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.config.voiceChangerType == "rvcv1" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig("rvcv1", webInfoState.voiceChangerConfig.sampleRate, webInfoState.voiceChangerConfig.useF0, webInfoState.voiceChangerConfig.inputLength);
}}
>
v1
</span>
<span
className={!readyForConfig ? "character-area-control-button-disable" : versionV2ClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.config.voiceChangerType == "rvcv2" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig("rvcv2", webInfoState.voiceChangerConfig.sampleRate, webInfoState.voiceChangerConfig.useF0, webInfoState.voiceChangerConfig.inputLength);
}}
>
v2
</span>
</span>
</div>
</div>
</div>
);
const sr16KClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.sampleRate == "16k" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const sr32KClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.sampleRate == "32k" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const sr40KClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.sampleRate == "40k" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const sampleRate = (
<div className="character-area-control">
<div className="character-area-control-title">SR</div>
<div className="character-area-control-field">
<div className="character-area-slider-control">
<span className="character-area-slider-control-kind"></span>
<span className="character-area-control-buttons">
<span
className={!readyForConfig ? "character-area-control-button-disable" : sr16KClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.sampleRate == "16k" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig("rvcv2", "16k", webInfoState.voiceChangerConfig.useF0, webInfoState.voiceChangerConfig.inputLength);
}}
>
16k
</span>
<span
className={!readyForConfig ? "character-area-control-button-disable" : sr32KClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.sampleRate == "32k" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, "32k", webInfoState.voiceChangerConfig.useF0, webInfoState.voiceChangerConfig.inputLength);
}}
>
32k
</span>
<span
className={!readyForConfig ? "character-area-control-button-disable" : sr40KClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.sampleRate == "40k" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, "40k", webInfoState.voiceChangerConfig.useF0, webInfoState.voiceChangerConfig.inputLength);
}}
>
40k
</span>
</span>
</div>
</div>
</div>
);
const pitchEnableClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.useF0 == true ? " character-area-control-button-active" : " character-area-control-button-stanby");
const pitchDisableClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.useF0 == false ? " character-area-control-button-active" : " character-area-control-button-stanby");
const pitch = (
<div className="character-area-control">
<div className="character-area-control-title">Pitch</div>
<div className="character-area-control-field">
<div className="character-area-slider-control">
<span className="character-area-slider-control-kind"></span>
<span className="character-area-control-buttons">
<span
className={!readyForConfig ? "character-area-control-button-disable" : pitchEnableClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.useF0 == true || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, webInfoState.voiceChangerConfig.sampleRate, true, webInfoState.voiceChangerConfig.inputLength);
}}
>
Enable
</span>
<span
className={!readyForConfig ? "character-area-control-button-disable" : pitchDisableClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.useF0 == false || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, webInfoState.voiceChangerConfig.sampleRate, false, webInfoState.voiceChangerConfig.inputLength);
}}
>
Disable
</span>
</span>
</div>
</div>
</div>
);
const latencyHighClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.inputLength == "24000" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const latencyMidClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.inputLength == "12000" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const latencyLowClassName = "character-area-control-button" + (webInfoState.voiceChangerConfig.inputLength == "8000" ? " character-area-control-button-active" : " character-area-control-button-stanby");
const latency = (
<div className="character-area-control">
<div className="character-area-control-title">Latency</div>
<div className="character-area-control-field">
<div className="character-area-slider-control">
<span className="character-area-slider-control-kind"></span>
<span className="character-area-control-buttons">
<span
className={!readyForConfig ? "character-area-control-button-disable" : latencyHighClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.inputLength == "24000" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, webInfoState.voiceChangerConfig.sampleRate, webInfoState.voiceChangerConfig.useF0, "24000");
}}
>
High
</span>
<span
className={!readyForConfig ? "character-area-control-button-disable" : latencyMidClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.inputLength == "12000" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, webInfoState.voiceChangerConfig.sampleRate, webInfoState.voiceChangerConfig.useF0, "12000");
}}
>
Mid
</span>
<span
className={!readyForConfig ? "character-area-control-button-disable" : latencyLowClassName}
onClick={() => {
if (webInfoState.voiceChangerConfig.inputLength == "8000" || !readyForConfig) return;
webInfoState.setVoiceChangerConfig(webInfoState.voiceChangerConfig.config.voiceChangerType, webInfoState.voiceChangerConfig.sampleRate, webInfoState.voiceChangerConfig.useF0, "8000");
}}
>
Low
</span>
</span>
</div>
</div>
</div>
);
return (
<>
{verison}
{sampleRate}
{pitch}
{latency}
</>
);
}, [
serverSetting.serverSetting,
serverSetting.updateServerSettings,
selected,
webInfoState.upkey,
webInfoState.voiceChangerConfig.config.voiceChangerType,
webInfoState.voiceChangerConfig.sampleRate,
webInfoState.voiceChangerConfig.useF0,
webInfoState.voiceChangerConfig.inputLength,
webInfoState.webModelLoadingState,
guiState.isConverting,
webInfoState.webModelLoadingState,
]);
return settingArea;
};
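Each button above rebuilds the full four-argument setVoiceChangerConfig call while changing only one field. A hypothetical convenience wrapper (patchVoiceChangerConfig and WebVCConfigPatch are assumptions, not part of the diff) could derive the unchanged arguments from the current config:
// Field values inferred from the calls visible in this hunk.
type WebVCConfigPatch = Partial<{
    voiceChangerType: "rvcv1" | "rvcv2";
    sampleRate: "16k" | "32k" | "40k";
    useF0: boolean;
    inputLength: "8000" | "12000" | "24000";
}>;
// webInfoState typed as any here because its full interface is not shown in this hunk.
const patchVoiceChangerConfig = (webInfoState: any, patch: WebVCConfigPatch) => {
    const current = webInfoState.voiceChangerConfig;
    webInfoState.setVoiceChangerConfig(
        patch.voiceChangerType ?? current.config.voiceChangerType,
        patch.sampleRate ?? current.sampleRate,
        patch.useF0 ?? current.useF0,
        patch.inputLength ?? current.inputLength,
    );
};
With this, the 32k button's handler would reduce to patchVoiceChangerConfig(webInfoState, { sampleRate: "32k" }).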

View File

@ -10,22 +10,29 @@ import { F0FactorArea } from "./101-4_F0FactorArea";
import { SoVitsSVC40SettingArea } from "./101-5_so-vits-svc40SettingArea";
import { DDSPSVC30SettingArea } from "./101-6_ddsp-svc30SettingArea";
import { DiffusionSVCSettingArea } from "./101-7_diffusion-svcSettingArea";
import { Portrait } from "./101-0_Portrait";
import { useAppRoot } from "../../../001_provider/001_AppRootProvider";
import { WebEditionSettingArea } from "./101-8_web-editionSettingArea";
export type CharacterAreaProps = {};
export const CharacterArea = (_props: CharacterAreaProps) => {
const { serverSetting, initializedRef, volume, bufferingTime, performance, setting, setVoiceChangerClientSetting, start, stop } = useAppState();
const { appGuiSettingState } = useAppRoot();
const { serverSetting, initializedRef, setting, setVoiceChangerClientSetting, start, stop, webInfoState } = useAppState();
const guiState = useGuiState();
const messageBuilderState = useMessageBuilder();
const webEdition = appGuiSettingState.edition.indexOf("web") >= 0;
const { beatriceJVSSpeakerId } = useGuiState();
useMemo(() => {
messageBuilderState.setMessage(__filename, "terms_of_use", { ja: "利用規約", en: "terms of use" });
messageBuilderState.setMessage(__filename, "export_to_onnx", { ja: "onnx出力", en: "export to onnx" });
messageBuilderState.setMessage(__filename, "save_default", { ja: "設定保存", en: "save setting" });
messageBuilderState.setMessage(__filename, "alert_onnx", { ja: "ボイチェン中はonnx出力できません", en: "cannot export onnx when voice conversion is enabled" });
}, []);
const selected = useMemo(() => {
if (webEdition) {
return webInfoState.webModelslot;
}
if (serverSetting.serverSetting.modelSlotIndex == undefined) {
return;
} else if (serverSetting.serverSetting.modelSlotIndex == "Beatrice-JVS") {
@ -34,58 +41,7 @@ export const CharacterArea = (_props: CharacterAreaProps) => {
} else {
return serverSetting.serverSetting.modelSlots[serverSetting.serverSetting.modelSlotIndex];
}
}, [serverSetting.serverSetting.modelSlotIndex, serverSetting.serverSetting.modelSlots]);
useEffect(() => {
const vol = document.getElementById("status-vol") as HTMLSpanElement;
const buf = document.getElementById("status-buf") as HTMLSpanElement;
const res = document.getElementById("status-res") as HTMLSpanElement;
if (!vol || !buf || !res) {
return;
}
vol.innerText = volume.toFixed(4);
buf.innerText = bufferingTime.toString();
res.innerText = performance.responseTime.toString();
}, [volume, bufferingTime, performance]);
const portrait = useMemo(() => {
if (!selected) {
return <></>;
}
const modelDir = serverSetting.serverSetting.modelSlotIndex == "Beatrice-JVS" ? "model_dir_static" : serverSetting.serverSetting.voiceChangerParams.model_dir;
const icon = selected.iconFile.length > 0 ? modelDir + "/" + selected.slotIndex + "/" + selected.iconFile.split(/[\/\\]/).pop() : "./assets/icons/human.png";
const selectedTermOfUseUrlLink = selected.termsOfUseUrl ? (
<a href={selected.termsOfUseUrl} target="_blank" rel="noopener noreferrer" className="portrait-area-terms-of-use-link">
[{messageBuilderState.getMessage(__filename, "terms_of_use")}]
</a>
) : (
<></>
);
return (
<div className="portrait-area">
<div className="portrait-container">
<img className="portrait" src={icon} alt={selected.name} />
<div className="portrait-area-status">
<p>
<span className="portrait-area-status-vctype">{selected.voiceChangerType}</span>
</p>
<p>
vol: <span id="status-vol">0</span>
</p>
<p>
buf: <span id="status-buf">0</span> ms
</p>
<p>
res: <span id="status-res">0</span> ms
</p>
</div>
<div className="portrait-area-terms-of-use">{selectedTermOfUseUrlLink}</div>
</div>
</div>
);
}, [selected]);
}, [serverSetting.serverSetting.modelSlotIndex, serverSetting.serverSetting.modelSlots, webEdition]);
const [startWithAudioContextCreate, setStartWithAudioContextCreate] = useState<boolean>(false);
useEffect(() => {
@ -96,6 +52,22 @@ export const CharacterArea = (_props: CharacterAreaProps) => {
start();
}, [startWithAudioContextCreate]);
const nameArea = useMemo(() => {
if (!selected) {
return <></>;
}
return (
<div className="character-area-control">
<div className="character-area-control-title">Name:</div>
<div className="character-area-control-field">
<div className="character-area-text">
{selected.name} {selected.slotIndex == "Beatrice-JVS" ? `speaker:${beatriceJVSSpeakerId}` : ""}
</div>
</div>
</div>
);
}, [selected, beatriceJVSSpeakerId]);
const startControl = useMemo(() => {
const onStartClicked = async () => {
if (serverSetting.serverSetting.enableServerAudio == 0) {
@ -142,23 +114,67 @@ export const CharacterArea = (_props: CharacterAreaProps) => {
const startClassName = guiState.isConverting ? "character-area-control-button-active" : "character-area-control-button-stanby";
const stopClassName = guiState.isConverting ? "character-area-control-button-stanby" : "character-area-control-button-active";
const passThruClassName = serverSetting.serverSetting.passThrough == false ? "character-area-control-passthru-button-stanby" : "character-area-control-passthru-button-active blinking";
console.log("serverSetting.serverSetting.passThrough", passThruClassName, serverSetting.serverSetting.passThrough);
return (
<div className="character-area-control">
<div className="character-area-control-buttons">
<div onClick={onStartClicked} className={startClassName}>
start
if (webEdition && webInfoState.webModelLoadingState != "ready") {
if (webInfoState.webModelLoadingState == "none" || webInfoState.webModelLoadingState == "loading") {
return (
<div className="character-area-control">
<div className="character-area-control-title">wait...</div>
<div className="character-area-control-field">
<div className="character-area-text blink">{webInfoState.webModelLoadingState}..</div>
<div className="character-area-text">
pre:{Math.floor(webInfoState.progressLoadPreprocess * 100)}%, model: {Math.floor(webInfoState.progressLoadVCModel * 100)}%
</div>
</div>
</div>
<div onClick={onStopClicked} className={stopClassName}>
stop
);
} else if (webInfoState.webModelLoadingState == "warmup") {
return (
<div className="character-area-control">
<div className="character-area-control-title">wait...</div>
<div className="character-area-control-field">
<div className="character-area-text blink">{webInfoState.webModelLoadingState}..</div>
<div className="character-area-text">warm up:{Math.floor(webInfoState.progressWarmup * 100)}%</div>
</div>
</div>
<div onClick={onPassThroughClicked} className={passThruClassName}>
passthru
);
} else {
throw new Error("invalid webModelLoadingState");
}
} else {
if (webEdition) {
return (
<div className="character-area-control">
<div className="character-area-control-buttons">
<div onClick={onStartClicked} className={startClassName}>
start
</div>
<div onClick={onStopClicked} className={stopClassName}>
stop
</div>
</div>
</div>
</div>
</div>
);
}, [guiState.isConverting, start, stop, serverSetting.serverSetting, serverSetting.updateServerSettings]);
);
} else {
return (
<div className="character-area-control">
<div className="character-area-control-buttons">
<div onClick={onStartClicked} className={startClassName}>
start
</div>
<div onClick={onStopClicked} className={stopClassName}>
stop
</div>
<div onClick={onPassThroughClicked} className={passThruClassName}>
passthru
</div>
</div>
</div>
);
}
}
}, [guiState.isConverting, start, stop, serverSetting.serverSetting, serverSetting.updateServerSettings, webInfoState.progressLoadPreprocess, webInfoState.progressLoadVCModel, webInfoState.progressWarmup, webInfoState.webModelLoadingState]);
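The branches above imply a small loading-state machine for the web edition; a sketch of the states referenced in this hunk (assumed exhaustive, since anything else hits the throw):
type WebModelLoadingState = "none" | "loading" | "warmup" | "ready";
// none / loading -> show preprocess and model download progress
// warmup         -> show warm-up progress
// ready          -> show the normal start/stop controls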
const gainControl = useMemo(() => {
const currentInputGain = serverSetting.serverSetting.enableServerAudio == 0 ? setting.voiceChangerClientSetting.inputGain : serverSetting.serverSetting.serverInputAudioGain;
@ -227,6 +243,9 @@ export const CharacterArea = (_props: CharacterAreaProps) => {
if (!selected) {
return <></>;
}
if (webEdition) {
return <></>;
}
const onUpdateDefaultClicked = async () => {
await serverSetting.updateModelDefault();
};
@ -275,8 +294,9 @@ export const CharacterArea = (_props: CharacterAreaProps) => {
const characterArea = useMemo(() => {
return (
<div className="character-area">
{portrait}
<Portrait></Portrait>
<div className="character-area-control-area">
{nameArea}
{startControl}
{gainControl}
<TuningArea />
@ -286,11 +306,12 @@ export const CharacterArea = (_props: CharacterAreaProps) => {
<SoVitsSVC40SettingArea />
<DDSPSVC30SettingArea />
<DiffusionSVCSettingArea />
<WebEditionSettingArea />
{modelSlotControl}
</div>
</div>
);
}, [portrait, startControl, gainControl, modelSlotControl]);
}, [startControl, gainControl, modelSlotControl]);
return characterArea;
};

View File

@ -8,7 +8,7 @@ export type QualityAreaProps = {
};
export const QualityArea = (props: QualityAreaProps) => {
const { setVoiceChangerClientSetting, serverSetting, setting } = useAppState();
const { setVoiceChangerClientSetting, serverSetting, setting, webEdition } = useAppState();
const { appGuiSettingState } = useAppRoot();
const edition = appGuiSettingState.edition;
@ -47,6 +47,52 @@ export const QualityArea = (props: QualityAreaProps) => {
};
const f0DetOptions = generateF0DetOptions();
const f0Det = webEdition ? (
<></>
) : (
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">F0 Det.:</div>
<div className="config-sub-area-control-field">
<select
className="body-select"
value={serverSetting.serverSetting.f0Detector}
onChange={(e) => {
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, f0Detector: e.target.value as F0Detector });
}}
>
{f0DetOptions}
</select>
</div>
</div>
);
const threshold = webEdition ? (
<></>
) : (
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">S.Thresh.:</div>
<div className="config-sub-area-control-field">
<div className="config-sub-area-slider-control">
<span className="config-sub-area-slider-control-kind"></span>
<span className="config-sub-area-slider-control-slider">
<input
type="range"
className="config-sub-area-slider-control-slider"
min="0.00000"
max="0.001"
step="0.00001"
value={serverSetting.serverSetting.silentThreshold || 0}
onChange={(e) => {
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, silentThreshold: Number(e.target.value) });
}}
></input>
</span>
<span className="config-sub-area-slider-control-val">{serverSetting.serverSetting.silentThreshold}</span>
</div>
</div>
</div>
);
return (
<div className="config-sub-area">
<div className="config-sub-area-control">
@ -101,42 +147,8 @@ export const QualityArea = (props: QualityAreaProps) => {
</div>
</div>
</div>
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">F0 Det.:</div>
<div className="config-sub-area-control-field">
<select
className="body-select"
value={serverSetting.serverSetting.f0Detector}
onChange={(e) => {
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, f0Detector: e.target.value as F0Detector });
}}
>
{f0DetOptions}
</select>
</div>
</div>
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">S.Thresh.:</div>
<div className="config-sub-area-control-field">
<div className="config-sub-area-slider-control">
<span className="config-sub-area-slider-control-kind"></span>
<span className="config-sub-area-slider-control-slider">
<input
type="range"
className="config-sub-area-slider-control-slider"
min="0.00000"
max="0.001"
step="0.00001"
value={serverSetting.serverSetting.silentThreshold || 0}
onChange={(e) => {
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, silentThreshold: Number(e.target.value) });
}}
></input>
</span>
<span className="config-sub-area-slider-control-val">{serverSetting.serverSetting.silentThreshold}</span>
</div>
</div>
</div>
{f0Det}
{threshold}
</div>
);
}, [serverSetting.serverSetting, setting, serverSetting.updateServerSettings, setVoiceChangerClientSetting]);

View File

@ -7,9 +7,10 @@ export type ConvertProps = {
};
export const ConvertArea = (props: ConvertProps) => {
const { setting, serverSetting, setWorkletNodeSetting, trancateBuffer } = useAppState();
const { setting, serverSetting, setWorkletNodeSetting, trancateBuffer, webEdition } = useAppState();
const { appGuiSettingState } = useAppRoot();
const edition = appGuiSettingState.edition;
const convertArea = useMemo(() => {
let nums: number[];
if (!props.inputChunkNums) {
@ -110,6 +111,8 @@ export const ConvertArea = (props: ConvertProps) => {
</div>
</div>
</>
) : webEdition ? (
<></>
) : (
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">GPU:</div>
@ -133,6 +136,32 @@ export const ConvertArea = (props: ConvertProps) => {
</div>
</div>
);
const extraArea = webEdition ? (
<></>
) : (
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">EXTRA:</div>
<div className="config-sub-area-control-field">
<select
className="body-select"
value={serverSetting.serverSetting.extraConvertSize}
onChange={(e) => {
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, extraConvertSize: Number(e.target.value) });
trancateBuffer();
}}
>
{[1024 * 4, 1024 * 8, 1024 * 16, 1024 * 32, 1024 * 64, 1024 * 128].map((x) => {
return (
<option key={x} value={x}>
{x}
</option>
);
})}
</select>
</div>
</div>
);
return (
<div className="config-sub-area">
<div className="config-sub-area-control">
@ -157,27 +186,7 @@ export const ConvertArea = (props: ConvertProps) => {
</select>
</div>
</div>
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">EXTRA:</div>
<div className="config-sub-area-control-field">
<select
className="body-select"
value={serverSetting.serverSetting.extraConvertSize}
onChange={(e) => {
serverSetting.updateServerSettings({ ...serverSetting.serverSetting, extraConvertSize: Number(e.target.value) });
trancateBuffer();
}}
>
{[1024 * 4, 1024 * 8, 1024 * 16, 1024 * 32, 1024 * 64, 1024 * 128].map((x) => {
return (
<option key={x} value={x}>
{x}
</option>
);
})}
</select>
</div>
</div>
{extraArea}
{gpuSelect}
</div>
);

View File

@ -2,14 +2,14 @@ import React, { useEffect, useMemo, useRef, useState } from "react";
import { useAppState } from "../../../001_provider/001_AppStateProvider";
import { fileSelectorAsDataURL, useIndexedDB } from "@dannadori/voice-changer-client-js";
import { useGuiState } from "../001_GuiStateProvider";
import { AUDIO_ELEMENT_FOR_PLAY_MONITOR, AUDIO_ELEMENT_FOR_PLAY_RESULT, AUDIO_ELEMENT_FOR_TEST_CONVERTED, AUDIO_ELEMENT_FOR_TEST_CONVERTED_ECHOBACK, AUDIO_ELEMENT_FOR_TEST_ORIGINAL, INDEXEDDB_KEY_AUDIO_MONITR, INDEXEDDB_KEY_AUDIO_OUTPUT } from "../../../const";
import { AUDIO_ELEMENT_FOR_PLAY_MONITOR, AUDIO_ELEMENT_FOR_PLAY_RESULT, AUDIO_ELEMENT_FOR_TEST_CONVERTED, AUDIO_ELEMENT_FOR_TEST_CONVERTED_ECHOBACK, INDEXEDDB_KEY_AUDIO_MONITR, INDEXEDDB_KEY_AUDIO_OUTPUT } from "../../../const";
import { isDesktopApp } from "../../../const";
export type DeviceAreaProps = {};
export const DeviceArea = (_props: DeviceAreaProps) => {
const { setting, serverSetting, audioContext, setAudioOutputElementId, setAudioMonitorElementId, initializedRef, setVoiceChangerClientSetting, startOutputRecording, stopOutputRecording } = useAppState();
const { isConverting, audioInputForGUI, inputAudioDeviceInfo, setAudioInputForGUI, fileInputEchoback, setFileInputEchoback, setAudioOutputForGUI, setAudioMonitorForGUI, audioOutputForGUI, audioMonitorForGUI, outputAudioDeviceInfo, shareScreenEnabled, setShareScreenEnabled } = useGuiState();
const { setting, serverSetting, audioContext, setAudioOutputElementId, setAudioMonitorElementId, initializedRef, setVoiceChangerClientSetting, startOutputRecording, stopOutputRecording, webEdition } = useAppState();
const { isConverting, audioInputForGUI, inputAudioDeviceInfo, setAudioInputForGUI, fileInputEchoback, setFileInputEchoback, setAudioOutputForGUI, setAudioMonitorForGUI, audioOutputForGUI, audioMonitorForGUI, outputAudioDeviceInfo, shareScreenEnabled, setShareScreenEnabled, reloadDeviceInfo } = useGuiState();
const [inputHostApi, setInputHostApi] = useState<string>("ALL");
const [outputHostApi, setOutputHostApi] = useState<string>("ALL");
const [monitorHostApi, setMonitorHostApi] = useState<string>("ALL");
@ -21,6 +21,20 @@ export const DeviceArea = (_props: DeviceAreaProps) => {
// (1) Audio Mode
const deviceModeRow = useMemo(() => {
if (webEdition) {
return (
<div className="config-sub-area-control">
<div className="config-sub-area-control-title">AUDIO:</div>
<div className="config-sub-area-control-field">
<div className="config-sub-area-buttons">
<div onClick={reloadDeviceInfo} className="config-sub-area-button">
reload
</div>
</div>
</div>
</div>
);
}
const enableServerAudio = serverSetting.serverSetting.enableServerAudio;
const clientChecked = enableServerAudio == 1 ? false : true;
const serverChecked = enableServerAudio == 1 ? true : false;
@ -63,6 +77,12 @@ export const DeviceArea = (_props: DeviceAreaProps) => {
/>
<label htmlFor="server-device">server</label>
</div>
<div className="config-sub-area-buttons">
<div onClick={reloadDeviceInfo} className="config-sub-area-button">
reload
</div>
</div>
</div>
</div>
</div>
@ -389,8 +409,12 @@ export const DeviceArea = (_props: DeviceAreaProps) => {
// When Server Audio is used, do not output sound from the Element.
audio.volume = 0;
} else if (audioOutputForGUI == "none") {
// @ts-ignore
audio.setSinkId("");
try {
// @ts-ignore
audio.setSinkId("");
} catch (e) {
console.error("catch:" + e);
}
if (x == AUDIO_ELEMENT_FOR_TEST_CONVERTED_ECHOBACK) {
audio.volume = 0;
} else {
@ -404,8 +428,12 @@ export const DeviceArea = (_props: DeviceAreaProps) => {
return x.deviceId == audioOutputForGUI;
});
if (found) {
// @ts-ignore // The exception apparently cannot be caught, so the ID must be checked beforehand.
audio.setSinkId(audioOutputForGUI);
try {
// @ts-ignore // The exception apparently cannot be caught, so the ID must be checked beforehand.
audio.setSinkId(audioOutputForGUI);
} catch (e) {
console.error("catch:" + e);
}
} else {
console.warn("No audio output device. use default");
}
@ -620,9 +648,13 @@ export const DeviceArea = (_props: DeviceAreaProps) => {
// When Server Audio is used, do not output sound from the Element.
audio.volume = 0;
} else if (audioMonitorForGUI == "none") {
// @ts-ignore
audio.setSinkId("");
audio.volume = 0;
try {
// @ts-ignore
audio.setSinkId("");
audio.volume = 0;
} catch (e) {
console.error("catch:" + e);
}
} else {
const audioOutputs = mediaDeviceInfos.filter((x) => {
return x.kind == "audiooutput";
@ -631,9 +663,13 @@ export const DeviceArea = (_props: DeviceAreaProps) => {
return x.deviceId == audioMonitorForGUI;
});
if (found) {
// @ts-ignore // The exception apparently cannot be caught, so the ID must be checked beforehand.
audio.setSinkId(audioMonitorForGUI);
audio.volume = 1;
try {
// @ts-ignore // The exception apparently cannot be caught, so the ID must be checked beforehand.
audio.setSinkId(audioMonitorForGUI);
audio.volume = 1;
} catch (e) {
console.error("catch:" + e);
}
} else {
console.warn("No audio output device. use default");
}
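This hunk wraps every setSinkId call in the same try/catch; a hypothetical shared helper (trySetSinkId is an assumption, not in the diff) would centralize that pattern, and awaiting the call also catches asynchronous rejections, which a synchronous try/catch cannot:
// Hypothetical helper: setSinkId may be missing or throw on some browsers/devices.
const trySetSinkId = async (el: HTMLAudioElement, deviceId: string): Promise<boolean> => {
    try {
        // @ts-ignore setSinkId is not declared in older TypeScript DOM typings
        await el.setSinkId(deviceId);
        return true;
    } catch (e) {
        console.error("setSinkId failed:", e);
        return false;
    }
};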

View File

@ -1,44 +1,52 @@
import React, { useMemo, useState } from "react"
import { useAppState } from "../../../001_provider/001_AppStateProvider"
import { useGuiState } from "../001_GuiStateProvider"
import { AUDIO_ELEMENT_FOR_SAMPLING_INPUT, AUDIO_ELEMENT_FOR_SAMPLING_OUTPUT } from "../../../const"
import React, { useMemo, useState } from "react";
import { useAppState } from "../../../001_provider/001_AppStateProvider";
import { useGuiState } from "../001_GuiStateProvider";
import { AUDIO_ELEMENT_FOR_SAMPLING_INPUT, AUDIO_ELEMENT_FOR_SAMPLING_OUTPUT } from "../../../const";
export type RecorderAreaProps = {
}
export type RecorderAreaProps = {};
export const RecorderArea = (_props: RecorderAreaProps) => {
const { serverSetting } = useAppState()
const { audioOutputForAnalyzer, setAudioOutputForAnalyzer, outputAudioDeviceInfo } = useGuiState()
const [serverIORecording, setServerIORecording] = useState<boolean>(false)
const { serverSetting, webEdition } = useAppState();
const { audioOutputForAnalyzer, setAudioOutputForAnalyzer, outputAudioDeviceInfo } = useGuiState();
const [serverIORecording, setServerIORecording] = useState<boolean>(false);
const serverIORecorderRow = useMemo(() => {
const onServerIORecordStartClicked = async () => {
setServerIORecording(true)
await serverSetting.updateServerSettings({ ...serverSetting.serverSetting, recordIO: 1 })
if (webEdition) {
return <> </>;
}
const onServerIORecordStartClicked = async () => {
setServerIORecording(true);
await serverSetting.updateServerSettings({ ...serverSetting.serverSetting, recordIO: 1 });
};
const onServerIORecordStopClicked = async () => {
setServerIORecording(false)
await serverSetting.updateServerSettings({ ...serverSetting.serverSetting, recordIO: 0 })
setServerIORecording(false);
await serverSetting.updateServerSettings({ ...serverSetting.serverSetting, recordIO: 0 });
// set wav (input)
const wavInput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_INPUT) as HTMLAudioElement
wavInput.src = "/tmp/in.wav?" + new Date().getTime()
wavInput.controls = true
// @ts-ignore
wavInput.setSinkId(audioOutputForAnalyzer)
const wavInput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_INPUT) as HTMLAudioElement;
wavInput.src = "/tmp/in.wav?" + new Date().getTime();
wavInput.controls = true;
try {
// @ts-ignore
wavInput.setSinkId(audioOutputForAnalyzer);
} catch (e) {
console.log(e);
}
// set wav (output)
const wavOutput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_OUTPUT) as HTMLAudioElement
wavOutput.src = "/tmp/out.wav?" + new Date().getTime()
wavOutput.controls = true
// @ts-ignore
wavOutput.setSinkId(audioOutputForAnalyzer)
}
const wavOutput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_OUTPUT) as HTMLAudioElement;
wavOutput.src = "/tmp/out.wav?" + new Date().getTime();
wavOutput.controls = true;
try {
// @ts-ignore
wavOutput.setSinkId(audioOutputForAnalyzer);
} catch (e) {
console.log(e);
}
};
const startClassName = serverIORecording ? "config-sub-area-button-active" : "config-sub-area-button"
const stopClassName = serverIORecording ? "config-sub-area-button" : "config-sub-area-button-active"
const startClassName = serverIORecording ? "config-sub-area-button-active" : "config-sub-area-button";
const stopClassName = serverIORecording ? "config-sub-area-button" : "config-sub-area-button-active";
return (
<>
<div className="config-sub-area-control">
@ -49,34 +57,51 @@ export const RecorderArea = (_props: RecorderAreaProps) => {
<div className="config-sub-area-control-title">SIO rec.</div>
<div className="config-sub-area-control-field">
<div className="config-sub-area-buttons">
<div onClick={onServerIORecordStartClicked} className={startClassName}>start</div>
<div onClick={onServerIORecordStopClicked} className={stopClassName}>stop</div>
<div onClick={onServerIORecordStartClicked} className={startClassName}>
start
</div>
<div onClick={onServerIORecordStopClicked} className={stopClassName}>
stop
</div>
</div>
</div>
</div>
<div className="config-sub-area-control left-padding-1">
<div className="config-sub-area-control-title">output</div>
<div className="config-sub-area-control-field">
<div className="config-sub-area-control-field-auido-io">
<select className="body-select" value={audioOutputForAnalyzer} onChange={(e) => {
setAudioOutputForAnalyzer(e.target.value)
const wavInput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_INPUT) as HTMLAudioElement
const wavOutput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_OUTPUT) as HTMLAudioElement
//@ts-ignore
wavInput.setSinkId(e.target.value)
//@ts-ignore
wavOutput.setSinkId(e.target.value)
}}>
{
outputAudioDeviceInfo.map(x => {
<select
className="body-select"
value={audioOutputForAnalyzer}
onChange={(e) => {
setAudioOutputForAnalyzer(e.target.value);
const wavInput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_INPUT) as HTMLAudioElement;
const wavOutput = document.getElementById(AUDIO_ELEMENT_FOR_SAMPLING_OUTPUT) as HTMLAudioElement;
try {
//@ts-ignore
wavInput.setSinkId(e.target.value);
//@ts-ignore
wavOutput.setSinkId(e.target.value);
} catch (e) {
console.log(e);
}
}}
>
{outputAudioDeviceInfo
.map((x) => {
if (x.deviceId == "none") {
return null
return null;
}
return <option key={x.deviceId} value={x.deviceId}>{x.label}</option>
}).filter(x => { return x != null })
}
return (
<option key={x.deviceId} value={x.deviceId}>
{x.label}
</option>
);
})
.filter((x) => {
return x != null;
})}
</select>
</div>
</div>
@ -102,17 +127,9 @@ export const RecorderArea = (_props: RecorderAreaProps) => {
</div>
</div>
</div>
</>
)
}, [serverIORecording, audioOutputForAnalyzer, outputAudioDeviceInfo, serverSetting.updateServerSettings])
return (
<div className="config-sub-area">
{serverIORecorderRow}
</div>
)
}
);
}, [serverIORecording, audioOutputForAnalyzer, outputAudioDeviceInfo, serverSetting.updateServerSettings]);
return <div className="config-sub-area">{serverIORecorderRow}</div>;
};

View File

@ -1,10 +1,12 @@
import React, { useMemo } from "react";
import { useGuiState } from "../001_GuiStateProvider";
import { useAppState } from "../../../001_provider/001_AppStateProvider";
export type MoreActionAreaProps = {};
export const MoreActionArea = (_props: MoreActionAreaProps) => {
const { stateControls } = useGuiState();
const { webEdition } = useAppState();
const serverIORecorderRow = useMemo(() => {
const onOpenMergeLabClicked = () => {
@ -44,5 +46,9 @@ export const MoreActionArea = (_props: MoreActionAreaProps) => {
);
}, [stateControls]);
return <div className="config-sub-area">{serverIORecorderRow}</div>;
if (webEdition) {
return <> </>;
} else {
return <div className="config-sub-area">{serverIORecorderRow}</div>;
}
};
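
The MoreActionArea change hides the whole area in the web edition. A hedged sketch of the same gating pattern, reusing the `useAppState` hook imported above (the wrapper component is illustrative, not part of the change):

```
import React from "react";
import { useAppState } from "../../../001_provider/001_AppStateProvider";

// Illustrative wrapper: render nothing in the web edition, otherwise show the children.
export const WebEditionGate = (props: { children: React.ReactNode }) => {
    const { webEdition } = useAppState();
    if (webEdition) {
        return <></>;
    }
    return <div className="config-sub-area">{props.children}</div>;
};
```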

View File

@ -1124,6 +1124,7 @@ body {
position: relative;
cursor: pointer;
display: inline-block;
z-index: 10;
}
/* ################## */
@ -1321,7 +1322,7 @@ body {
background: rgba(100, 100, 100, 0.5);
color: white;
position: absolute;
paddig: 2px;
padding: 2px;
font-size: 0.7rem;
right: 5px;
bottom: 5px;
@ -1363,6 +1364,15 @@ body {
border: solid 1px #000;
}
}
.character-area-control-button-disable {
width: 5rem;
border: solid 1px #333;
border-radius: 2px;
background: #d3d7d3;
font-weight: 700;
text-align: center;
color: grey;
}
.character-area-control-passthru-button-stanby {
width: 5rem;
border: solid 1px #999;
@ -1402,6 +1412,9 @@ body {
display: flex;
flex-direction: column;
.character-area-text {
font-size: 0.9rem;
}
.character-area-slider-control {
display: flex;
flex-direction: row;
@ -1831,3 +1844,62 @@ audio::-webkit-media-controls-overlay-enclosure{
opacity: 0.5;
}
}
.blink {
animation: blinking 0.8s ease-in-out infinite alternate;
}
@keyframes blinking {
0% {
opacity: 0;
}
100% {
opacity: 1;
}
}
.beatrice-portrait-title {
font-size: 1rem;
font-weight: 700;
color: #333;
text-shadow: 0 0 2px #333;
text-align: center;
.edition {
font-size: 0.6rem;
}
}
.beatrice-portrait-select {
display: flex;
justify-content: center;
.button {
/* border: solid 2px #999; */
color: #615454;
font-weight: 700;
font-size: 0.8rem;
border-radius: 2px;
background: #adafad;
cursor: pointer;
padding: 0px 5px 0px 5px;
margin: 0px 5px 0px 5px;
line-height: 140%;
height: 1.1rem;
}
.button-selected {
/* border: solid 2px #999; */
color: #615454;
font-weight: 700;
font-size: 0.8rem;
border-radius: 2px;
background: #62b574;
cursor: pointer;
padding: 0px 5px 0px 5px;
margin: 0px 5px 0px 5px;
line-height: 140%;
height: 1.1rem;
}
}
.beatrice-speaker-graph-container {
width: 20rem;
height: 19rem;
border: none;
}

View File

@ -59,16 +59,40 @@ module.exports = {
// patterns: [{ from: "./node_modules/@dannadori/voice-changer-js/dist/ort-wasm-simd.wasm", to: "ort-wasm-simd.wasm" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "./node_modules/@dannadori/voice-changer-js/dist/tfjs-backend-wasm-simd.wasm", to: "tfjs-backend-wasm-simd.wasm" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "./node_modules/@dannadori/voice-changer-js/dist/process.js", to: "process.js" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/emb_pit_24000.bin", to: "models/emb_pit_24000.bin" }],
// patterns: [{ from: "public/models/rvcv2_emb_pit_24000.bin", to: "models/rvcv2_emb_pit_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvc2v_24000.bin", to: "models/rvc2v_24000.bin" }],
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_32k_f0_24000.bin", to: "models/rvcv2_amitaro_v2_32k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvc2vnof0_24000.bin", to: "models/rvc2vnof0_24000.bin" }],
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_32k_nof0_24000.bin", to: "models/rvcv2_amitaro_v2_32k_nof0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_40k_f0_24000.bin", to: "models/rvcv2_amitaro_v2_40k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_40k_nof0_24000.bin", to: "models/rvcv2_amitaro_v2_40k_nof0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_emb_pit_24000.bin", to: "models/rvcv1_emb_pit_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_amitaro_v1_32k_f0_24000.bin", to: "models/rvcv1_amitaro_v1_32k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_amitaro_v1_32k_nof0_24000.bin", to: "models/rvcv1_amitaro_v1_32k_nof0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_amitaro_v1_40k_f0_24000.bin", to: "models/rvcv1_amitaro_v1_40k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_amitaro_v1_40k_nof0_24000.bin", to: "models/rvcv1_amitaro_v1_40k_nof0_24000.bin" }],
// }),
],
};

View File

@ -0,0 +1,111 @@
const path = require("path");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const CopyPlugin = require("copy-webpack-plugin");
const webpack = require("webpack");
module.exports = {
mode: "production",
entry: "./src/000_index.tsx",
resolve: {
extensions: [".ts", ".tsx", ".js"],
fallback: {
buffer: require.resolve("buffer/"),
},
},
module: {
rules: [
{
test: [/\.ts$/, /\.tsx$/],
use: [
{
loader: "babel-loader",
options: {
presets: ["@babel/preset-env", "@babel/preset-react", "@babel/preset-typescript"],
plugins: ["@babel/plugin-transform-runtime"],
},
},
],
},
{
test: /\.html$/,
loader: "html-loader",
},
{
test: /\.css$/,
use: ["style-loader", { loader: "css-loader", options: { importLoaders: 1 } }, "postcss-loader"],
},
{ test: /\.json$/, type: "asset/inline" },
{ test: /\.svg$/, type: "asset/resource" },
],
},
output: {
filename: "index.js",
path: path.resolve(__dirname, "dist_web"),
},
plugins: [
new webpack.ProvidePlugin({
Buffer: ["buffer", "Buffer"],
}),
new HtmlWebpackPlugin({
template: path.resolve(__dirname, "public/index.html"),
filename: "./index.html",
}),
new CopyPlugin({
patterns: [{ from: "public/assets", to: "assets" }],
}),
new CopyPlugin({
patterns: [{ from: "public/favicon.ico", to: "favicon.ico" }],
}),
// Copy dummy files
// new CopyPlugin({ // Depending on the copy order, the asset copy above can overwrite this file. => Handled in an npm script instead.
// patterns: [{ from: "public/assets/gui_settings/edition_web.txt", to: "assets/gui_settings/edition.txt" }],
// }),
// new CopyPlugin({ // Copying a file without an extension does not seem to work. => Handled in an npm script instead.
// patterns: [{ from: "public/info_web.txt", to: "info" }],
// }),
// Copy files for VC
new CopyPlugin({
patterns: [{ from: "./node_modules/@dannadori/voice-changer-js/dist/ort-wasm-simd.wasm", to: "ort-wasm-simd.wasm" }],
}),
new CopyPlugin({
patterns: [{ from: "./node_modules/@dannadori/voice-changer-js/dist/tfjs-backend-wasm-simd.wasm", to: "tfjs-backend-wasm-simd.wasm" }],
}),
new CopyPlugin({
patterns: [{ from: "./node_modules/@dannadori/voice-changer-js/dist/process.js", to: "process.js" }],
}),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_emb_pit_24000.bin", to: "models/rvcv2_emb_pit_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_32k_f0_24000.bin", to: "models/rvcv2_amitaro_v2_32k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_32k_nof0_24000.bin", to: "models/rvcv2_amitaro_v2_32k_nof0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_40k_f0_24000.bin", to: "models/rvcv2_amitaro_v2_40k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv2_amitaro_v2_40k_nof0_24000.bin", to: "models/rvcv2_amitaro_v2_40k_nof0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_emb_pit_24000.bin", to: "models/rvcv1_emb_pit_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_amitaro_v1_32k_f0_24000.bin", to: "models/rvcv1_amitaro_v1_32k_f0_24000.bin" }],
// }),
// new CopyPlugin({
// patterns: [{ from: "public/models/rvcv1_amitaro_v1_32k_nof0_24000.bin", to: "models/rvcv1_amitaro_v1_32k_nof0_24000.bin" }],
// }),
new CopyPlugin({
patterns: [{ from: "public/models/rvcv2_exp_v2_32k_f0_24000.bin", to: "models/rvcv2_exp_v2_32k_f0_24000.bin" }],
}),
new CopyPlugin({
patterns: [{ from: "public/models/rvcv2_vctk_v2_16k_f0_24000.bin", to: "models/rvcv2_vctk_v2_16k_f0_24000.bin" }],
}),
// new CopyPlugin({
// patterns: [{ from: "public/models/amitaro.png", to: "models/amitaro.png" }],
// }),
],
};

View File

@ -0,0 +1,36 @@
const path = require("path");
const { merge } = require("webpack-merge");
const common = require("./webpack_web.common.js");
const express = require("express");
module.exports = merge(common, {
mode: "development",
devServer: {
setupMiddlewares: (middlewares, devServer) => {
if (!devServer) {
throw new Error("webpack-dev-server is not defined");
}
// Add a middleware that logs access to static files
devServer.app.use(
"/",
express.static(path.join(__dirname, "dist_web"), {
setHeaders: (res, filepath) => {
console.log(`Serving static file: ${filepath}`);
},
}),
);
// Use the existing middlewares as-is
return middlewares;
},
client: {
overlay: {
errors: false,
warnings: false,
},
logging: "log",
},
host: "0.0.0.0",
https: true,
},
});

View File

@ -0,0 +1,6 @@
const { merge } = require("webpack-merge");
const common = require("./webpack_web.common.js");
module.exports = merge(common, {
mode: "production",
});
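
Both webpack_web.dev.js and webpack_web.prod.js above layer onto webpack_web.common.js through webpack-merge, so dev-server and mode tweaks stay out of the shared config. As a hedged illustration of the same pattern, a further variant could be layered in the same way (the analyzer plugin and file name are hypothetical, not part of this change):

```
// webpack_web.analyze.js — hypothetical example of extending the shared config.
const { merge } = require("webpack-merge");
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");
const common = require("./webpack_web.common.js");

module.exports = merge(common, {
    mode: "production",
    // webpack-merge appends arrays, so this is added to the plugins from the common config.
    plugins: [new BundleAnalyzerPlugin({ analyzerMode: "static" })],
});
```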

View File

@ -1,8 +1,11 @@
{
"workbench.colorCustomizations": {
"tab.activeBackground": "#65952acc"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"prettier.printWidth": 1024,
"prettier.tabWidth": 4
"workbench.colorCustomizations": {
"tab.activeBackground": "#65952acc"
},
"editor.defaultFormatter": "esbenp.prettier-vscode",
"prettier.printWidth": 1024,
"prettier.tabWidth": 4,
"files.associations": {
"*.css": "postcss"
}
}

File diff suppressed because it is too large

View File

@ -1,6 +1,6 @@
{
"name": "@dannadori/voice-changer-client-js",
"version": "1.0.175",
"version": "1.0.182",
"description": "",
"main": "dist/index.js",
"directories": {
@ -26,35 +26,35 @@
"author": "wataru.okada@flect.co.jp",
"license": "ISC",
"devDependencies": {
"@types/audioworklet": "^0.0.50",
"@types/jest": "^29.5.7",
"@types/node": "^20.8.10",
"@types/react": "18.2.34",
"@types/react-dom": "18.2.14",
"eslint": "^8.52.0",
"eslint-config-prettier": "^9.0.0",
"eslint-plugin-prettier": "^5.0.1",
"@types/audioworklet": "^0.0.54",
"@types/jest": "^29.5.12",
"@types/node": "^20.11.21",
"@types/react": "18.2.60",
"@types/react-dom": "18.2.19",
"eslint": "^8.57.0",
"eslint-config-prettier": "^9.1.0",
"eslint-plugin-prettier": "^5.1.3",
"eslint-plugin-react": "^7.33.2",
"eslint-webpack-plugin": "^4.0.1",
"jest": "^29.7.0",
"npm-run-all": "^4.1.5",
"prettier": "^3.0.3",
"prettier": "^3.2.5",
"raw-loader": "^4.0.2",
"rimraf": "^5.0.5",
"ts-loader": "^9.5.0",
"typescript": "^5.2.2",
"webpack": "^5.89.0",
"ts-loader": "^9.5.1",
"typescript": "^5.3.3",
"webpack": "^5.90.3",
"webpack-cli": "^5.1.4",
"webpack-dev-server": "^4.15.1"
"webpack-dev-server": "^5.0.2"
},
"dependencies": {
"@types/readable-stream": "^4.0.4",
"amazon-chime-sdk-js": "^3.18.2",
"@types/readable-stream": "^4.0.10",
"amazon-chime-sdk-js": "^3.20.0",
"buffer": "^6.0.3",
"localforage": "^1.10.0",
"protobufjs": "^7.2.5",
"protobufjs": "^7.2.6",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"socket.io-client": "^4.7.2"
"socket.io-client": "^4.7.4"
}
}

View File

@ -58,6 +58,7 @@ export class VoiceChangerClient {
// const ctx44k = new AudioContext({ sampleRate: 44100 }) // crackling still remains with this
const ctx44k = new AudioContext({ sampleRate: 48000 }); // in the end this is the least bad option.
// const ctx44k = new AudioContext({ sampleRate: 16000 }); // LLVC test => 16K output works without crackling.
console.log("audio out:", ctx44k);
try {
this.vcOutNode = new VoiceChangerWorkletNode(ctx44k, voiceChangerWorkletListener); // vc node

View File

@ -1,383 +1,443 @@
import { VoiceChangerWorkletProcessorRequest } from "../@types/voice-changer-worklet-processor";
import { DefaultClientSettng, DownSamplingMode, VOICE_CHANGER_CLIENT_EXCEPTION, WorkletNodeSetting, WorkletSetting } from "../const";
import {
DefaultClientSettng,
DownSamplingMode,
VOICE_CHANGER_CLIENT_EXCEPTION,
WorkletNodeSetting,
WorkletSetting,
} from "../const";
import { io, Socket } from "socket.io-client";
import { DefaultEventsMap } from "@socket.io/component-emitter";
import { ServerRestClient } from "./ServerRestClient";
export type VoiceChangerWorkletListener = {
notifyVolume: (vol: number) => void;
notifySendBufferingTime: (time: number) => void;
notifyResponseTime: (time: number, perf?: number[]) => void;
notifyException: (code: VOICE_CHANGER_CLIENT_EXCEPTION, message: string) => void;
notifyVolume: (vol: number) => void;
notifySendBufferingTime: (time: number) => void;
notifyResponseTime: (time: number, perf?: number[]) => void;
notifyException: (
code: VOICE_CHANGER_CLIENT_EXCEPTION,
message: string
) => void;
};
export type InternalCallback = {
processAudio: (data: Uint8Array) => Promise<Uint8Array>;
processAudio: (data: Uint8Array) => Promise<Uint8Array>;
};
export class VoiceChangerWorkletNode extends AudioWorkletNode {
private listener: VoiceChangerWorkletListener;
private listener: VoiceChangerWorkletListener;
private setting: WorkletNodeSetting = DefaultClientSettng.workletNodeSetting;
private requestChunks: ArrayBuffer[] = [];
private socket: Socket<DefaultEventsMap, DefaultEventsMap> | null = null;
// performance monitor
private bufferStart = 0;
private setting: WorkletNodeSetting = DefaultClientSettng.workletNodeSetting;
private requestChunks: ArrayBuffer[] = [];
private socket: Socket<DefaultEventsMap, DefaultEventsMap> | null = null;
// performance monitor
private bufferStart = 0;
private isOutputRecording = false;
private recordingOutputChunk: Float32Array[] = [];
private outputNode: VoiceChangerWorkletNode | null = null;
private isOutputRecording = false;
private recordingOutputChunk: Float32Array[] = [];
private outputNode: VoiceChangerWorkletNode | null = null;
// Promises
private startPromiseResolve: ((value: void | PromiseLike<void>) => void) | null = null;
private stopPromiseResolve: ((value: void | PromiseLike<void>) => void) | null = null;
// Promises
private startPromiseResolve:
| ((value: void | PromiseLike<void>) => void)
| null = null;
private stopPromiseResolve:
| ((value: void | PromiseLike<void>) => void)
| null = null;
// InternalCallback
private internalCallback: InternalCallback | null = null;
// InternalCallback
private internalCallback: InternalCallback | null = null;
constructor(context: AudioContext, listener: VoiceChangerWorkletListener) {
super(context, "voice-changer-worklet-processor");
this.port.onmessage = this.handleMessage.bind(this);
this.listener = listener;
this.createSocketIO();
console.log(`[worklet_node][voice-changer-worklet-processor] created.`);
constructor(context: AudioContext, listener: VoiceChangerWorkletListener) {
super(context, "voice-changer-worklet-processor");
this.port.onmessage = this.handleMessage.bind(this);
this.listener = listener;
this.createSocketIO();
console.log(`[worklet_node][voice-changer-worklet-processor] created.`);
}
setOutputNode = (outputNode: VoiceChangerWorkletNode | null) => {
this.outputNode = outputNode;
};
// Settings
updateSetting = (setting: WorkletNodeSetting) => {
console.log(
`[WorkletNode] Updating WorkletNode Setting,`,
this.setting,
setting
);
let recreateSocketIoRequired = false;
if (
this.setting.serverUrl != setting.serverUrl ||
this.setting.protocol != setting.protocol
) {
recreateSocketIoRequired = true;
}
this.setting = setting;
if (recreateSocketIoRequired) {
this.createSocketIO();
}
};
setInternalAudioProcessCallback = (internalCallback: InternalCallback) => {
this.internalCallback = internalCallback;
};
getSettings = (): WorkletNodeSetting => {
return this.setting;
};
getSocketId = () => {
return this.socket?.id;
};
// Processing
private createSocketIO = () => {
if (this.socket) {
this.socket.close();
}
if (this.setting.protocol === "sio") {
this.socket = io(this.setting.serverUrl + "/test");
this.socket.on("connect_error", (err) => {
this.listener.notifyException(
VOICE_CHANGER_CLIENT_EXCEPTION.ERR_SIO_CONNECT_FAILED,
`[SIO] rconnection failed ${err}`
);
});
this.socket.on("connect", () => {
console.log(`[SIO] connect to ${this.setting.serverUrl}`);
console.log(`[SIO] ${this.socket?.id}`);
});
this.socket.on("close", function (socket) {
console.log(`[SIO] close ${socket.id}`);
});
this.socket.on("message", (response: any[]) => {
console.log("message:", response);
});
this.socket.on("response", (response: any[]) => {
const cur = Date.now();
const responseTime = cur - response[0];
const result = response[1] as ArrayBuffer;
const perf = response[2];
// Quick hack for server device mode
if (response[0] == 0) {
this.listener.notifyResponseTime(
Math.round(perf[0] * 1000),
perf.slice(1, 4)
);
return;
}
if (result.byteLength < 128 * 2) {
this.listener.notifyException(
VOICE_CHANGER_CLIENT_EXCEPTION.ERR_SIO_INVALID_RESPONSE,
`[SIO] recevied data is too short ${result.byteLength}`
);
} else {
if (this.outputNode != null) {
this.outputNode.postReceivedVoice(response[1]);
} else {
this.postReceivedVoice(response[1]);
}
this.listener.notifyResponseTime(responseTime, perf);
}
});
}
};
postReceivedVoice = (data: ArrayBuffer) => {
// Int16 to Float
const i16Data = new Int16Array(data);
const f32Data = new Float32Array(i16Data.length);
// console.log(`[worklet] f32DataLength${f32Data.length} i16DataLength${i16Data.length}`)
i16Data.forEach((x, i) => {
const float = x >= 0x8000 ? -(0x10000 - x) / 0x8000 : x / 0x7fff;
f32Data[i] = float;
});
// Upsampling
let upSampledBuffer: Float32Array | null = null;
if (this.setting.sendingSampleRate == 48000) {
upSampledBuffer = f32Data;
} else {
upSampledBuffer = new Float32Array(f32Data.length * 2);
for (let i = 0; i < f32Data.length; i++) {
const currentFrame = f32Data[i];
const nextFrame = i + 1 < f32Data.length ? f32Data[i + 1] : f32Data[i];
upSampledBuffer[i * 2] = currentFrame;
upSampledBuffer[i * 2 + 1] = (currentFrame + nextFrame) / 2;
}
}
setOutputNode = (outputNode: VoiceChangerWorkletNode | null) => {
this.outputNode = outputNode;
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "voice",
voice: upSampledBuffer,
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
// Settings
updateSetting = (setting: WorkletNodeSetting) => {
console.log(`[WorkletNode] Updating WorkletNode Setting,`, this.setting, setting);
let recreateSocketIoRequired = false;
if (this.setting.serverUrl != setting.serverUrl || this.setting.protocol != setting.protocol) {
recreateSocketIoRequired = true;
}
this.setting = setting;
if (recreateSocketIoRequired) {
this.createSocketIO();
}
};
setInternalAudioProcessCallback = (internalCallback: InternalCallback) => {
this.internalCallback = internalCallback;
};
getSettings = (): WorkletNodeSetting => {
return this.setting;
};
getSocketId = () => {
return this.socket?.id;
};
// Processing
private createSocketIO = () => {
if (this.socket) {
this.socket.close();
}
if (this.setting.protocol === "sio") {
this.socket = io(this.setting.serverUrl + "/test");
this.socket.on("connect_error", (err) => {
this.listener.notifyException(VOICE_CHANGER_CLIENT_EXCEPTION.ERR_SIO_CONNECT_FAILED, `[SIO] rconnection failed ${err}`);
});
this.socket.on("connect", () => {
console.log(`[SIO] connect to ${this.setting.serverUrl}`);
console.log(`[SIO] ${this.socket?.id}`);
});
this.socket.on("close", function (socket) {
console.log(`[SIO] close ${socket.id}`);
});
this.socket.on("message", (response: any[]) => {
console.log("message:", response);
});
this.socket.on("response", (response: any[]) => {
const cur = Date.now();
const responseTime = cur - response[0];
const result = response[1] as ArrayBuffer;
const perf = response[2];
// Quick hack for server device mode
if (response[0] == 0) {
this.listener.notifyResponseTime(Math.round(perf[0] * 1000), perf.slice(1, 4));
return;
}
if (result.byteLength < 128 * 2) {
this.listener.notifyException(VOICE_CHANGER_CLIENT_EXCEPTION.ERR_SIO_INVALID_RESPONSE, `[SIO] recevied data is too short ${result.byteLength}`);
} else {
if (this.outputNode != null) {
this.outputNode.postReceivedVoice(response[1]);
} else {
this.postReceivedVoice(response[1]);
}
this.listener.notifyResponseTime(responseTime, perf);
}
});
}
};
postReceivedVoice = (data: ArrayBuffer) => {
// Int16 to Float
const i16Data = new Int16Array(data);
const f32Data = new Float32Array(i16Data.length);
// console.log(`[worklet] f32DataLength${f32Data.length} i16DataLength${i16Data.length}`)
i16Data.forEach((x, i) => {
const float = x >= 0x8000 ? -(0x10000 - x) / 0x8000 : x / 0x7fff;
f32Data[i] = float;
});
// Upsampling
let upSampledBuffer: Float32Array | null = null;
if (this.setting.sendingSampleRate == 48000) {
upSampledBuffer = f32Data;
} else {
upSampledBuffer = new Float32Array(f32Data.length * 2);
for (let i = 0; i < f32Data.length; i++) {
const currentFrame = f32Data[i];
const nextFrame = i + 1 < f32Data.length ? f32Data[i + 1] : f32Data[i];
upSampledBuffer[i * 2] = currentFrame;
upSampledBuffer[i * 2 + 1] = (currentFrame + nextFrame) / 2;
}
}
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "voice",
voice: upSampledBuffer,
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
if (this.isOutputRecording) {
this.recordingOutputChunk.push(upSampledBuffer);
}
};
private _averageDownsampleBuffer(buffer: Float32Array, originalSampleRate: number, destinationSamplerate: number) {
if (originalSampleRate == destinationSamplerate) {
return buffer;
}
if (destinationSamplerate > originalSampleRate) {
throw "downsampling rate show be smaller than original sample rate";
}
const sampleRateRatio = originalSampleRate / destinationSamplerate;
const newLength = Math.round(buffer.length / sampleRateRatio);
const result = new Float32Array(newLength);
let offsetResult = 0;
let offsetBuffer = 0;
while (offsetResult < result.length) {
var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);
// Use average value of skipped samples
var accum = 0,
count = 0;
for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
accum += buffer[i];
count++;
}
result[offsetResult] = accum / count;
// Or you can simply get rid of the skipped samples:
// result[offsetResult] = buffer[nextOffsetBuffer];
offsetResult++;
offsetBuffer = nextOffsetBuffer;
}
return result;
if (this.isOutputRecording) {
this.recordingOutputChunk.push(upSampledBuffer);
}
handleMessage(event: any) {
// console.log(`[Node:handleMessage_] `, event.data.volume);
if (event.data.responseType === "start_ok") {
if (this.startPromiseResolve) {
this.startPromiseResolve();
this.startPromiseResolve = null;
}
} else if (event.data.responseType === "stop_ok") {
if (this.stopPromiseResolve) {
this.stopPromiseResolve();
this.stopPromiseResolve = null;
}
} else if (event.data.responseType === "volume") {
this.listener.notifyVolume(event.data.volume as number);
} else if (event.data.responseType === "inputData") {
const inputData = event.data.inputData as Float32Array;
// console.log("receive input data", inputData);
};
// Downsampling
let downsampledBuffer: Float32Array | null = null;
if (this.setting.sendingSampleRate == 48000) {
downsampledBuffer = inputData;
} else if (this.setting.downSamplingMode == DownSamplingMode.decimate) {
//////// (Kind 1) Decimation //////////
//// Input arrives at 48000 Hz, so decimate it to convert to 24000 Hz.
downsampledBuffer = new Float32Array(inputData.length / 2);
for (let i = 0; i < inputData.length; i++) {
if (i % 2 == 0) {
downsampledBuffer[i / 2] = inputData[i];
}
}
} else {
//////// (Kind 2) Averaging //////////
// downsampledBuffer = this._averageDownsampleBuffer(buffer, 48000, 24000)
downsampledBuffer = this._averageDownsampleBuffer(inputData, 48000, this.setting.sendingSampleRate);
}
// Float to Int16 (for the internal protocol the data stays as float.)
if (this.setting.protocol != "internal") {
const arrayBuffer = new ArrayBuffer(downsampledBuffer.length * 2);
const dataView = new DataView(arrayBuffer);
for (let i = 0; i < downsampledBuffer.length; i++) {
let s = Math.max(-1, Math.min(1, downsampledBuffer[i]));
s = s < 0 ? s * 0x8000 : s * 0x7fff;
dataView.setInt16(i * 2, s, true);
}
// Buffering
this.requestChunks.push(arrayBuffer);
} else {
// internal
// console.log("downsampledBuffer.buffer", downsampledBuffer.buffer);
this.requestChunks.push(downsampledBuffer.buffer);
}
//// If the request buffer does not yet hold the configured number of chunks, stop here.
if (this.requestChunks.length < this.setting.inputChunkNum) {
return;
}
// Create the container for the request
const windowByteLength = this.requestChunks.reduce((prev, cur) => {
return prev + cur.byteLength;
}, 0);
const newBuffer = new Uint8Array(windowByteLength);
// Copy the request data into it
this.requestChunks.reduce((prev, cur) => {
newBuffer.set(new Uint8Array(cur), prev);
return prev + cur.byteLength;
}, 0);
this.sendBuffer(newBuffer);
this.requestChunks = [];
this.listener.notifySendBufferingTime(Date.now() - this.bufferStart);
this.bufferStart = Date.now();
} else {
console.warn(`[worklet_node][voice-changer-worklet-processor] unknown response ${event.data.responseType}`, event.data);
}
private _averageDownsampleBuffer(
buffer: Float32Array,
originalSampleRate: number,
destinationSamplerate: number
) {
if (originalSampleRate == destinationSamplerate) {
return buffer;
}
if (destinationSamplerate > originalSampleRate) {
throw "downsampling rate show be smaller than original sample rate";
}
const sampleRateRatio = originalSampleRate / destinationSamplerate;
const newLength = Math.round(buffer.length / sampleRateRatio);
const result = new Float32Array(newLength);
let offsetResult = 0;
let offsetBuffer = 0;
while (offsetResult < result.length) {
var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);
// Use average value of skipped samples
var accum = 0,
count = 0;
for (
var i = offsetBuffer;
i < nextOffsetBuffer && i < buffer.length;
i++
) {
accum += buffer[i];
count++;
}
result[offsetResult] = accum / count;
// Or you can simply get rid of the skipped samples:
// result[offsetResult] = buffer[nextOffsetBuffer];
offsetResult++;
offsetBuffer = nextOffsetBuffer;
}
return result;
}
handleMessage(event: any) {
// console.log(`[Node:handleMessage_] `, event.data.volume);
if (event.data.responseType === "start_ok") {
if (this.startPromiseResolve) {
this.startPromiseResolve();
this.startPromiseResolve = null;
}
} else if (event.data.responseType === "stop_ok") {
if (this.stopPromiseResolve) {
this.stopPromiseResolve();
this.stopPromiseResolve = null;
}
} else if (event.data.responseType === "volume") {
this.listener.notifyVolume(event.data.volume as number);
} else if (event.data.responseType === "inputData") {
const inputData = event.data.inputData as Float32Array;
// console.log("receive input data", inputData);
private sendBuffer = async (newBuffer: Uint8Array) => {
const timestamp = Date.now();
if (this.setting.protocol === "sio") {
if (!this.socket) {
console.warn(`sio is not initialized`);
return;
}
// console.log("emit!")
this.socket.emit("request_message", [timestamp, newBuffer.buffer]);
} else if (this.setting.protocol === "rest") {
const restClient = new ServerRestClient(this.setting.serverUrl);
const res = await restClient.postVoice(timestamp, newBuffer.buffer);
if (res.byteLength < 128 * 2) {
this.listener.notifyException(VOICE_CHANGER_CLIENT_EXCEPTION.ERR_REST_INVALID_RESPONSE, `[REST] recevied data is too short ${res.byteLength}`);
} else {
if (this.outputNode != null) {
this.outputNode.postReceivedVoice(res);
} else {
this.postReceivedVoice(res);
}
this.listener.notifyResponseTime(Date.now() - timestamp);
}
} else if (this.setting.protocol == "internal") {
if (!this.internalCallback) {
this.listener.notifyException(VOICE_CHANGER_CLIENT_EXCEPTION.ERR_INTERNAL_AUDIO_PROCESS_CALLBACK_IS_NOT_INITIALIZED, `[AudioWorkletNode] internal audio process callback is not initialized`);
return;
}
const res = await this.internalCallback.processAudio(newBuffer);
if (res.length < 128 * 2) {
return;
}
if (this.outputNode != null) {
this.outputNode.postReceivedVoice(res.buffer);
} else {
this.postReceivedVoice(res.buffer);
}
// Downsampling
let downsampledBuffer: Float32Array | null = null;
if (this.setting.sendingSampleRate == 48000) {
downsampledBuffer = inputData;
} else if (this.setting.downSamplingMode == DownSamplingMode.decimate) {
//////// (Kind 1) Decimation //////////
//// Input arrives at 48000 Hz, so decimate it to convert to 24000 Hz.
downsampledBuffer = new Float32Array(inputData.length / 2);
for (let i = 0; i < inputData.length; i++) {
if (i % 2 == 0) {
downsampledBuffer[i / 2] = inputData[i];
}
}
} else {
//////// (Kind 2) Averaging //////////
// downsampledBuffer = this._averageDownsampleBuffer(buffer, 48000, 24000)
downsampledBuffer = this._averageDownsampleBuffer(
inputData,
48000,
this.setting.sendingSampleRate
);
}
// Float to Int16 (for the internal protocol the data stays as float.)
if (this.setting.protocol != "internal") {
const arrayBuffer = new ArrayBuffer(downsampledBuffer.length * 2);
const dataView = new DataView(arrayBuffer);
for (let i = 0; i < downsampledBuffer.length; i++) {
let s = Math.max(-1, Math.min(1, downsampledBuffer[i]));
s = s < 0 ? s * 0x8000 : s * 0x7fff;
dataView.setInt16(i * 2, s, true);
}
// Buffering
this.requestChunks.push(arrayBuffer);
} else {
// internal
// console.log("downsampledBuffer.buffer", downsampledBuffer.buffer);
this.requestChunks.push(downsampledBuffer.buffer);
}
//// If the request buffer does not yet hold the configured number of chunks, stop here.
if (this.requestChunks.length < this.setting.inputChunkNum) {
return;
}
// Create the container for the request
const windowByteLength = this.requestChunks.reduce((prev, cur) => {
return prev + cur.byteLength;
}, 0);
const newBuffer = new Uint8Array(windowByteLength);
// Copy the request data into it
this.requestChunks.reduce((prev, cur) => {
newBuffer.set(new Uint8Array(cur), prev);
return prev + cur.byteLength;
}, 0);
this.sendBuffer(newBuffer);
this.requestChunks = [];
this.listener.notifySendBufferingTime(Date.now() - this.bufferStart);
this.bufferStart = Date.now();
} else {
console.warn(
`[worklet_node][voice-changer-worklet-processor] unknown response ${event.data.responseType}`,
event.data
);
}
}
private sendBuffer = async (newBuffer: Uint8Array) => {
const timestamp = Date.now();
if (this.setting.protocol === "sio") {
if (!this.socket) {
console.warn(`sio is not initialized`);
return;
}
// console.log("emit!")
this.socket.emit("request_message", [timestamp, newBuffer.buffer]);
} else if (this.setting.protocol === "rest") {
const restClient = new ServerRestClient(this.setting.serverUrl);
const res = await restClient.postVoice(timestamp, newBuffer.buffer);
if (res.byteLength < 128 * 2) {
this.listener.notifyException(
VOICE_CHANGER_CLIENT_EXCEPTION.ERR_REST_INVALID_RESPONSE,
`[REST] recevied data is too short ${res.byteLength}`
);
} else {
if (this.outputNode != null) {
this.outputNode.postReceivedVoice(res);
} else {
throw "unknown protocol";
this.postReceivedVoice(res);
}
};
// Worklet operations
configure = (setting: WorkletSetting) => {
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "config",
voice: new Float32Array(1),
numTrancateTreshold: setting.numTrancateTreshold,
volTrancateThreshold: setting.volTrancateThreshold,
volTrancateLength: setting.volTrancateLength,
};
this.port.postMessage(req);
};
start = async () => {
const p = new Promise<void>((resolve) => {
this.startPromiseResolve = resolve;
});
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "start",
voice: new Float32Array(1),
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
await p;
};
stop = async () => {
const p = new Promise<void>((resolve) => {
this.stopPromiseResolve = resolve;
});
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "stop",
voice: new Float32Array(1),
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
await p;
};
trancateBuffer = () => {
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "trancateBuffer",
voice: new Float32Array(1),
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
};
startOutputRecording = () => {
this.recordingOutputChunk = [];
this.isOutputRecording = true;
};
stopOutputRecording = () => {
this.isOutputRecording = false;
const dataSize = this.recordingOutputChunk.reduce((prev, cur) => {
return prev + cur.length;
}, 0);
const samples = new Float32Array(dataSize);
let sampleIndex = 0;
for (let i = 0; i < this.recordingOutputChunk.length; i++) {
for (let j = 0; j < this.recordingOutputChunk[i].length; j++) {
samples[sampleIndex] = this.recordingOutputChunk[i][j];
sampleIndex++;
}
this.listener.notifyResponseTime(Date.now() - timestamp);
}
} else if (this.setting.protocol == "internal") {
if (!this.internalCallback) {
this.listener.notifyException(
VOICE_CHANGER_CLIENT_EXCEPTION.ERR_INTERNAL_AUDIO_PROCESS_CALLBACK_IS_NOT_INITIALIZED,
`[AudioWorkletNode] internal audio process callback is not initialized`
);
return;
}
// const res = await this.internalCallback.processAudio(newBuffer);
// if (res.length < 128 * 2) {
// return;
// }
// if (this.outputNode != null) {
// this.outputNode.postReceivedVoice(res.buffer);
// } else {
// this.postReceivedVoice(res.buffer);
// }
this.internalCallback.processAudio(newBuffer).then((res) => {
if (res.length < 128 * 2) {
return;
}
return samples;
if (this.outputNode != null) {
this.outputNode.postReceivedVoice(res.buffer);
} else {
this.postReceivedVoice(res.buffer);
}
});
} else {
throw "unknown protocol";
}
};
// Worklet operations
configure = (setting: WorkletSetting) => {
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "config",
voice: new Float32Array(1),
numTrancateTreshold: setting.numTrancateTreshold,
volTrancateThreshold: setting.volTrancateThreshold,
volTrancateLength: setting.volTrancateLength,
};
this.port.postMessage(req);
};
start = async () => {
const p = new Promise<void>((resolve) => {
this.startPromiseResolve = resolve;
});
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "start",
voice: new Float32Array(1),
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
await p;
};
stop = async () => {
const p = new Promise<void>((resolve) => {
this.stopPromiseResolve = resolve;
});
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "stop",
voice: new Float32Array(1),
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
await p;
};
trancateBuffer = () => {
const req: VoiceChangerWorkletProcessorRequest = {
requestType: "trancateBuffer",
voice: new Float32Array(1),
numTrancateTreshold: 0,
volTrancateThreshold: 0,
volTrancateLength: 0,
};
this.port.postMessage(req);
};
startOutputRecording = () => {
this.recordingOutputChunk = [];
this.isOutputRecording = true;
};
stopOutputRecording = () => {
this.isOutputRecording = false;
const dataSize = this.recordingOutputChunk.reduce((prev, cur) => {
return prev + cur.length;
}, 0);
const samples = new Float32Array(dataSize);
let sampleIndex = 0;
for (let i = 0; i < this.recordingOutputChunk.length; i++) {
for (let j = 0; j < this.recordingOutputChunk[i].length; j++) {
samples[sampleIndex] = this.recordingOutputChunk[i][j];
sampleIndex++;
}
}
return samples;
};
}
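
Most of the VoiceChangerWorkletNode reformatting above surrounds the PCM plumbing: received Int16 audio is converted to Float32 (and naively upsampled), while outgoing Float32 audio is decimated or averaged down before being packed into little-endian Int16. A standalone sketch of the two conversions, independent of the class (function names are illustrative; the scaling mirrors postReceivedVoice and handleMessage above):

```
// Int16 (as ArrayBuffer) -> Float32, mirroring the scaling in postReceivedVoice.
const int16ToFloat32 = (data: ArrayBuffer): Float32Array => {
    const i16 = new Int16Array(data);
    const f32 = new Float32Array(i16.length);
    i16.forEach((x, i) => {
        f32[i] = x >= 0x8000 ? -(0x10000 - x) / 0x8000 : x / 0x7fff;
    });
    return f32;
};

// Float32 -> little-endian Int16 ArrayBuffer, as done before sending over sio/rest.
const float32ToInt16 = (f32: Float32Array): ArrayBuffer => {
    const buf = new ArrayBuffer(f32.length * 2);
    const view = new DataView(buf);
    for (let i = 0; i < f32.length; i++) {
        let s = Math.max(-1, Math.min(1, f32[i]));
        s = s < 0 ? s * 0x8000 : s * 0x7fff;
        view.setInt16(i * 2, s, true);
    }
    return buf;
};
```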

View File

@ -11,6 +11,9 @@ export const VoiceChangerType = {
RVC: "RVC",
"Diffusion-SVC": "Diffusion-SVC",
Beatrice: "Beatrice",
LLVC: "LLVC",
WebModel: "WebModel",
EasyVC: "EasyVC",
} as const;
export type VoiceChangerType = (typeof VoiceChangerType)[keyof typeof VoiceChangerType];
@ -37,6 +40,9 @@ export const ModelSamplingRate = {
export type ModelSamplingRate = (typeof InputSampleRate)[keyof typeof InputSampleRate];
export const CrossFadeOverlapSize = {
"128": 128,
"256": 256,
"512": 512,
"1024": 1024,
"2048": 2048,
"4096": 4096,
@ -51,6 +57,7 @@ export const F0Detector = {
crepe_tiny: "crepe_tiny",
rmvpe: "rmvpe",
rmvpe_onnx: "rmvpe_onnx",
fcpe: "fcpe",
} as const;
export type F0Detector = (typeof F0Detector)[keyof typeof F0Detector];
@ -296,7 +303,22 @@ export type BeatriceModelSlot = ModelSlot & {
speakers: { [key: number]: string };
};
export type ModelSlotUnion = RVCModelSlot | MMVCv13ModelSlot | MMVCv15ModelSlot | SoVitsSvc40ModelSlot | DDSPSVCModelSlot | DiffusionSVCModelSlot | BeatriceModelSlot;
export type LLVCModelSlot = ModelSlot & {
modelFile: string;
configFile: string;
speakers: { [key: number]: string };
};
export type WebModelSlot = ModelSlot & {
modelFile: string;
defaultTune: number;
modelType: RVCModelType;
f0: boolean;
samplingRate: number;
};
export type ModelSlotUnion = RVCModelSlot | MMVCv13ModelSlot | MMVCv15ModelSlot | SoVitsSvc40ModelSlot | DDSPSVCModelSlot | DiffusionSVCModelSlot | BeatriceModelSlot | LLVCModelSlot | WebModelSlot;
type ServerAudioDevice = {
kind: "audioinput" | "audiooutput";
@ -507,7 +529,7 @@ export const DefaultClientSettng: ClientSetting = {
serverUrl: "",
protocol: "sio",
sendingSampleRate: 48000,
inputChunkNum: 48,
inputChunkNum: 192,
downSamplingMode: "average",
},
voiceChangerClientSetting: {
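
The const.ts changes above add LLVC, WebModel and EasyVC to VoiceChangerType and extend ModelSlotUnion with LLVCModelSlot and WebModelSlot. Assuming the base ModelSlot carries a voiceChangerType discriminant (as the other slot types in this file do), a hedged sketch of narrowing the widened union on the client:

```
// Sketch only: assumes ModelSlot exposes a voiceChangerType field; adjust if the base type differs.
const isWebModelSlot = (slot: ModelSlotUnion): slot is WebModelSlot => {
    return slot.voiceChangerType === VoiceChangerType.WebModel;
};

const describeSlot = (slot: ModelSlotUnion): string => {
    if (isWebModelSlot(slot)) {
        return `web model: f0=${slot.f0}, samplingRate=${slot.samplingRate}`;
    }
    return `slot of type ${slot.voiceChangerType}`;
};
```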

View File

@ -127,7 +127,7 @@ export const useClient = (props: UseClientProps): ClientState => {
};
// Manage configuration data
const { setItem, getItem } = useIndexedDB({ clientType: null });
const { setItem, getItem, removeItem } = useIndexedDB({ clientType: null });
// Update and persist configuration data
const _setSetting = (_setting: ClientSetting) => {
const storeData = { ..._setting };
@ -231,7 +231,7 @@ export const useClient = (props: UseClientProps): ClientState => {
}, [voiceChangerClientSetting.reloadClientSetting, serverSetting.reloadServerInfo]);
const clearSetting = async () => {
// TBD
await removeItem("clientSetting");
};
// Change settings
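
clearSetting above now removes the persisted "clientSetting" entry instead of the old TBD stub. A minimal hedged sketch of how a reset action could use it (the handler and the reload are illustrative, and assume clearSetting is exposed from the client state):

```
// Illustrative only: wipe the stored client settings and start clean on the next load.
const onResetSettingsClicked = async () => {
    await clearSetting(); // deletes the "clientSetting" entry from IndexedDB
    location.reload();    // defaults are used on the next startup
};
```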

View File

@ -1,221 +1,279 @@
import { useState, useMemo } from "react";
import { VoiceChangerServerSetting, ServerInfo, ServerSettingKey, OnnxExporterInfo, MergeModelRequest, VoiceChangerType, DefaultServerSetting } from "../const";
import {
VoiceChangerServerSetting,
ServerInfo,
ServerSettingKey,
OnnxExporterInfo,
MergeModelRequest,
VoiceChangerType,
DefaultServerSetting,
} from "../const";
import { VoiceChangerClient } from "../VoiceChangerClient";
export const ModelAssetName = {
iconFile: "iconFile",
iconFile: "iconFile",
} as const;
export type ModelAssetName = (typeof ModelAssetName)[keyof typeof ModelAssetName];
export type ModelAssetName =
(typeof ModelAssetName)[keyof typeof ModelAssetName];
export const ModelFileKind = {
mmvcv13Config: "mmvcv13Config",
mmvcv13Model: "mmvcv13Model",
mmvcv15Config: "mmvcv15Config",
mmvcv15Model: "mmvcv15Model",
mmvcv15Correspondence: "mmvcv15Correspondence",
mmvcv13Config: "mmvcv13Config",
mmvcv13Model: "mmvcv13Model",
mmvcv15Config: "mmvcv15Config",
mmvcv15Model: "mmvcv15Model",
mmvcv15Correspondence: "mmvcv15Correspondence",
soVitsSvc40Config: "soVitsSvc40Config",
soVitsSvc40Model: "soVitsSvc40Model",
soVitsSvc40Cluster: "soVitsSvc40Cluster",
soVitsSvc40Config: "soVitsSvc40Config",
soVitsSvc40Model: "soVitsSvc40Model",
soVitsSvc40Cluster: "soVitsSvc40Cluster",
rvcModel: "rvcModel",
rvcIndex: "rvcIndex",
rvcModel: "rvcModel",
rvcIndex: "rvcIndex",
ddspSvcModel: "ddspSvcModel",
ddspSvcModelConfig: "ddspSvcModelConfig",
ddspSvcDiffusion: "ddspSvcDiffusion",
ddspSvcDiffusionConfig: "ddspSvcDiffusionConfig",
ddspSvcModel: "ddspSvcModel",
ddspSvcModelConfig: "ddspSvcModelConfig",
ddspSvcDiffusion: "ddspSvcDiffusion",
ddspSvcDiffusionConfig: "ddspSvcDiffusionConfig",
diffusionSVCModel: "diffusionSVCModel",
diffusionSVCModel: "diffusionSVCModel",
beatriceModel: "beatriceModel",
beatriceModel: "beatriceModel",
llvcModel: "llvcModel",
llvcConfig: "llvcConfig",
easyVCModel: "easyVCModel",
} as const;
export type ModelFileKind = (typeof ModelFileKind)[keyof typeof ModelFileKind];
export type ModelFile = {
file: File;
kind: ModelFileKind;
dir: string;
file: File;
kind: ModelFileKind;
dir: string;
};
export type ModelUploadSetting = {
voiceChangerType: VoiceChangerType;
slot: number;
isSampleMode: boolean;
sampleId: string | null;
voiceChangerType: VoiceChangerType;
slot: number;
isSampleMode: boolean;
sampleId: string | null;
files: ModelFile[];
params: any;
files: ModelFile[];
params: any;
};
export type ModelFileForServer = Omit<ModelFile, "file"> & {
name: string;
kind: ModelFileKind;
name: string;
kind: ModelFileKind;
};
export type ModelUploadSettingForServer = Omit<ModelUploadSetting, "files"> & {
files: ModelFileForServer[];
files: ModelFileForServer[];
};
type AssetUploadSetting = {
slot: number;
name: ModelAssetName;
file: string;
slot: number;
name: ModelAssetName;
file: string;
};
export type UseServerSettingProps = {
voiceChangerClient: VoiceChangerClient | null;
voiceChangerClient: VoiceChangerClient | null;
};
export type ServerSettingState = {
serverSetting: ServerInfo;
updateServerSettings: (setting: ServerInfo) => Promise<void>;
reloadServerInfo: () => Promise<void>;
serverSetting: ServerInfo;
updateServerSettings: (setting: ServerInfo) => Promise<void>;
reloadServerInfo: () => Promise<void>;
uploadModel: (setting: ModelUploadSetting) => Promise<void>;
uploadProgress: number;
isUploading: boolean;
uploadModel: (setting: ModelUploadSetting) => Promise<void>;
uploadProgress: number;
isUploading: boolean;
getOnnx: () => Promise<OnnxExporterInfo>;
mergeModel: (request: MergeModelRequest) => Promise<ServerInfo>;
updateModelDefault: () => Promise<ServerInfo>;
updateModelInfo: (slot: number, key: string, val: string) => Promise<ServerInfo>;
uploadAssets: (slot: number, name: ModelAssetName, file: File) => Promise<void>;
getOnnx: () => Promise<OnnxExporterInfo>;
mergeModel: (request: MergeModelRequest) => Promise<ServerInfo>;
updateModelDefault: () => Promise<ServerInfo>;
updateModelInfo: (
slot: number,
key: string,
val: string
) => Promise<ServerInfo>;
uploadAssets: (
slot: number,
name: ModelAssetName,
file: File
) => Promise<void>;
};
export const useServerSetting = (props: UseServerSettingProps): ServerSettingState => {
const [serverSetting, setServerSetting] = useState<ServerInfo>(DefaultServerSetting);
export const useServerSetting = (
props: UseServerSettingProps
): ServerSettingState => {
const [serverSetting, _setServerSetting] =
useState<ServerInfo>(DefaultServerSetting);
const setServerSetting = (info: ServerInfo) => {
if (!info.modelSlots) {
// When the server returns empty info. Workaround for the Web edition.
return;
}
_setServerSetting(info);
};
//////////////
// Settings
/////////////
const updateServerSettings = useMemo(() => {
return async (setting: ServerInfo) => {
if (!props.voiceChangerClient) return;
for (let i = 0; i < Object.values(ServerSettingKey).length; i++) {
const k = Object.values(ServerSettingKey)[i] as keyof VoiceChangerServerSetting;
const cur_v = serverSetting[k];
const new_v = setting[k];
//////////////
// Settings
/////////////
const updateServerSettings = useMemo(() => {
return async (setting: ServerInfo) => {
if (!props.voiceChangerClient) return;
for (let i = 0; i < Object.values(ServerSettingKey).length; i++) {
const k = Object.values(ServerSettingKey)[
i
] as keyof VoiceChangerServerSetting;
const cur_v = serverSetting[k];
const new_v = setting[k];
if (cur_v != new_v) {
const res = await props.voiceChangerClient.updateServerSettings(k, "" + new_v);
setServerSetting(res);
}
}
};
}, [props.voiceChangerClient, serverSetting]);
//////////////
// Operations
/////////////
const [uploadProgress, setUploadProgress] = useState<number>(0);
const [isUploading, setIsUploading] = useState<boolean>(false);
// (e) Model upload
const _uploadFile2 = useMemo(() => {
return async (file: File, onprogress: (progress: number, end: boolean) => void, dir: string = "") => {
if (!props.voiceChangerClient) return;
const num = await props.voiceChangerClient.uploadFile2(dir, file, onprogress);
const res = await props.voiceChangerClient.concatUploadedFile(dir + file.name, num);
console.log("uploaded", num, res);
};
}, [props.voiceChangerClient]);
// New uploader
const uploadModel = useMemo(() => {
return async (setting: ModelUploadSetting) => {
if (!props.voiceChangerClient) {
return;
}
setUploadProgress(0);
setIsUploading(true);
if (setting.isSampleMode == false) {
const progRate = 1 / setting.files.length;
for (let i = 0; i < setting.files.length; i++) {
const progOffset = 100 * i * progRate;
await _uploadFile2(
setting.files[i].file,
(progress: number, _end: boolean) => {
setUploadProgress(progress * progRate + progOffset);
},
setting.files[i].dir
);
}
}
const params: ModelUploadSettingForServer = {
...setting,
files: setting.files.map((f) => {
return { name: f.file.name, kind: f.kind, dir: f.dir };
}),
};
const loadPromise = props.voiceChangerClient.loadModel(0, false, JSON.stringify(params));
await loadPromise;
setUploadProgress(0);
setIsUploading(false);
reloadServerInfo();
};
}, [props.voiceChangerClient]);
const uploadAssets = useMemo(() => {
return async (slot: number, name: ModelAssetName, file: File) => {
if (!props.voiceChangerClient) return;
await _uploadFile2(file, (progress: number, _end: boolean) => {
console.log(progress, _end);
});
const assetUploadSetting: AssetUploadSetting = {
slot,
name,
file: file.name,
};
await props.voiceChangerClient.uploadAssets(JSON.stringify(assetUploadSetting));
reloadServerInfo();
};
}, [props.voiceChangerClient]);
const reloadServerInfo = useMemo(() => {
return async () => {
if (!props.voiceChangerClient) return;
const res = await props.voiceChangerClient.getServerSettings();
setServerSetting(res);
};
}, [props.voiceChangerClient]);
const getOnnx = async () => {
return props.voiceChangerClient!.getOnnx();
if (cur_v != new_v) {
const res = await props.voiceChangerClient.updateServerSettings(
k,
"" + new_v
);
setServerSetting(res);
}
}
};
}, [props.voiceChangerClient, serverSetting]);
const mergeModel = async (request: MergeModelRequest) => {
const serverInfo = await props.voiceChangerClient!.mergeModel(request);
setServerSetting(serverInfo);
return serverInfo;
};
//////////////
// Operations
/////////////
const [uploadProgress, setUploadProgress] = useState<number>(0);
const [isUploading, setIsUploading] = useState<boolean>(false);
const updateModelDefault = async () => {
const serverInfo = await props.voiceChangerClient!.updateModelDefault();
setServerSetting(serverInfo);
return serverInfo;
};
const updateModelInfo = async (slot: number, key: string, val: string) => {
const serverInfo = await props.voiceChangerClient!.updateModelInfo(slot, key, val);
setServerSetting(serverInfo);
return serverInfo;
// (e) Model upload
const _uploadFile2 = useMemo(() => {
return async (
file: File,
onprogress: (progress: number, end: boolean) => void,
dir: string = ""
) => {
if (!props.voiceChangerClient) return;
const num = await props.voiceChangerClient.uploadFile2(
dir,
file,
onprogress
);
const res = await props.voiceChangerClient.concatUploadedFile(
dir + file.name,
num
);
console.log("uploaded", num, res);
};
}, [props.voiceChangerClient]);
return {
serverSetting,
updateServerSettings,
reloadServerInfo,
// New uploader
const uploadModel = useMemo(() => {
return async (setting: ModelUploadSetting) => {
if (!props.voiceChangerClient) {
return;
}
uploadModel,
uploadProgress,
isUploading,
getOnnx,
mergeModel,
updateModelDefault,
updateModelInfo,
uploadAssets,
setUploadProgress(0);
setIsUploading(true);
if (setting.isSampleMode == false) {
const progRate = 1 / setting.files.length;
for (let i = 0; i < setting.files.length; i++) {
const progOffset = 100 * i * progRate;
await _uploadFile2(
setting.files[i].file,
(progress: number, _end: boolean) => {
setUploadProgress(progress * progRate + progOffset);
},
setting.files[i].dir
);
}
}
const params: ModelUploadSettingForServer = {
...setting,
files: setting.files.map((f) => {
return { name: f.file.name, kind: f.kind, dir: f.dir };
}),
};
const loadPromise = props.voiceChangerClient.loadModel(
0,
false,
JSON.stringify(params)
);
await loadPromise;
setUploadProgress(0);
setIsUploading(false);
reloadServerInfo();
};
}, [props.voiceChangerClient]);
const uploadAssets = useMemo(() => {
return async (slot: number, name: ModelAssetName, file: File) => {
if (!props.voiceChangerClient) return;
await _uploadFile2(file, (progress: number, _end: boolean) => {
console.log(progress, _end);
});
const assetUploadSetting: AssetUploadSetting = {
slot,
name,
file: file.name,
};
await props.voiceChangerClient.uploadAssets(
JSON.stringify(assetUploadSetting)
);
reloadServerInfo();
};
}, [props.voiceChangerClient]);
const reloadServerInfo = useMemo(() => {
return async () => {
if (!props.voiceChangerClient) return;
const res = await props.voiceChangerClient.getServerSettings();
setServerSetting(res);
};
}, [props.voiceChangerClient]);
const getOnnx = async () => {
return props.voiceChangerClient!.getOnnx();
};
const mergeModel = async (request: MergeModelRequest) => {
const serverInfo = await props.voiceChangerClient!.mergeModel(request);
setServerSetting(serverInfo);
return serverInfo;
};
const updateModelDefault = async () => {
const serverInfo = await props.voiceChangerClient!.updateModelDefault();
setServerSetting(serverInfo);
return serverInfo;
};
const updateModelInfo = async (slot: number, key: string, val: string) => {
const serverInfo = await props.voiceChangerClient!.updateModelInfo(
slot,
key,
val
);
setServerSetting(serverInfo);
return serverInfo;
};
return {
serverSetting,
updateServerSettings,
reloadServerInfo,
uploadModel,
uploadProgress,
isUploading,
getOnnx,
mergeModel,
updateModelDefault,
updateModelInfo,
uploadAssets,
};
};
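
The reformatted useServerSetting hook above uploads model files in chunks (uploadFile2 followed by concatUploadedFile) and then asks the server to load them, tracking progress in uploadProgress / isUploading. A hedged usage sketch of the hook's public surface from a component; the slot number and files below are purely illustrative:

```
// Illustrative usage inside a React component; values are made up.
const serverSetting = useServerSetting({ voiceChangerClient });

const uploadRvcModel = async (modelFile: File, indexFile: File) => {
    await serverSetting.uploadModel({
        voiceChangerType: "RVC",
        slot: 0,
        isSampleMode: false,
        sampleId: null,
        files: [
            { file: modelFile, kind: "rvcModel", dir: "" },
            { file: indexFile, kind: "rvcIndex", dir: "" },
        ],
        params: {},
    });
    console.log("progress:", serverSetting.uploadProgress, "uploading:", serverSetting.isUploading);
};
```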

View File

@ -48,6 +48,7 @@ class VoiceChangerWorkletProcessor extends AudioWorkletProcessor {
*/
constructor() {
super();
console.log("[AudioWorkletProcessor] created.");
this.initialized = true;
this.port.onmessage = this.handleMessage.bind(this);
}
@ -106,7 +107,7 @@ class VoiceChangerWorkletProcessor extends AudioWorkletProcessor {
// console.log(`[worklet] Truncate ${this.playBuffer.length} > ${this.numTrancateTreshold}`);
// this.trancateBuffer();
// }
if (this.playBuffer.length > f32Data.length / this.BLOCK_SIZE) {
if (this.playBuffer.length > (f32Data.length / this.BLOCK_SIZE) * 1.5) {
console.log(`[worklet] Truncate ${this.playBuffer.length} > ${f32Data.length / this.BLOCK_SIZE}`);
this.trancateBuffer();
}
@ -171,7 +172,6 @@ class VoiceChangerWorkletProcessor extends AudioWorkletProcessor {
// }
// }
let voice = this.playBuffer.shift();
if (voice) {
this.volume = this.calcVol(voice, this.volume);
const volumeResponse: VoiceChangerWorkletProcessorResponse = {

View File

@ -1,6 +1,6 @@
## VC Client for Docker
[English](./README_en.md)
[English](./README_en.md) [Korean](./README_ko.md)
## ビルド

View File

@ -1,6 +1,7 @@
## VC Client for Docker
[Japanese](./README.md)
[Japanese](./README.md)
[Korean](./README.md)
## Build

View File

@ -0,0 +1,47 @@
## VC Client for Docker
[Japanese](./README.md) [English](./README_en.md)
## 빌드
리포지토리 폴더의 최상위 위치에서
```
npm run build:docker:vcclient
```
## 실행
리포지토리 폴더의 최상위 위치에서
```
bash start_docker.sh
```
브라우저(Chrome에서만 지원)로 접속하면 화면이 나옵니다.
## RUN with options
GPU를 사용하지 않는 경우에는
```
USE_GPU=off bash start_docker.sh
```
포트 번호를 변경하고 싶은 경우에는
```
EX_PORT=<port> bash start_docker.sh
```
로컬 이미지를 사용하고 싶은 경우에는
```
USE_LOCAL=on bash start_docker.sh
```
## Push to Repo (only for devs)
```
npm run push:docker:vcclient
```

148
docs_i18n/README_ar.md Normal file
View File

@ -0,0 +1,148 @@
[اليابانية](/README.md) /
[الإنجليزية](/docs_i18n/README_en.md) /
[الكورية](/docs_i18n/README_ko.md)/
[الصينية](/docs_i18n/README_zh.md)/
[الألمانية](/docs_i18n/README_de.md)/
[العربية](/docs_i18n/README_ar.md)/
[اليونانية](/docs_i18n/README_el.md)/
[الإسبانية](/docs_i18n/README_es.md)/
[الفرنسية](/docs_i18n/README_fr.md)/
[الإيطالية](/docs_i18n/README_it.md)/
[اللاتينية](/docs_i18n/README_la.md)/
[الماليزية](/docs_i18n/README_ms.md)/
[الروسية](/docs_i18n/README_ru.md)
*جميع اللغات باستثناء اليابانية مترجمة آليًا.
## VCClient
VCClient هو برنامج يقوم بتحويل الصوت في الوقت الحقيقي باستخدام الذكاء الاصطناعي.
## ما الجديد!
* v.2.0.78-beta
* إصلاح خطأ: تم تجنب خطأ تحميل نموذج RVC
* أصبح من الممكن الآن التشغيل بالتزامن مع الإصدار 1.x
* تمت زيادة أحجام القطع القابلة للاختيار
* v.2.0.77-beta (لـ RTX 5090 فقط، تجريبي)
* دعم الوحدات المتعلقة بـ RTX 5090 (غير مثبت لأن المطور لا يمتلك RTX 5090)
* v.2.0.76-beta
* ميزة جديدة:
* Beatrice: تنفيذ دمج المتحدثين
* Beatrice: تحويل النغمة التلقائي
* إصلاح الأخطاء:
* حل مشكلة اختيار الجهاز في وضع الخادم
* v.2.0.73-beta
* ميزة جديدة:
* تحميل نموذج beatrice المعدل
* إصلاح الأخطاء:
* تم إصلاح خطأ عدم انعكاس النغمة والصيغة في beatrice v2
* تم إصلاح خطأ عدم إمكانية إنشاء ONNX للنماذج التي تستخدم embedder Applio
## التنزيل والروابط ذات الصلة
يمكن تنزيل نسخة الويندوز ونسخة M1 Mac من مستودع hugging face.
* [مستودع VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [مستودع Light VCClient لـ Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 بالنسبة للينكس، يرجى استنساخ المستودع لاستخدامه.
### روابط ذات صلة
* [مستودع كود التدريب لـ Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [نسخة Colab من كود التدريب لـ Beatrice V2](https://github.com/w-okada/beatrice-trainer-colab)
### البرامج ذات الصلة
* [مغير الصوت في الوقت الحقيقي VCClient](https://github.com/w-okada/voice-changer)
* [برنامج قراءة النصوص TTSClient](https://github.com/w-okada/ttsclient)
* [برنامج التعرف على الصوت في الوقت الحقيقي ASRClient](https://github.com/w-okada/asrclient)
## ميزات VC Client
## يدعم نماذج الذكاء الاصطناعي المتنوعة
| نماذج الذكاء الاصطناعي | v.2 | v.1 | الترخيص |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | مدعوم | مدعوم | يرجى الرجوع إلى المستودع. |
| [Beatrice v1](https://prj-beatrice.com/) | غير متاح | مدعوم (فقط للويندوز) | [خاص](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | مدعوم | غير متاح | [خاص](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | غير متاح | مدعوم | يرجى الرجوع إلى المستودع. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | غير متاح | مدعوم | يرجى الرجوع إلى المستودع. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | غير متاح | مدعوم | يرجى الرجوع إلى المستودع. |
## يدعم كلا من التكوين المستقل وعبر الشبكة
يدعم تحويل الصوت المكتمل على جهاز الكمبيوتر المحلي وكذلك عبر الشبكة.
عند استخدامه عبر الشبكة، يمكن تفريغ عبء تحويل الصوت إلى الخارج عند استخدامه مع تطبيقات عالية التحميل مثل الألعاب.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## يدعم منصات متعددة
ويندوز، ماك (M1)، لينكس، جوجل كولاب
*1 بالنسبة للينكس، يرجى استنساخ المستودع لاستخدامه.
## يوفر REST API
يمكنك إنشاء عميل باستخدام لغات البرمجة المختلفة.
يمكنك أيضًا استخدام عملاء HTTP المدمجة في نظام التشغيل مثل curl للتحكم.
## استكشاف الأخطاء وإصلاحها
[قسم الاتصال](tutorials/trouble_shoot_communication_ja.md)
## حول توقيع المطور
هذا البرنامج غير موقع من قبل المطور. ستظهر تحذيرات كما هو موضح أدناه، ولكن يمكنك تشغيله بالضغط على مفتاح التحكم أثناء النقر على الأيقونة. هذا بسبب سياسة أمان Apple. التشغيل يكون على مسؤوليتك الخاصة.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## الشكر والتقدير
* [مواد Tachi Zundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [إيراستويا](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパスCV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [ورشة عمل صوت Amitaro](https://amitaro.net/)
* [Replikadoru](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## شروط الاستخدام
* بالنسبة لمغير الصوت في الوقت الحقيقي Tsukuyomi-chan، يُحظر استخدام الصوت المحول للأغراض التالية وفقًا لشروط استخدام كوربوس Tsukuyomi-chan.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* بالنسبة لمغير الصوت في الوقت الحقيقي Amitaro، يُتبع شروط استخدام ورشة عمل صوت Amitaro. التفاصيل[هنا](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* بالنسبة لمغير الصوت في الوقت الحقيقي Kogane Mahiro، يُتبع شروط استخدام Replikadoru. التفاصيل[هنا](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## إخلاء المسؤولية
لا نتحمل أي مسؤولية عن أي أضرار مباشرة أو غير مباشرة أو تبعية أو خاصة تنشأ عن استخدام أو عدم القدرة على استخدام هذا البرنامج.

148
docs_i18n/README_de.md Normal file
View File

@ -0,0 +1,148 @@
[Japanisch](/README.md) /
[Englisch](/docs_i18n/README_en.md) /
[Koreanisch](/docs_i18n/README_ko.md)/
[Chinesisch](/docs_i18n/README_zh.md)/
[Deutsch](/docs_i18n/README_de.md)/
[Arabisch](/docs_i18n/README_ar.md)/
[Griechisch](/docs_i18n/README_el.md)/
[Spanisch](/docs_i18n/README_es.md)/
[Französisch](/docs_i18n/README_fr.md)/
[Italienisch](/docs_i18n/README_it.md)/
[Latein](/docs_i18n/README_la.md)/
[Malaiisch](/docs_i18n/README_ms.md)/
[Russisch](/docs_i18n/README_ru.md)
*Außer Japanisch sind alle Übersetzungen maschinell.
## VCClient
VCClient ist eine Software, die mithilfe von KI eine Echtzeit-Sprachumwandlung durchführt.
## What's New!
* v.2.0.78-beta
* Fehlerbehebung: Upload-Fehler für RVC-Modell vermieden
* Gleichzeitiger Start mit Version 1.x jetzt möglich
* Auswahlbare Chunk-Größen erhöht
* v.2.0.77-beta (nur für RTX 5090, experimentell)
* Unterstützung für RTX 5090 verwandte Module (nicht verifiziert, da Entwickler kein RTX 5090 besitzt)
* v.2.0.76-beta
* neues Feature:
* Beatrice: Implementierung der Sprecherzusammenführung
* Beatrice: Automatische Tonhöhenverschiebung
* Fehlerbehebung:
* Problembehebung bei der Geräteauswahl im Servermodus
* v.2.0.73-beta
* neues Feature:
* Download des bearbeiteten Beatrice-Modells
* Fehlerbehebung:
* Fehler behoben, bei dem Pitch und Formant von Beatrice v2 nicht reflektiert wurden
* Fehler behoben, bei dem das ONNX-Modell mit dem Applio-Embedder nicht erstellt werden konnte
## Downloads und verwandte Links
Windows- und M1 Mac-Versionen können aus dem Repository von Hugging Face heruntergeladen werden.
* [VCClient-Repository](https://huggingface.co/wok000/vcclient000/tree/main)
* [Light VCClient für Beatrice v2 Repository](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Linux: Bitte klonen Sie das Repository zur Nutzung.
### Verwandte Links
* [Beatrice V2 Trainingscode-Repository](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Beatrice V2 Trainingscode Colab-Version](https://github.com/w-okada/beatrice-trainer-colab)
### Verwandte Software
* [Echtzeit-Voice-Changer VCClient](https://github.com/w-okada/voice-changer)
* [Vorlesesoftware TTSClient](https://github.com/w-okada/ttsclient)
* [Echtzeit-Spracherkennungssoftware ASRClient](https://github.com/w-okada/asrclient)
## Merkmale des VC Clients
## Unterstützt verschiedene KI-Modelle
| KI-Modelle | v.2 | v.1 | Lizenz |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | unterstützt | unterstützt | Bitte das Repository konsultieren. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | unterstützt (nur Windows) | [Eigen](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | unterstützt | n/a | [Eigen](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | unterstützt | Bitte das Repository konsultieren. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | unterstützt | Bitte das Repository konsultieren. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | unterstützt | Bitte das Repository konsultieren. |
## Unterstützt sowohl Standalone- als auch Netzwerk-Konfigurationen
Unterstützt sowohl Sprachumwandlung auf dem lokalen PC als auch über das Netzwerk.
Durch die Nutzung über das Netzwerk kann die Belastung der Sprachumwandlung auf externe Ressourcen ausgelagert werden, wenn gleichzeitig ressourcenintensive Anwendungen wie Spiele genutzt werden.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Unterstützt mehrere Plattformen
Windows, Mac(M1), Linux, Google Colab
*1 Linux: Bitte klonen Sie das Repository zur Nutzung.
## Bietet REST API
Clients können in verschiedenen Programmiersprachen erstellt werden.
Außerdem kann die Bedienung mit in das Betriebssystem integrierten HTTP-Clients wie curl erfolgen.
## Fehlerbehebung
[Kommunikationsprobleme](tutorials/trouble_shoot_communication_ja.md)
## Über die Signatur des Entwicklers
Diese Software ist nicht vom Entwickler signiert. Es wird eine Warnung wie unten angezeigt, aber Sie können sie ausführen, indem Sie die Steuerungstaste gedrückt halten und auf das Symbol klicken. Dies liegt an den Sicherheitsrichtlinien von Apple. Die Ausführung erfolgt auf eigenes Risiko.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Danksagungen
* [Tachizundamon-Material](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Amitaro's Voice Material Studio](https://amitaro.net/)
* [Replikador](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Nutzungsbedingungen
* Für den Echtzeit-Voice-Changer Tsukuyomi-chan gelten die Nutzungsbedingungen des Tsukuyomi-chan-Korpus, und die Verwendung der umgewandelten Stimme für die folgenden Zwecke ist untersagt.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* Für den Echtzeit-Voice-Changer Amitaro gelten die folgenden Nutzungsbedingungen von Amitaro's Voice Material Studio. Details finden Sie [hier](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* Für den Echtzeit-Voice-Changer Kikoto Mahiro gelten die Nutzungsbedingungen von Replikador. Details finden Sie [hier](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Haftungsausschluss
Wir übernehmen keine Verantwortung für direkte, indirekte, Folgeschäden, resultierende oder besondere Schäden, die durch die Nutzung oder Unfähigkeit zur Nutzung dieser Software entstehen.

docs_i18n/README_el.md Normal file
@@ -0,0 +1,148 @@
[Ιαπωνικά](/README.md) /
[Αγγλικά](/docs_i18n/README_en.md) /
[Κορεατικά](/docs_i18n/README_ko.md)/
[Κινέζικα](/docs_i18n/README_zh.md)/
[Γερμανικά](/docs_i18n/README_de.md)/
[Αραβικά](/docs_i18n/README_ar.md)/
[Ελληνικά](/docs_i18n/README_el.md)/
[Ισπανικά](/docs_i18n/README_es.md)/
[Γαλλικά](/docs_i18n/README_fr.md)/
[Ιταλικά](/docs_i18n/README_it.md)/
[Λατινικά](/docs_i18n/README_la.md)/
[Μαλαισιανά](/docs_i18n/README_ms.md)/
[Ρωσικά](/docs_i18n/README_ru.md)
*Οι γλώσσες εκτός των Ιαπωνικών είναι μεταφρασμένες αυτόματα.
## VCClient
Το VCClient είναι λογισμικό που χρησιμοποιεί AI για μετατροπή φωνής σε πραγματικό χρόνο.
## What's New!
* v.2.0.78-beta
* διόρθωση σφάλματος: αποφεύχθηκε το σφάλμα μεταφόρτωσης του μοντέλου RVC
* Τώρα είναι δυνατή η ταυτόχρονη εκκίνηση με την έκδοση 1.x
* Αυξήθηκαν τα διαθέσιμα μεγέθη chunk
* v.2.0.77-beta (μόνο για RTX 5090, πειραματικό)
* Υποστήριξη για σχετικές μονάδες RTX 5090 (δεν επαληθεύτηκε καθώς ο προγραμματιστής δεν διαθέτει RTX 5090)
* v.2.0.76-beta
* νέα δυνατότητα:
* Beatrice: Εφαρμογή συγχώνευσης ομιλητών
* Beatrice: Αυτόματη μετατόπιση τόνου
* διόρθωση σφαλμάτων:
* Αντιμετώπιση προβλημάτων κατά την επιλογή συσκευής σε λειτουργία διακομιστή
* v.2.0.73-beta
* νέα δυνατότητα:
* Λήψη του επεξεργασμένου μοντέλου beatrice
* διόρθωση σφαλμάτων:
* Διορθώθηκε το σφάλμα όπου το pitch και το formant του beatrice v2 δεν εφαρμόζονταν
* Διορθώθηκε το σφάλμα όπου δεν μπορούσε να δημιουργηθεί το ONNX για μοντέλα που χρησιμοποιούν το embedder του Applio
## Λήψη και σχετικοί σύνδεσμοι
Οι εκδόσεις για Windows και M1 Mac μπορούν να ληφθούν από το αποθετήριο του hugging face.
* [Αποθετήριο του VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Αποθετήριο για το Light VCClient for Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Για Linux, παρακαλώ κλωνοποιήστε το αποθετήριο.
### Σχετικοί σύνδεσμοι
* [Αποθετήριο κώδικα εκπαίδευσης Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Έκδοση Colab του κώδικα εκπαίδευσης Beatrice V2](https://github.com/w-okada/beatrice-trainer-colab)
### Σχετικό λογισμικό
* [Μετατροπέας φωνής σε πραγματικό χρόνο VCClient](https://github.com/w-okada/voice-changer)
* [Λογισμικό ανάγνωσης TTSClient](https://github.com/w-okada/ttsclient)
* [Λογισμικό αναγνώρισης φωνής σε πραγματικό χρόνο ASRClient](https://github.com/w-okada/asrclient)
## Χαρακτηριστικά του VC Client
## Υποστήριξη ποικίλων μοντέλων AI
| Μοντέλα AI | v.2 | v.1 | Άδεια |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | υποστηρίζεται | υποστηρίζεται | Παρακαλώ ανατρέξτε στο αποθετήριο. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | υποστηρίζεται (μόνο win) | [ιδιόκτητο](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | υποστηρίζεται | n/a | [ιδιόκτητο](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | υποστηρίζεται | Παρακαλώ ανατρέξτε στο αποθετήριο. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | υποστηρίζεται | Παρακαλώ ανατρέξτε στο αποθετήριο. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | υποστηρίζεται | Παρακαλώ ανατρέξτε στο αποθετήριο. |
## Υποστήριξη τόσο για αυτόνομη όσο και για δικτυακή διαμόρφωση
Υποστηρίζεται η μετατροπή φωνής που ολοκληρώνεται σε τοπικό υπολογιστή καθώς και μέσω δικτύου.
Χρησιμοποιώντας το μέσω δικτύου, μπορείτε να εκφορτώσετε το φορτίο της μετατροπής φωνής σε εξωτερικό χώρο όταν χρησιμοποιείτε ταυτόχρονα εφαρμογές υψηλής φόρτωσης όπως παιχνίδια.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Υποστήριξη πολλαπλών πλατφορμών
Windows, Mac(M1), Linux, Google Colab
*1 Για Linux, παρακαλώ κλωνοποιήστε το αποθετήριο.
## Παροχή REST API
Μπορείτε να δημιουργήσετε πελάτες σε διάφορες γλώσσες προγραμματισμού.
Επίσης, μπορείτε να το χειριστείτε χρησιμοποιώντας HTTP πελάτες ενσωματωμένους στο λειτουργικό σύστημα όπως το curl.
## Αντιμετώπιση προβλημάτων
[Θέματα επικοινωνίας](tutorials/trouble_shoot_communication_ja.md)
## Σχετικά με την υπογραφή του προγραμματιστή
Αυτό το λογισμικό δεν είναι υπογεγραμμένο από τον προγραμματιστή. Εμφανίζεται προειδοποίηση όπως παρακάτω, αλλά μπορείτε να το εκτελέσετε κάνοντας κλικ στο εικονίδιο ενώ κρατάτε πατημένο το πλήκτρο ελέγχου. Αυτό οφείλεται στην πολιτική ασφαλείας της Apple. Η εκτέλεση γίνεται με δική σας ευθύνη.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Ευχαριστίες
* [Υλικό από το Tachizundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Εργαστήριο φωνητικών υλικών Amitaro](https://amitaro.net/)
* [Reprikadoru](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Όροι χρήσης
* Για το μετατροπέα φωνής σε πραγματικό χρόνο Tsukuyomi-chan, απαγορεύεται η χρήση της μετατραπείσας φωνής για τους παρακάτω σκοπούς σύμφωνα με τους όρους χρήσης του Tsukuyomi-chan corpus.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* Για το μετατροπέα φωνής σε πραγματικό χρόνο Amitaro, ισχύουν οι ακόλουθοι όροι χρήσης του εργαστηρίου φωνητικών υλικών Amitaro. Για λεπτομέρειες, [εδώ](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* Για το μετατροπέα φωνής σε πραγματικό χρόνο Kikoto Mahiro, ισχύουν οι όροι χρήσης του Reprikadoru. Για λεπτομέρειες, [εδώ](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Αποποίηση ευθυνών
Δεν φέρουμε καμία ευθύνη για οποιαδήποτε άμεση, έμμεση, επακόλουθη, ή ειδική ζημία που προκύπτει από τη χρήση ή την αδυναμία χρήσης αυτού του λογισμικού.

docs_i18n/README_en.md Normal file
@@ -0,0 +1,148 @@
[Japanese](/README.md) /
[English](/docs_i18n/README_en.md) /
[Korean](/docs_i18n/README_ko.md)/
[Chinese](/docs_i18n/README_zh.md)/
[German](/docs_i18n/README_de.md)/
[Arabic](/docs_i18n/README_ar.md)/
[Greek](/docs_i18n/README_el.md)/
[Spanish](/docs_i18n/README_es.md)/
[French](/docs_i18n/README_fr.md)/
[Italian](/docs_i18n/README_it.md)/
[Latin](/docs_i18n/README_la.md)/
[Malay](/docs_i18n/README_ms.md)/
[Russian](/docs_i18n/README_ru.md)
*Languages other than Japanese are machine translated.
## VCClient
VCClient is software that performs real-time voice conversion using AI.
## What's New!
* v.2.0.78-beta
* bugfix: Avoided upload error for RVC model
* Now possible to run simultaneously with ver.1.x
* Increased selectable chunk sizes
* v.2.0.77-beta (only for RTX 5090, experimental)
* Support for RTX 5090-related modules (not verified, as the developer does not own an RTX 5090)
* v.2.0.76-beta
* new feature:
* Beatrice: Implementation of speaker merge
* Beatrice: Auto pitch shift
* bugfix:
* Fixed issue with device selection in server mode
* v.2.0.73-beta
* new feature:
* Download edited Beatrice model
* bugfix:
* Fixed a bug where pitch and formant of Beatrice v2 were not reflected
* Fixed a bug where ONNX could not be created for models using Applio's embedder
## Download and Related Links
Windows and M1 Mac versions can be downloaded from the Hugging Face repository.
* [VCClient Repository](https://huggingface.co/wok000/vcclient000/tree/main)
* [Light VCClient for Beatrice v2 Repository](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Please clone the repository for Linux use.
### Related Links
* [Beatrice V2 Training Code Repository](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Beatrice V2 Training Code Colab Version](https://github.com/w-okada/beatrice-trainer-colab)
### Related Software
* [Real-time Voice Changer VCClient](https://github.com/w-okada/voice-changer)
* [Text-to-Speech Software TTSClient](https://github.com/w-okada/ttsclient)
* [Real-time Speech Recognition Software ASRClient](https://github.com/w-okada/asrclient)
## Features of VC Client
## Supports various AI models
| AI Model | v.2 | v.1 | License |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | supported | supported | Please refer to the repository. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | supported (only win) | [Proprietary](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | supported | n/a | [Proprietary](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | supported | Please refer to the repository. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | supported | Please refer to the repository. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | supported | Please refer to the repository. |
## Supports both standalone and network configurations
Supports voice conversion that runs entirely on a local PC as well as voice conversion over a network.
By using it over a network, you can offload the voice conversion load externally when using it simultaneously with high-load applications such as games.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Compatible with multiple platforms
Windows, Mac(M1), Linux, Google Colab
*1 Please clone the repository for Linux use.
## Provides REST API
Clients can be created in various programming languages.
You can also operate it using HTTP clients built into the OS, such as curl.
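As a rough illustration, here is a minimal Python sketch of a client for such a REST API. The server address, port, endpoint path, and form-field name are hypothetical placeholders (this README does not document the actual endpoints), so treat it as a template to adapt rather than a working client.
```
# Minimal sketch of calling a VCClient-style REST API from Python.
# NOTE: the port, endpoint path, and form-field name below are hypothetical
# placeholders -- check the API exposed by your running server before use.
import requests

SERVER = "http://localhost:18888"  # assumed local server address


def convert(input_wav: str, output_wav: str) -> None:
    """Upload a WAV file to a (hypothetical) conversion endpoint and save the result."""
    with open(input_wav, "rb") as f:
        resp = requests.post(f"{SERVER}/api/convert", files={"file": f}, timeout=60)
    resp.raise_for_status()  # fail loudly if the server returned an error status
    with open(output_wav, "wb") as out:
        out.write(resp.content)


if __name__ == "__main__":
    convert("input.wav", "converted.wav")
```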
## Troubleshooting
[Communication issues](tutorials/trouble_shoot_communication_ja.md)
## About Developer Signature
This software is not signed by the developer. A warning will appear as shown below, but you can run it by clicking the icon while holding down the control key. This is due to Apple's security policy. Execution is at your own risk.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Acknowledgments
* [Tachizundamon Materials](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Amitaro's Voice Material Workshop](https://amitaro.net/)
* [Replica Doll](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Terms of Use
* Regarding the real-time voice changer Tsukuyomi-chan, it is prohibited to use the converted voice for the following purposes in accordance with the terms of use of the Tsukuyomi-chan corpus.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* Regarding the real-time voice changer Amitaro, it follows the terms of use of Amitaro's Voice Material Workshop quoted below. For details, see [here](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* Regarding the real-time voice changer Kikoto Mahiro, it follows the terms of use of Replica Doll. For details, see [here](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Disclaimer
We are not responsible for any direct, indirect, consequential, or special damages arising from the use or inability to use this software.

docs_i18n/README_es.md Normal file
@@ -0,0 +1,148 @@
[Japonés](/README.md) /
[Inglés](/docs_i18n/README_en.md) /
[Coreano](/docs_i18n/README_ko.md)/
[Chino](/docs_i18n/README_zh.md)/
[Alemán](/docs_i18n/README_de.md)/
[Árabe](/docs_i18n/README_ar.md)/
[Griego](/docs_i18n/README_el.md)/
[Español](/docs_i18n/README_es.md)/
[Francés](/docs_i18n/README_fr.md)/
[Italiano](/docs_i18n/README_it.md)/
[Latín](/docs_i18n/README_la.md)/
[Malayo](/docs_i18n/README_ms.md)/
[Ruso](/docs_i18n/README_ru.md)
*Los idiomas distintos al japonés son traducciones automáticas.
## VCClient
VCClient es un software que utiliza IA para realizar conversión de voz en tiempo real.
## What's New!
* v.2.0.78-beta
* corrección de errores: se evitó el error de carga del modelo RVC
* Ahora es posible ejecutar simultáneamente con la versión 1.x
* Se aumentaron los tamaños de chunk seleccionables
* v.2.0.77-beta (solo para RTX 5090, experimental)
* Soporte para módulos relacionados con RTX 5090 (no verificado ya que el desarrollador no posee RTX 5090)
* v.2.0.76-beta
* nueva característica:
* Beatrice: Implementación de fusión de hablantes
* Beatrice: Cambio de tono automático
* corrección de errores:
* Solución de problemas al seleccionar dispositivos en modo servidor
* v.2.0.73-beta
* nueva característica:
* Descarga del modelo Beatrice editado
* corrección de errores:
* Se corrigió un error donde el pitch y el formante de Beatrice v2 no se reflejaban
* Se corrigió un error donde no se podía crear ONNX para modelos que usan el embedder de Applio
## Descargas y enlaces relacionados
Las versiones para Windows y Mac M1 se pueden descargar desde el repositorio de hugging face.
* [Repositorio de VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Repositorio de Light VCClient para Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Para Linux, clone el repositorio para su uso.
### Enlaces relacionados
* [Repositorio de código de entrenamiento de Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Versión Colab del código de entrenamiento de Beatrice V2](https://github.com/w-okada/beatrice-trainer-colab)
### Software relacionado
* [Cambiador de voz en tiempo real VCClient](https://github.com/w-okada/voice-changer)
* [Software de lectura TTSClient](https://github.com/w-okada/ttsclient)
* [Software de reconocimiento de voz en tiempo real ASRClient](https://github.com/w-okada/asrclient)
## Características de VC Client
## Soporta diversos modelos de IA
| Modelos de IA | v.2 | v.1 | Licencia |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | soportado | soportado | Consulte el repositorio. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | soportado (solo win) | [Propio](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | soportado | n/a | [Propio](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | soportado | Consulte el repositorio. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | soportado | Consulte el repositorio. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | soportado | Consulte el repositorio. |
## Soporta configuraciones tanto autónomas como a través de la red
Soporta tanto la conversión de voz completada en una PC local como la conversión de voz a través de la red.
Al utilizarlo a través de la red, puede descargar la carga de conversión de voz externamente cuando se usa simultáneamente con aplicaciones de alta carga como juegos.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Compatible con múltiples plataformas
Windows, Mac(M1), Linux, Google Colab
*1 Para Linux, clone el repositorio para su uso.
## Proporciona API REST
Puede crear clientes en varios lenguajes de programación.
Además, puede operar usando clientes HTTP integrados en el sistema operativo como curl.
## Solución de problemas
[Sección de comunicación](tutorials/trouble_shoot_communication_ja.md)
## Sobre la firma del desarrollador
Este software no está firmado por el desarrollador. Aunque aparece una advertencia como se muestra a continuación, puede ejecutarlo haciendo clic en el icono mientras mantiene presionada la tecla de control. Esto se debe a la política de seguridad de Apple. La ejecución es bajo su propio riesgo.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Agradecimientos
* [Material de Tachi Zundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Ilustraciones de Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Taller de voz de Amitaro](https://amitaro.net/)
* [Replikador](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Términos de uso
* En cuanto a Tsukuyomi-chan, el cambiador de voz en tiempo real, está prohibido usar la voz convertida para los siguientes propósitos, de acuerdo con los términos de uso del corpus de Tsukuyomi-chan.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* En cuanto a Amitaro, el cambiador de voz en tiempo real, se adhiere a los siguientes términos de uso del Taller de voz de Amitaro. Para más detalles, [aquí](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* En cuanto a Kikoto Mahiro, el cambiador de voz en tiempo real, se adhiere a los términos de uso de Replikador. Para más detalles, [aquí](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Descargo de responsabilidad
No nos hacemos responsables de ningún daño directo, indirecto, consecuente, resultante o especial que surja del uso o la imposibilidad de uso de este software.

docs_i18n/README_fr.md Normal file
@@ -0,0 +1,148 @@
[Japonais](/README.md) /
[Anglais](/docs_i18n/README_en.md) /
[Coréen](/docs_i18n/README_ko.md)/
[Chinois](/docs_i18n/README_zh.md)/
[Allemand](/docs_i18n/README_de.md)/
[Arabe](/docs_i18n/README_ar.md)/
[Grec](/docs_i18n/README_el.md)/
[Espagnol](/docs_i18n/README_es.md)/
[Français](/docs_i18n/README_fr.md)/
[Italien](/docs_i18n/README_it.md)/
[Latin](/docs_i18n/README_la.md)/
[Malais](/docs_i18n/README_ms.md)/
[Russe](/docs_i18n/README_ru.md)
*Les langues autres que le japonais sont traduites automatiquement.
## VCClient
VCClient est un logiciel qui utilise l'IA pour effectuer une conversion vocale en temps réel.
## What's New!
* v.2.0.78-beta
* correction de bug : évitement de l'erreur de téléchargement du modèle RVC
* Il est désormais possible de lancer simultanément avec la version 1.x
* Augmentation des tailles de chunk sélectionnables
* v.2.0.77-beta (uniquement pour RTX 5090, expérimental)
* Support des modules liés à RTX 5090 (non vérifié car le développeur ne possède pas de RTX 5090)
* v.2.0.76-beta
* nouvelle fonctionnalité :
* Beatrice : Implémentation de la fusion des locuteurs
* Beatrice : Pitch shift automatique
* correction de bug :
* Correction d'un problème lors de la sélection de l'appareil en mode serveur
* v.2.0.73-beta
* nouvelle fonctionnalité :
* Téléchargement du modèle Beatrice modifié
* correction de bug :
* Correction du bug où le pitch et le formant de Beatrice v2 n'étaient pas appliqués
* Correction du bug empêchant la création de l'ONNX pour les modèles utilisant l'embedder d'Applio
## Téléchargement et liens associés
Les versions Windows et Mac M1 peuvent être téléchargées depuis le référentiel hugging face.
* [Référentiel de VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Référentiel de Light VCClient pour Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Pour Linux, veuillez cloner le référentiel pour l'utiliser.
### Liens associés
* [Référentiel de code d'entraînement Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Version Colab du code d'entraînement Beatrice V2](https://github.com/w-okada/beatrice-trainer-colab)
### Logiciels associés
* [Changeur de voix en temps réel VCClient](https://github.com/w-okada/voice-changer)
* [Logiciel de synthèse vocale TTSClient](https://github.com/w-okada/ttsclient)
* [Logiciel de reconnaissance vocale en temps réel ASRClient](https://github.com/w-okada/asrclient)
## Caractéristiques de VC Client
## Prend en charge divers modèles d'IA
| Modèle d'IA | v.2 | v.1 | Licence |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | pris en charge | pris en charge | Veuillez consulter le référentiel. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | pris en charge (uniquement Windows) | [Propriétaire](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | pris en charge | n/a | [Propriétaire](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | pris en charge | Veuillez consulter le référentiel. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | pris en charge | Veuillez consulter le référentiel. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | pris en charge | Veuillez consulter le référentiel. |
## Prend en charge les configurations autonomes et via réseau
Prend en charge la conversion vocale entièrement sur PC local ainsi que via réseau.
En utilisant via réseau, la charge de conversion vocale peut être déportée à l'extérieur lors de l'utilisation simultanée avec des applications à forte charge comme les jeux.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Compatible avec plusieurs plateformes
Windows, Mac(M1), Linux, Google Colab
*1 Pour Linux, veuillez cloner le référentiel pour l'utiliser.
## Fournit une API REST
Vous pouvez créer des clients dans divers langages de programmation.
Vous pouvez également utiliser des clients HTTP intégrés au système d'exploitation comme curl pour les opérations.
## Dépannage
[Communication](tutorials/trouble_shoot_communication_ja.md)
## À propos de la signature du développeur
Ce logiciel n'est pas signé par le développeur. Un avertissement s'affiche comme ci-dessous, mais vous pouvez l'exécuter en cliquant sur l'icône tout en maintenant la touche Contrôle. Ceci est dû à la politique de sécurité d'Apple. L'exécution est à vos propres risques.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Remerciements
* [Matériel de Tachi Zundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Atelier de voix d'Amitaro](https://amitaro.net/)
* [Replika Doll](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Conditions d'utilisation
* En ce qui concerne le changeur de voix en temps réel Tsukuyomi-chan, l'utilisation de la voix convertie est interdite aux fins suivantes, conformément aux conditions d'utilisation du corpus Tsukuyomi-chan.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* En ce qui concerne le changeur de voix en temps réel Amitaro, il est conforme aux conditions d'utilisation de l'atelier de voix d'Amitaro. Pour plus de détails, [ici](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* En ce qui concerne le changeur de voix en temps réel Kikoto Mahiro, il est conforme aux conditions d'utilisation de Replika Doll. Pour plus de détails, [ici](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Clause de non-responsabilité
Nous déclinons toute responsabilité pour tout dommage direct, indirect, consécutif, résultant ou spécial causé par l'utilisation ou l'incapacité d'utiliser ce logiciel.

docs_i18n/README_it.md Normal file
@@ -0,0 +1,148 @@
[Giapponese](/README.md) /
[Inglese](/docs_i18n/README_en.md) /
[Coreano](/docs_i18n/README_ko.md)/
[Cinese](/docs_i18n/README_zh.md)/
[Tedesco](/docs_i18n/README_de.md)/
[Arabo](/docs_i18n/README_ar.md)/
[Greco](/docs_i18n/README_el.md)/
[Spagnolo](/docs_i18n/README_es.md)/
[Francese](/docs_i18n/README_fr.md)/
[Italiano](/docs_i18n/README_it.md)/
[Latino](/docs_i18n/README_la.md)/
[Malese](/docs_i18n/README_ms.md)/
[Russo](/docs_i18n/README_ru.md)
*Le lingue diverse dal giapponese sono tradotte automaticamente.
## VCClient
VCClient è un software che utilizza l'IA per la conversione vocale in tempo reale.
## What's New!
* v.2.0.78-beta
* correzione bug: evitato errore di upload del modello RVC
* Ora è possibile l'avvio simultaneo con la versione 1.x
* Aumentate le dimensioni dei chunk selezionabili
* v.2.0.77-beta (solo per RTX 5090, sperimentale)
* Supporto per moduli relativi a RTX 5090 (non verificato poiché lo sviluppatore non possiede RTX 5090)
* v.2.0.76-beta
* nuova funzionalità:
* Beatrice: Implementazione della fusione degli speaker
* Beatrice: Auto pitch shift
* correzione bug:
* Risolto il problema nella selezione del dispositivo in modalità server
* v.2.0.73-beta
* nuova funzionalità:
* Download del modello beatrice modificato
* correzione bug:
* Corretto un bug per cui pitch e formant di beatrice v2 non venivano applicati
* Corretto un bug per cui non era possibile creare ONNX per i modelli che utilizzano l'embedder di Applio
## Download e link correlati
Le versioni per Windows e Mac M1 possono essere scaricate dal repository di hugging face.
* [Repository di VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Repository di Light VCClient per Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Per Linux, clona il repository per l'uso.
### Link correlati
* [Repository del codice di allenamento Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Versione Colab del codice di allenamento Beatrice V2](https://github.com/w-okada/beatrice-trainer-colab)
### Software correlato
* [Cambiavoce in tempo reale VCClient](https://github.com/w-okada/voice-changer)
* [Software di sintesi vocale TTSClient](https://github.com/w-okada/ttsclient)
* [Software di riconoscimento vocale in tempo reale ASRClient](https://github.com/w-okada/asrclient)
## Caratteristiche di VC Client
## Supporta vari modelli di IA
| Modello di IA | v.2 | v.1 | Licenza |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | supportato | supportato | Si prega di consultare il repository. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | supportato (solo win) | [Proprietario](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | supportato | n/a | [Proprietario](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | supportato | Si prega di consultare il repository. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | supportato | Si prega di consultare il repository. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | supportato | Si prega di consultare il repository. |
## Supporta sia la configurazione standalone che tramite rete
Supporta sia la conversione vocale completata su PC locale che tramite rete.
Utilizzando tramite rete, è possibile scaricare il carico della conversione vocale su un dispositivo esterno quando si utilizzano applicazioni ad alto carico come i giochi.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Compatibile con più piattaforme
Windows, Mac(M1), Linux, Google Colab
*1 Per Linux, clona il repository per l'uso.
## Fornisce un'API REST
È possibile creare client in vari linguaggi di programmazione.
È inoltre possibile operare utilizzando client HTTP incorporati nel sistema operativo come curl.
## Risoluzione dei problemi
[Sezione comunicazione](tutorials/trouble_shoot_communication_ja.md)
## Informazioni sulla firma dello sviluppatore
Questo software non è firmato dallo sviluppatore. Anche se viene visualizzato un avviso come di seguito, è possibile eseguirlo facendo clic sull'icona tenendo premuto il tasto di controllo. Questo è dovuto alla politica di sicurezza di Apple. L'esecuzione è a proprio rischio.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Ringraziamenti
* [Materiale di Tachi Zundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Atelier di materiali vocali di Amitaro](https://amitaro.net/)
* [Replica Doll](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Termini di utilizzo
* Per quanto riguarda il cambiavoce in tempo reale Tsukuyomi-chan, è vietato utilizzare la voce convertita per i seguenti scopi in conformità con i termini di utilizzo del corpus di Tsukuyomi-chan.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* Per quanto riguarda il cambiavoce in tempo reale Amitaro, si applicano i seguenti termini di utilizzo dell'Atelier di materiali vocali di Amitaro. Per dettagli, [qui](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* Per quanto riguarda il cambiavoce in tempo reale Kikoto Mahiro, si applicano i termini di utilizzo di Replica Doll. Per dettagli, [qui](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Clausola di esclusione della responsabilità
Non ci assumiamo alcuna responsabilità per eventuali danni diretti, indiretti, consequenziali, risultanti o speciali derivanti dall'uso o dall'impossibilità di utilizzare questo software.

docs_i18n/README_ja.md Normal file
@@ -0,0 +1,148 @@
[日本語](/README.md) /
[英語](/docs_i18n/README_en.md) /
[韓国語](/docs_i18n/README_ko.md)/
[中国語](/docs_i18n/README_zh.md)/
[ドイツ語](/docs_i18n/README_de.md)/
[アラビア語](/docs_i18n/README_ar.md)/
[ギリシャ語](/docs_i18n/README_el.md)/
[スペイン語](/docs_i18n/README_es.md)/
[フランス語](/docs_i18n/README_fr.md)/
[イタリア語](/docs_i18n/README_it.md)/
[ラテン語](/docs_i18n/README_la.md)/
[マレー語](/docs_i18n/README_ms.md)/
[ロシア語](/docs_i18n/README_ru.md)
*日本語以外は機械翻訳です。
## VCClient
VCClientは、AIを用いてリアルタイム音声変換を行うソフトウェアです。
## What's New!
* v.2.0.78-beta
* bugfix: RVCモデルのアップロードエラーを回避
* ver.1.x との同時起動ができるようになりました。
* 選択できるchunk sizeを増やしました。
* v.2.0.77-beta (only for RTX 5090, experimental)
* 関連モジュールを5090対応 (開発者がRTX5090未所持のため、動作未検証)
* v.2.0.76-beta
* new feature:
* Beatrice: 話者マージの実装
* Beatrice: オートピッチシフト
* bugfix:
* サーバモードのデバイス選択時の不具合対応
* v.2.0.73-beta
* new feature:
* 編集したbeatrice modelのダウンロード
* bugfix:
* beatrice v2 のpitch, formantが反映されないバグを修正
* Applio のembedderを使用しているモデルのONNXができないバグを修正
## ダウンロードと関連リンク
Windows版、 M1 Mac版はhugging faceのリポジトリからダウンロードできます。
* [VCClient のリポジトリ](https://huggingface.co/wok000/vcclient000/tree/main)
* [Light VCClient for Beatrice v2 のリポジトリ](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Linuxはリポジトリをcloneしてお使いください。
### 関連リンク
* [Beatrice V2 トレーニングコードのリポジトリ](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Beatrice V2 トレーニングコード Colab版](https://github.com/w-okada/beatrice-trainer-colab)
### 関連ソフトウェア
* [リアルタイムボイスチェンジャ VCClient](https://github.com/w-okada/voice-changer)
* [読み上げソフトウェア TTSClient](https://github.com/w-okada/ttsclient)
* [リアルタイム音声認識ソフトウェア ASRClient](https://github.com/w-okada/asrclient)
## VC Clientの特徴
## 多様なAIモデルをサポート
| AIモデル | v.2 | v.1 | ライセンス |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | supported | supported | リポジトリを参照してください。 |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | supported (only win) | [独自](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | supported | n/a | [独自](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | supported | リポジトリを参照してください。 |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | supported | リポジトリを参照してください。 |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | supported | リポジトリを参照してください。 |
## スタンドアロン、ネットワーク経由の両構成をサポート
ローカルPCで完結した音声変換も、ネットワークを介した音声変換もサポートしています。
ネットワークを介した利用を行うことで、ゲームなどの高負荷なアプリケーションと同時に使用する場合に音声変換の負荷を外部にオフロードすることができます。
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## 複数プラットフォームに対応
Windows, Mac(M1), Linux, Google Colab
*1 Linuxはリポジトリをcloneしてお使いください。
## REST APIを提供
各種プログラミング言語でクライアントを作成することができます。
また、curlなどのOSに組み込まれているHTTPクライアントを使って操作ができます。
## トラブルシュート
[通信編](tutorials/trouble_shoot_communication_ja.md)
## 開発者の署名について
本ソフトウェアは開発元の署名をしておりません。下記のように警告が出ますが、コントロールキーを押しながらアイコンをクリックすると実行できるようになります。これは Apple のセキュリティポリシーによるものです。実行は自己責任となります。
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Acknowledgments
* [立ちずんだもん素材](https://seiga.nicovideo.jp/seiga/im10792934)
* [いらすとや](https://www.irasutoya.com/)
* [つくよみちゃん](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [あみたろの声素材工房](https://amitaro.net/)
* [れぷりかどーる](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## 利用規約
* リアルタイムボイスチェンジャーつくよみちゃんについては、つくよみちゃんコーパスの利用規約に準じ、次の目的で変換後の音声を使用することを禁止します。
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* リアルタイムボイスチェンジャーあみたろについては、あみたろの声素材工房様の次の利用規約に準じます。詳細は[こちら](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* リアルタイムボイスチェンジャー黄琴まひろについては、れぷりかどーるの利用規約に準じます。詳細は[こちら](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## 免責事項
本ソフトウェアの使用または使用不能により生じたいかなる直接損害・間接損害・波及的損害・結果的損害 または特別損害についても、一切責任を負いません。

docs_i18n/README_ko.md Normal file
@@ -0,0 +1,148 @@
[일본어](/README.md) /
[영어](/docs_i18n/README_en.md) /
[한국어](/docs_i18n/README_ko.md)/
[중국어](/docs_i18n/README_zh.md)/
[독일어](/docs_i18n/README_de.md)/
[아랍어](/docs_i18n/README_ar.md)/
[그리스어](/docs_i18n/README_el.md)/
[스페인어](/docs_i18n/README_es.md)/
[프랑스어](/docs_i18n/README_fr.md)/
[이탈리아어](/docs_i18n/README_it.md)/
[라틴어](/docs_i18n/README_la.md)/
[말레이어](/docs_i18n/README_ms.md)/
[러시아어](/docs_i18n/README_ru.md)
*일본어 외에는 기계 번역입니다.
## VCClient
VCClient는 AI를 사용하여 실시간 음성 변환을 수행하는 소프트웨어입니다.
## What's New!
* v.2.0.78-beta
* 버그 수정: RVC 모델 업로드 오류 회피
* ver.1.x와 동시에 실행 가능해졌습니다.
* 선택 가능한 chunk size를 늘렸습니다.
* v.2.0.77-beta (RTX 5090 전용, 실험적)
* RTX 5090 관련 모듈 지원 (개발자가 RTX 5090을 보유하지 않아 검증되지 않음)
* v.2.0.76-beta
* new feature:
* Beatrice: 화자 병합 구현
* Beatrice: 자동 피치 시프트
* bugfix:
* 서버 모드에서 장치 선택 시의 문제 해결
* v.2.0.73-beta
* new feature:
* 편집한 beatrice 모델 다운로드
* bugfix:
* beatrice v2의 pitch, formant가 반영되지 않는 버그를 수정
* Applio의 embedder를 사용하고 있는 모델의 ONNX가 생성되지 않는 버그를 수정
## 다운로드 및 관련 링크
Windows 버전, M1 Mac 버전은 hugging face의 리포지토리에서 다운로드할 수 있습니다.
* [VCClient의 리포지토리](https://huggingface.co/wok000/vcclient000/tree/main)
* [Light VCClient for Beatrice v2의 리포지토리](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Linux는 리포지토리를 클론하여 사용하세요.
### 관련 링크
* [Beatrice V2 트레이닝 코드의 리포지토리](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Beatrice V2 트레이닝 코드 Colab 버전](https://github.com/w-okada/beatrice-trainer-colab)
### 관련 소프트웨어
* [실시간 보이스 체인저 VCClient](https://github.com/w-okada/voice-changer)
* [읽기 소프트웨어 TTSClient](https://github.com/w-okada/ttsclient)
* [실시간 음성 인식 소프트웨어 ASRClient](https://github.com/w-okada/asrclient)
## VC Client의 특징
## 다양한 AI 모델을 지원
| AI 모델 | v.2 | v.1 | 라이선스 |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | supported | supported | 리포지토리를 참조하세요. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | supported (only win) | [독자](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | supported | n/a | [독자](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | supported | 리포지토리를 참조하세요. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | supported | 리포지토리를 참조하세요. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | supported | 리포지토리를 참조하세요. |
## 독립형, 네트워크 경유의 두 가지 구성을 지원
로컬 PC에서 완료된 음성 변환과 네트워크를 통한 음성 변환을 지원합니다.
네트워크를 통해 사용하면 게임 등 고부하 애플리케이션과 동시에 사용할 때 음성 변환의 부하를 외부로 오프로드할 수 있습니다.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## 다중 플랫폼에 대응
Windows, Mac(M1), Linux, Google Colab
*1 Linux는 리포지토리를 클론하여 사용하세요.
## REST API를 제공
각종 프로그래밍 언어로 클라이언트를 만들 수 있습니다.
또한, curl 등 OS에 내장된 HTTP 클라이언트를 사용하여 조작할 수 있습니다.
## 문제 해결
[통신 편](tutorials/trouble_shoot_communication_ja.md)
## 개발자의 서명에 대해
이 소프트웨어는 개발자의 서명이 되어 있지 않습니다. 아래와 같은 경고가 나오지만, 컨트롤 키를 누른 상태에서 아이콘을 클릭하면 실행할 수 있습니다. 이는 Apple의 보안 정책에 따른 것입니다. 실행은 본인의 책임입니다.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Acknowledgments
* [타치준다몬 소재](https://seiga.nicovideo.jp/seiga/im10792934)
* [일러스트야](https://www.irasutoya.com/)
* [츠쿠요미짱](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [아미타로의 목소리 소재 공방](https://amitaro.net/)
* [레플리카돌](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## 이용 약관
* 실시간 보이스 체인저 츠쿠요미짱에 대해서는 츠쿠요미짱 코퍼스의 이용 약관에 따라 다음 목적에서 변환 후 음성을 사용하는 것을 금지합니다.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* 실시간 보이스 체인저 아미타로에 대해서는 아미타로의 목소리 소재 공방의 다음 이용 약관에 따릅니다. 자세한 내용은 [여기](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* 실시간 보이스 체인저 황금 마히로에 대해서는 레플리카돌의 이용 약관에 따릅니다. 자세한 내용은 [여기](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## 면책 조항
이 소프트웨어의 사용 또는 사용 불가로 인해 발생한 어떠한 직접 손해, 간접 손해, 파급적 손해, 결과적 손해 또는 특별 손해에 대해서도 일체 책임을 지지 않습니다.

docs_i18n/README_la.md Normal file
@@ -0,0 +1,148 @@
[Lingua Iaponica](/README.md) /
[Lingua Anglica](/docs_i18n/README_en.md) /
[Lingua Coreana](/docs_i18n/README_ko.md)/
[Lingua Sinica](/docs_i18n/README_zh.md)/
[Lingua Theodisca](/docs_i18n/README_de.md)/
[Lingua Arabica](/docs_i18n/README_ar.md)/
[Lingua Graeca](/docs_i18n/README_el.md)/
[Lingua Hispanica](/docs_i18n/README_es.md)/
[Lingua Francogallica](/docs_i18n/README_fr.md)/
[Lingua Italica](/docs_i18n/README_it.md)/
[Lingua Latina](/docs_i18n/README_la.md)/
[Lingua Malaica](/docs_i18n/README_ms.md)/
[Lingua Russica](/docs_i18n/README_ru.md)
*Praeter linguam Iaponicam, omnes linguae sunt a machina translatae.
## VCClient
VCClient est software quod conversionem vocis in tempore reali per AI facit.
## What's New!
* v.2.0.78-beta
* bugfix: error sublationis RVC exemplaris vitata est
* Nunc simul cum versione 1.x incipere potes
* Auctae sunt chunk magnitudines eligibiles
* v.2.0.77-beta (solum pro RTX 5090, experimentale)
* Auxilium pro modulis RTX 5090 relatis (non verificatum quia auctor non habet RTX 5090)
* v.2.0.76-beta
* nova functio:
* Beatrice: Implementatio coniunctionis loquentium
* Beatrice: Automatica mutatio toni
* bugfix:
* Solutio problematum in delectu machinae in modo servientis
* v.2.0.73-beta
* nova functio:
* Download model Beatrice editum
* bugfix:
* Correctus error ubi pitch et formant Beatrice v2 non reflectuntur
* Correctus error ubi ONNX non potest fieri pro modelis utentibus embedder Applio
## Download et nexus pertinentes
Versiones pro Windows et M1 Mac possunt ex repositorio hugging face depromi.
* [Repositorium VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Repositorium Light VCClient pro Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Linux utatur repositorio clone.
### Nexus pertinentes
* [Repositorium codicis disciplinae Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Codex disciplinae Beatrice V2 versio Colab](https://github.com/w-okada/beatrice-trainer-colab)
### Software pertinens
* [Mutator vocis in tempore reali VCClient](https://github.com/w-okada/voice-changer)
* [Software lectionis TTSClient](https://github.com/w-okada/ttsclient)
* [Software recognitionis vocis in tempore reali ASRClient](https://github.com/w-okada/asrclient)
## Proprietates VC Client
## Multa AI exempla sustinet
| Exempla AI | v.2 | v.1 | Licentia |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | sustinetur | sustinetur | Vide repositorium. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | sustinetur (solum win) | [Proprium](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | sustinetur | n/a | [Proprium](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | sustinetur | Vide repositorium. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | sustinetur | Vide repositorium. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | sustinetur | Vide repositorium. |
## Sustinetur tam structura stand-alone quam per rete
Sustinetur conversio vocis in PC locali et per rete.
Per usum per rete, onus conversionis vocis potest externari cum simul cum applicationibus altis oneribus ut ludis adhibetur.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Pluribus suggestis compatitur
Windows, Mac(M1), Linux, Google Colab
*1 Linux utatur repositorio clone.
## REST API praebet
Clientem creare potes in variis linguis programmandi.
Etiam per HTTP clientem in OS incorporatum ut curl operari potes.
## Solutio problematum
[De communicatione](tutorials/trouble_shoot_communication_ja.md)
## De signature auctoris
Hoc software non signatur auctore. Monitio ut infra apparebit, sed si iconem cum claviatura control premes, poteris exsequi. Hoc est secundum securitatem Apple. Exsecutio est tuae responsabilitatis.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Gratias
* [Materia Tachi Zundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasuto ya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Amitaro vox materiae officina](https://amitaro.net/)
* [Reprica doll](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Termini usus
* De mutatore vocis in tempore reali Tsukuyomi-chan, secundum Tsukuyomi-chan corpus usus, prohibetur usus vocis post conversionem ad sequentes fines.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* De mutatore vocis in tempore reali Amitaro, secundum Amitaro vox materiae officinae usus. Pro singulis [hic](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* De mutatore vocis in tempore reali Kikoto Mahiro, secundum Reprica doll usus. Pro singulis [hic](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Disclaimer
Non tenemur pro ullis damnis directis, indirectis, consequentibus, vel specialibus ex usu vel incapacitate usus huius software.

docs_i18n/README_ms.md Normal file
@@ -0,0 +1,148 @@
[Bahasa Jepun](/README.md) /
[Bahasa Inggeris](/docs_i18n/README_en.md) /
[Bahasa Korea](/docs_i18n/README_ko.md)/
[Bahasa Cina](/docs_i18n/README_zh.md)/
[Bahasa Jerman](/docs_i18n/README_de.md)/
[Bahasa Arab](/docs_i18n/README_ar.md)/
[Bahasa Greek](/docs_i18n/README_el.md)/
[Bahasa Sepanyol](/docs_i18n/README_es.md)/
[Bahasa Perancis](/docs_i18n/README_fr.md)/
[Bahasa Itali](/docs_i18n/README_it.md)/
[Bahasa Latin](/docs_i18n/README_la.md)/
[Bahasa Melayu](/docs_i18n/README_ms.md)/
[Bahasa Rusia](/docs_i18n/README_ru.md)
*Selain bahasa Jepun, semua terjemahan adalah terjemahan mesin.
## VCClient
VCClient adalah perisian yang menggunakan AI untuk menukar suara secara masa nyata.
## What's New!
* v.2.0.78-beta
* pembaikan pepijat: Elakkan ralat muat naik model RVC
* Kini boleh dijalankan serentak dengan ver.1.x
* Saiz chunk yang boleh dipilih telah ditambah
* v.2.0.77-beta (hanya untuk RTX 5090, eksperimen)
* Sokongan untuk modul berkaitan RTX 5090 (tidak disahkan kerana pembangun tidak memiliki RTX 5090)
* v.2.0.76-beta
* ciri baru:
* Beatrice: Pelaksanaan penggabungan pembicara
* Beatrice: Auto pitch shift
* pembaikan pepijat:
* Menangani masalah pemilihan peranti dalam mod pelayan
* v.2.0.73-beta
* ciri baru:
* Muat turun model beatrice yang telah diedit
* pembaikan pepijat:
* Memperbaiki pepijat di mana pitch dan formant beatrice v2 tidak diterapkan
* Memperbaiki pepijat di mana ONNX tidak dapat dibuat untuk model yang menggunakan embedder Applio
## Muat Turun dan Pautan Berkaitan
Versi Windows dan M1 Mac boleh dimuat turun dari repositori hugging face.
* [Repositori VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Repositori Light VCClient untuk Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Sila klon repositori untuk Linux.
### Pautan Berkaitan
* [Repositori Kod Latihan Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Versi Colab Kod Latihan Beatrice V2](https://github.com/w-okada/beatrice-trainer-colab)
### Perisian Berkaitan
* [Penukar Suara Masa Nyata VCClient](https://github.com/w-okada/voice-changer)
* [Perisian Pembacaan TTSClient](https://github.com/w-okada/ttsclient)
* [Perisian Pengecaman Suara Masa Nyata ASRClient](https://github.com/w-okada/asrclient)
## Ciri-ciri VC Client
## Menyokong pelbagai model AI
| Model AI | v.2 | v.1 | Lesen |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | disokong | disokong | Sila rujuk repositori. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | disokong (hanya win) | [Khas](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | disokong | n/a | [Khas](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | disokong | Sila rujuk repositori. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | disokong | Sila rujuk repositori. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | disokong | Sila rujuk repositori. |
## Menyokong kedua-dua konfigurasi berdiri sendiri dan melalui rangkaian
Menyokong penukaran suara yang lengkap di PC tempatan dan juga melalui rangkaian.
Dengan menggunakan melalui rangkaian, beban penukaran suara boleh dialihkan ke luar apabila digunakan serentak dengan aplikasi yang memerlukan beban tinggi seperti permainan.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Menyokong pelbagai platform
Windows, Mac(M1), Linux, Google Colab
*1 Sila klon repositori untuk Linux.
## Menyediakan REST API
Pelanggan boleh dibina dalam pelbagai bahasa pengaturcaraan.
Juga boleh dikendalikan menggunakan klien HTTP yang dibina dalam OS seperti curl.
## Penyelesaian Masalah
[Bahagian Komunikasi](tutorials/trouble_shoot_communication_ja.md)
## Mengenai Tandatangan Pembangun
Perisian ini tidak ditandatangani oleh pembangun. Amaran seperti di bawah akan muncul, tetapi anda boleh menjalankannya dengan menekan kekunci kawalan sambil mengklik ikon. Ini adalah disebabkan oleh dasar keselamatan Apple. Pelaksanaan adalah atas tanggungjawab sendiri.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Penghargaan
* [Bahan Tachizundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Irasutoya](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Studio Bahan Suara Amitaro](https://amitaro.net/)
* [Replikadol](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Syarat Penggunaan
* Mengenai penukar suara masa nyata Tsukuyomi-chan, penggunaan suara yang ditukar untuk tujuan berikut adalah dilarang mengikut syarat penggunaan korpus Tsukuyomi-chan.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* Mengenai penukar suara masa nyata Amitaro, ia mematuhi syarat penggunaan Studio Bahan Suara Amitaro. Untuk maklumat lanjut, sila lihat [di sini](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* Mengenai penukar suara masa nyata Kogane Mahiro, ia mematuhi syarat penggunaan Replikadol. Untuk maklumat lanjut, sila lihat [di sini](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Penafian
Kami tidak bertanggungjawab ke atas sebarang kerosakan langsung, tidak langsung, berbangkit, akibat atau khas yang timbul daripada penggunaan atau ketidakupayaan untuk menggunakan perisian ini.

148
docs_i18n/README_ru.md Normal file
View File

@ -0,0 +1,148 @@
[японский](/README.md) /
[английский](/docs_i18n/README_en.md) /
[корейский](/docs_i18n/README_ko.md)/
[китайский](/docs_i18n/README_zh.md)/
[немецкий](/docs_i18n/README_de.md)/
[арабский](/docs_i18n/README_ar.md)/
[греческий](/docs_i18n/README_el.md)/
[испанский](/docs_i18n/README_es.md)/
[французский](/docs_i18n/README_fr.md)/
[итальянский](/docs_i18n/README_it.md)/
[латинский](/docs_i18n/README_la.md)/
[малайский](/docs_i18n/README_ms.md)/
[русский](/docs_i18n/README_ru.md)
*Кроме японского, все переводы выполнены машинным переводом.
## VCClient
VCClient — это программное обеспечение, использующее ИИ для преобразования голоса в реальном времени.
## Что нового!
* v.2.0.78-beta
* Исправление ошибки: предотвращена ошибка загрузки модели RVC
* Теперь возможно одновременное использование с версией 1.x
* Увеличено количество доступных размеров chunk
* v.2.0.77-beta (только для RTX 5090, экспериментальная)
* Поддержка модулей, связанных с RTX 5090 (не проверено, так как разработчик не имеет RTX 5090)
* v.2.0.76-beta
* новая функция:
* Beatrice: реализация слияния говорящих
* Beatrice: автоматический сдвиг тона
* исправление ошибок:
* Исправление ошибки при выборе устройства в серверном режиме
* v.2.0.73-beta
* новая функция:
* Загрузка отредактированной модели beatrice
* исправление ошибок:
* Исправлена ошибка, из-за которой pitch и formant в beatrice v2 не применялись
* Исправлена ошибка, из-за которой ONNX не создавался для моделей, использующих embedder Applio
## Загрузки и связанные ссылки
Версии для Windows и M1 Mac можно скачать из репозитория hugging face.
* [Репозиторий VCClient](https://huggingface.co/wok000/vcclient000/tree/main)
* [Репозиторий Light VCClient для Beatrice v2](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Для Linux клонируйте репозиторий.
### Связанные ссылки
* [Репозиторий кода обучения Beatrice V2](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Код обучения Beatrice V2 для Colab](https://github.com/w-okada/beatrice-trainer-colab)
### Связанное программное обеспечение
* [Реалтайм голосовой преобразователь VCClient](https://github.com/w-okada/voice-changer)
* [Программное обеспечение для чтения текста TTSClient](https://github.com/w-okada/ttsclient)
* [Программное обеспечение для распознавания речи в реальном времени ASRClient](https://github.com/w-okada/asrclient)
## Особенности VC Client
## Поддержка различных моделей ИИ
| Модель ИИ | v.2 | v.1 | Лицензия |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | поддерживается | поддерживается | См. репозиторий. |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | поддерживается (только win) | [собственная](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | поддерживается | n/a | [собственная](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | поддерживается | См. репозиторий. |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | поддерживается | См. репозиторий. |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | поддерживается | См. репозиторий. |
## Поддержка как автономной, так и сетевой конфигурации
Поддерживается как локальное преобразование голоса на ПК, так и преобразование через сеть.
Использование через сеть позволяет разгрузить преобразование голоса на внешние ресурсы при одновременном использовании с ресурсоемкими приложениями, такими как игры.
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## Поддержка нескольких платформ
Windows, Mac(M1), Linux, Google Colab
*1 Для Linux клонируйте репозиторий.
## Предоставление REST API
Можно создавать клиентов на различных языках программирования.
Также можно управлять с помощью встроенных в ОС HTTP-клиентов, таких как curl.
## Устранение неполадок
[Связь](tutorials/trouble_shoot_communication_ja.md)
## О подписи разработчика
Это программное обеспечение не подписано разработчиком. Появится предупреждение, как показано ниже, но вы можете запустить его, нажав на иконку, удерживая клавишу Control. Это связано с политикой безопасности Apple. Запуск осуществляется на ваш страх и риск.
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## Благодарности
* [Материалы от Tachi Zundamon](https://seiga.nicovideo.jp/seiga/im10792934)
* [Иллюстрации](https://www.irasutoya.com/)
* [Tsukuyomi-chan](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [Голосовые материалы от Amitaro](https://amitaro.net/)
* [Replikador](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## Условия использования
* Что касается реалтайм голосового преобразователя Tsukuyomi-chan, использование преобразованного голоса запрещено для следующих целей в соответствии с условиями использования корпуса Tsukuyomi-chan.
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* Что касается реалтайм голосового преобразователя Amitaro, он подчиняется следующим условиям использования от Amitaro's Voice Material Workshop. Подробности [здесь](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* Что касается реалтайм голосового преобразователя Kogane Mahiro, он подчиняется условиям использования Replikador. Подробности [здесь](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## Отказ от ответственности
Мы не несем ответственности за любые прямые, косвенные, побочные, последующие или особые убытки, возникшие в результате использования или невозможности использования этого программного обеспечения.

148
docs_i18n/README_zh.md Normal file
View File

@ -0,0 +1,148 @@
[日语](/README.md) /
[英语](/docs_i18n/README_en.md) /
[韩语](/docs_i18n/README_ko.md)/
[中文](/docs_i18n/README_zh.md)/
[德语](/docs_i18n/README_de.md)/
[阿拉伯语](/docs_i18n/README_ar.md)/
[希腊语](/docs_i18n/README_el.md)/
[西班牙语](/docs_i18n/README_es.md)/
[法语](/docs_i18n/README_fr.md)/
[意大利语](/docs_i18n/README_it.md)/
[拉丁语](/docs_i18n/README_la.md)/
[马来语](/docs_i18n/README_ms.md)/
[俄语](/docs_i18n/README_ru.md)
*除日语外,其他语言均为机器翻译。
## VCClient
VCClient是一款利用AI进行实时语音转换的软件。
## What's New!
* v.2.0.78-beta
* bug修复避免RVC模型上传错误
* 现在可以与ver.1.x同时启动
* 增加了可选择的chunk size
* v.2.0.77-beta (仅适用于 RTX 5090,实验性)
* 相关模块支持 RTX 5090由于开发者未拥有 RTX 5090未经验证
* v.2.0.76-beta
* 新功能:
* Beatrice: 实现说话者合并
* Beatrice: 自动音高转换
* 错误修复:
* 修复服务器模式下设备选择的问题
* v.2.0.73-beta
* 新功能:
* 下载编辑后的beatrice模型
* 错误修复:
* 修复了beatrice v2的音高和共振峰未反映的错误
* 修复了使用Applio的embedder的模型无法生成ONNX的错误
## 下载和相关链接
Windows版、M1 Mac版可以从hugging face的仓库下载。
* [VCClient 的仓库](https://huggingface.co/wok000/vcclient000/tree/main)
* [Light VCClient for Beatrice v2 的仓库](https://huggingface.co/wok000/light_vcclient_beatrice/tree/main)
*1 Linux请克隆仓库使用。
### 相关链接
* [Beatrice V2 训练代码的仓库](https://huggingface.co/fierce-cats/beatrice-trainer)
* [Beatrice V2 训练代码 Colab版](https://github.com/w-okada/beatrice-trainer-colab)
### 相关软件
* [实时变声器 VCClient](https://github.com/w-okada/voice-changer)
* [语音合成软件 TTSClient](https://github.com/w-okada/ttsclient)
* [实时语音识别软件 ASRClient](https://github.com/w-okada/asrclient)
## VC Client的特点
## 支持多种AI模型
| AI模型 | v.2 | v.1 | 许可证 |
| ------------------------------------------------------------------------------------------------------------ | --------- | -------------------- | ------------------------------------------------------------------------------------------ |
| [RVC ](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/jp/README.ja.md) | supported | supported | 请参阅仓库。 |
| [Beatrice v1](https://prj-beatrice.com/) | n/a | supported (only win) | [独立](https://github.com/w-okada/voice-changer/tree/master/server/voice_changer/Beatrice) |
| [Beatrice v2](https://prj-beatrice.com/) | supported | n/a | [独立](https://huggingface.co/wok000/vcclient_model/blob/main/beatrice_v2_beta/readme.md) |
| [MMVC](https://github.com/isletennos/MMVC_Trainer) | n/a | supported | 请参阅仓库。 |
| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | n/a | supported | 请参阅仓库。 |
| [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC) | n/a | supported | 请参阅仓库。 |
## 支持独立和通过网络的两种配置
支持在本地PC上完成的语音转换和通过网络的语音转换。
通过网络使用时,可以在与游戏等高负荷应用程序同时使用时将语音转换的负荷转移到外部。
![image](https://user-images.githubusercontent.com/48346627/206640768-53f6052d-0a96-403b-a06c-6714a0b7471d.png)
## 支持多平台
Windows, Mac(M1), Linux, Google Colab
*1 Linux请克隆仓库使用。
## 提供REST API
可以用各种编程语言创建客户端。
还可以使用curl等操作系统内置的HTTP客户端进行操作。
## 故障排除
[通信篇](tutorials/trouble_shoot_communication_ja.md)
## 关于开发者的签名
本软件未由开发者签名。虽然会出现如下警告但按住Control键并点击图标即可运行。这是由于Apple的安全策略所致。运行需自行承担风险。
![image](https://user-images.githubusercontent.com/48346627/212567711-c4a8d599-e24c-4fa3-8145-a5df7211f023.png)
## 致谢
* [立ちずんだもん素材](https://seiga.nicovideo.jp/seiga/im10792934)
* [いらすとや](https://www.irasutoya.com/)
* [つくよみちゃん](https://tyc.rei-yumesaki.net/)
```
本ソフトウェアの音声合成には、フリー素材キャラクター「つくよみちゃん」が無料公開している音声データを使用しています。
■つくよみちゃんコーパス(CV.夢前黎)
https://tyc.rei-yumesaki.net/material/corpus/
© Rei Yumesaki
```
* [あみたろの声素材工房](https://amitaro.net/)
* [れぷりかどーる](https://kikyohiroto1227.wixsite.com/kikoto-utau)
## 使用条款
* 关于实时变声器つくよみちゃん,禁止将转换后的语音用于以下目的,遵循つくよみちゃん语料库的使用条款。
```
■人を批判・攻撃すること。(「批判・攻撃」の定義は、つくよみちゃんキャラクターライセンスに準じます)
■特定の政治的立場・宗教・思想への賛同または反対を呼びかけること。
■刺激の強い表現をゾーニングなしで公開すること。
■他者に対して二次利用(素材としての利用)を許可する形で公開すること。
※鑑賞用の作品として配布・販売していただくことは問題ございません。
```
* 关于实时变声器あみたろ,遵循あみたろの声素材工房的以下使用条款。详情请见[这里](https://amitaro.net/voice/faq/#index_id6)
```
あみたろの声素材やコーパス読み上げ音声を使って音声モデルを作ったり、ボイスチェンジャーや声質変換などを使用して、自分の声をあみたろの声に変換して使うのもOKです。
ただしその場合は絶対に、あみたろ(もしくは小春音アミ)の声に声質変換していることを明記し、あみたろ(および小春音アミ)が話しているわけではないことが誰でもわかるようにしてください。
また、あみたろの声で話す内容は声素材の利用規約の範囲内のみとし、センシティブな発言などはしないでください。
```
* 关于实时变声器黄琴まひろ,遵循れぷりかどーる的使用条款。详情请见[这里](https://kikyohiroto1227.wixsite.com/kikoto-utau/ter%EF%BD%8Ds-of-service)
## 免责声明
对于因使用或无法使用本软件而导致的任何直接、间接、衍生、结果性或特殊损害,本软件概不负责。

View File

@ -58,12 +58,16 @@ def setupArgParser():
parser.add_argument("--hubert_base", type=str, default="pretrain/hubert_base.pt", help="path to hubert_base model(pytorch)")
parser.add_argument("--hubert_base_jp", type=str, default="pretrain/rinna_hubert_base_jp.pt", help="path to hubert_base_jp model(pytorch)")
parser.add_argument("--hubert_soft", type=str, default="pretrain/hubert/hubert-soft-0d54a1f4.pt", help="path to hubert_soft model(pytorch)")
parser.add_argument("--whisper_tiny", type=str, default="pretrain/whisper_tiny.pt", help="path to hubert_soft model(pytorch)")
parser.add_argument("--nsf_hifigan", type=str, default="pretrain/nsf_hifigan/model", help="path to nsf_hifigan model(pytorch)")
parser.add_argument("--crepe_onnx_full", type=str, default="pretrain/crepe_onnx_full.onnx", help="path to crepe_onnx_full")
parser.add_argument("--crepe_onnx_tiny", type=str, default="pretrain/crepe_onnx_tiny.onnx", help="path to crepe_onnx_tiny")
parser.add_argument("--rmvpe", type=str, default="pretrain/rmvpe.pt", help="path to rmvpe")
parser.add_argument("--rmvpe_onnx", type=str, default="pretrain/rmvpe.onnx", help="path to rmvpe onnx")
parser.add_argument("--host", type=str, default='127.0.0.1', help="IP address of the network interface to listen for HTTP connections. Specify 0.0.0.0 to listen on all interfaces.")
parser.add_argument("--allowed-origins", action='append', default=[], help="List of URLs to allow connection from, i.e. https://example.com. Allows http(s)://127.0.0.1:{port} and http(s)://localhost:{port} by default.")
return parser
@ -106,22 +110,26 @@ voiceChangerParams = VoiceChangerParams(
rmvpe=args.rmvpe,
rmvpe_onnx=args.rmvpe_onnx,
sample_mode=args.sample_mode,
whisper_tiny=args.whisper_tiny,
)
vcparams = VoiceChangerParamsManager.get_instance()
vcparams.setParams(voiceChangerParams)
printMessage(f"Booting PHASE :{__name__}", level=2)
HOST = args.host
PORT = args.p
def localServer(logLevel: str = "critical"):
def localServer(logLevel: str = "critical", key_path: str | None = None, cert_path: str | None = None):
try:
uvicorn.run(
f"{os.path.basename(__file__)[:-3]}:app_socketio",
host="0.0.0.0",
host=HOST,
port=int(PORT),
reload=False if hasattr(sys, "_MEIPASS") else True,
ssl_keyfile=key_path,
ssl_certfile=cert_path,
log_level=logLevel,
)
except Exception as e:
@ -132,8 +140,8 @@ if __name__ == "MMVCServerSIO":
mp.freeze_support()
voiceChangerManager = VoiceChangerManager.get_instance(voiceChangerParams)
app_fastapi = MMVC_Rest.get_instance(voiceChangerManager, voiceChangerParams)
app_socketio = MMVC_SocketIOApp.get_instance(app_fastapi, voiceChangerManager)
app_fastapi = MMVC_Rest.get_instance(voiceChangerManager, voiceChangerParams, args.allowed_origins, PORT)
app_socketio = MMVC_SocketIOApp.get_instance(app_fastapi, voiceChangerManager, args.allowed_origins, PORT)
if __name__ == "__mp_main__":
@ -218,34 +226,26 @@ if __name__ == "__main__":
printMessage("In many cases, it will launch when you access any of the following URLs.", level=2)
if "EX_PORT" in locals() and "EX_IP" in locals(): # シェルスクリプト経由起動(docker)
if args.https == 1:
printMessage(f"https://127.0.0.1:{EX_PORT}/", level=1)
printMessage(f"https://localhost:{EX_PORT}/", level=1)
for ip in EX_IP.strip().split(" "):
printMessage(f"https://{ip}:{EX_PORT}/", level=1)
else:
printMessage(f"http://127.0.0.1:{EX_PORT}/", level=1)
printMessage(f"http://localhost:{EX_PORT}/", level=1)
else: # 直接python起動
if args.https == 1:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((args.test_connect, 80))
hostname = s.getsockname()[0]
printMessage(f"https://127.0.0.1:{PORT}/", level=1)
printMessage(f"https://localhost:{PORT}/", level=1)
printMessage(f"https://{hostname}:{PORT}/", level=1)
else:
printMessage(f"http://127.0.0.1:{PORT}/", level=1)
printMessage(f"http://localhost:{PORT}/", level=1)
# サーバ起動
if args.https:
# HTTPS サーバ起動
try:
uvicorn.run(
f"{os.path.basename(__file__)[:-3]}:app_socketio",
host="0.0.0.0",
port=int(PORT),
reload=False if hasattr(sys, "_MEIPASS") else True,
ssl_keyfile=key_path,
ssl_certfile=cert_path,
log_level=args.logLevel,
)
localServer(args.logLevel, key_path, cert_path)
except Exception as e:
logger.error(f"[Voice Changer] Web Server(https) Launch Exception, {e}")
@ -254,12 +254,12 @@ if __name__ == "__main__":
p.start()
try:
if sys.platform.startswith("win"):
process = subprocess.Popen([NATIVE_CLIENT_FILE_WIN, "--disable-gpu", "-u", f"http://127.0.0.1:{PORT}/"])
process = subprocess.Popen([NATIVE_CLIENT_FILE_WIN, "--disable-gpu", "-u", f"http://localhost:{PORT}/"])
return_code = process.wait()
logger.info("client closed.")
p.terminate()
elif sys.platform.startswith("darwin"):
process = subprocess.Popen([NATIVE_CLIENT_FILE_MAC, "--disable-gpu", "-u", f"http://127.0.0.1:{PORT}/"])
process = subprocess.Popen([NATIVE_CLIENT_FILE_MAC, "--disable-gpu", "-u", f"http://localhost:{PORT}/"])
return_code = process.wait()
logger.info("client closed.")
p.terminate()

View File

@ -13,6 +13,8 @@ VoiceChangerType: TypeAlias = Literal[
"RVC",
"Diffusion-SVC",
"Beatrice",
"LLVC",
"EasyVC",
]
StaticSlot: TypeAlias = Literal["Beatrice-JVS",]
@ -55,7 +57,12 @@ def getFrontendPath():
return frontend_path
EmbedderType: TypeAlias = Literal["hubert_base", "contentvec", "hubert-base-japanese"]
EmbedderType: TypeAlias = Literal[
"hubert_base",
"contentvec",
"hubert-base-japanese",
"whisper",
]
class EnumInferenceTypes(Enum):
@ -69,6 +76,8 @@ class EnumInferenceTypes(Enum):
onnxRVC = "onnxRVC"
onnxRVCNono = "onnxRVCNono"
easyVC = "easyVC"
DiffusionSVCInferenceType: TypeAlias = Literal["combo",]
@ -81,6 +90,7 @@ PitchExtractorType: TypeAlias = Literal[
"crepe_tiny",
"rmvpe",
"rmvpe_onnx",
"fcpe",
]
ServerAudioDeviceType: TypeAlias = Literal["audioinput", "audiooutput"]
@ -97,11 +107,9 @@ RVCSampleMode: TypeAlias = Literal[
def getSampleJsonAndModelIds(mode: RVCSampleMode):
if mode == "production":
return [
# "https://huggingface.co/wok000/vcclient_model/raw/main/samples_0001.json",
# "https://huggingface.co/wok000/vcclient_model/raw/main/samples_0002.json",
"https://huggingface.co/wok000/vcclient_model/raw/main/samples_0003_t2.json",
"https://huggingface.co/wok000/vcclient_model/raw/main/samples_0003_o2.json",
"https://huggingface.co/wok000/vcclient_model/raw/main/samples_0003_d2.json",
"https://huggingface.co/wok000/vcclient_model/raw/main/samples_0004_t.json",
"https://huggingface.co/wok000/vcclient_model/raw/main/samples_0004_o.json",
"https://huggingface.co/wok000/vcclient_model/raw/main/samples_0004_d.json",
], [
("Tsukuyomi-chan_o", {"useIndex": False}),
("Amitaro_o", {"useIndex": False}),
@ -203,4 +211,4 @@ def getSampleJsonAndModelIds(mode: RVCSampleMode):
RVC_MODEL_DIRNAME = "rvc"
MAX_SLOT_NUM = 200
MAX_SLOT_NUM = 500

View File

@ -134,7 +134,33 @@ class BeatriceModelSlot(ModelSlot):
speakers: dict = field(default_factory=lambda: {1: "user1", 2: "user2"})
ModelSlots: TypeAlias = Union[ModelSlot, RVCModelSlot, MMVCv13ModelSlot, MMVCv15ModelSlot, SoVitsSvc40ModelSlot, DDSPSVCModelSlot, DiffusionSVCModelSlot, BeatriceModelSlot]
@dataclass
class LLVCModelSlot(ModelSlot):
voiceChangerType: VoiceChangerType = "LLVC"
modelFile: str = ""
configFile: str = ""
@dataclass
class EasyVCModelSlot(ModelSlot):
voiceChangerType: VoiceChangerType = "EasyVC"
modelFile: str = ""
version: str = ""
samplingRate: int = -1
ModelSlots: TypeAlias = Union[
ModelSlot,
RVCModelSlot,
MMVCv13ModelSlot,
MMVCv15ModelSlot,
SoVitsSvc40ModelSlot,
DDSPSVCModelSlot,
DiffusionSVCModelSlot,
BeatriceModelSlot,
LLVCModelSlot,
EasyVCModelSlot,
]
def loadSlotInfo(model_dir: str, slotIndex: int | StaticSlot) -> ModelSlots:
@ -165,10 +191,15 @@ def loadSlotInfo(model_dir: str, slotIndex: int | StaticSlot) -> ModelSlots:
return DiffusionSVCModelSlot(**{k: v for k, v in jsonDict.items() if k in slotInfoKey})
elif slotInfo.voiceChangerType == "Beatrice":
slotInfoKey.extend(list(BeatriceModelSlot.__annotations__.keys()))
if slotIndex == "Beatrice-JVS":
if slotIndex == "Beatrice-JVS": # STATIC Model
return BeatriceModelSlot(**{k: v for k, v in jsonDict.items() if k in slotInfoKey})
return BeatriceModelSlot(**{k: v for k, v in jsonDict.items() if k in slotInfoKey})
elif slotInfo.voiceChangerType == "LLVC":
slotInfoKey.extend(list(LLVCModelSlot.__annotations__.keys()))
return LLVCModelSlot(**{k: v for k, v in jsonDict.items() if k in slotInfoKey})
elif slotInfo.voiceChangerType == "EasyVC":
slotInfoKey.extend(list(EasyVCModelSlot.__annotations__.keys()))
return EasyVCModelSlot(**{k: v for k, v in jsonDict.items() if k in slotInfoKey})
else:
return ModelSlot()

View File

@ -19,9 +19,19 @@ def downloadWeight(voiceChangerParams: VoiceChangerParams):
crepe_onnx_tiny = voiceChangerParams.crepe_onnx_tiny
rmvpe = voiceChangerParams.rmvpe
rmvpe_onnx = voiceChangerParams.rmvpe_onnx
whisper_tiny = voiceChangerParams.whisper_tiny
weight_files = [content_vec_500_onnx, hubert_base, hubert_base_jp, hubert_soft,
nsf_hifigan, crepe_onnx_full, crepe_onnx_tiny, rmvpe]
weight_files = [
content_vec_500_onnx,
hubert_base,
hubert_base_jp,
hubert_soft,
nsf_hifigan,
crepe_onnx_full,
crepe_onnx_tiny,
rmvpe,
whisper_tiny,
]
# file exists check (currently only for rvc)
downloadParams = []
@ -119,6 +129,15 @@ def downloadWeight(voiceChangerParams: VoiceChangerParams):
}
)
if os.path.exists(whisper_tiny) is False:
downloadParams.append(
{
"url": "https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt",
"saveTo": whisper_tiny,
"position": 10,
}
)
with ThreadPoolExecutor() as pool:
pool.map(download, downloadParams)

View File

@ -1,4 +1,5 @@
import logging
import sys
import traceback
class UvicornSuppressFilter(logging.Filter):
@ -11,6 +12,24 @@ class NullHandler(logging.Handler):
pass
class DebugStreamHandler(logging.StreamHandler):
def emit(self, record):
try:
super().emit(record)
except Exception as e:
print(f"Error logging message: {e}", file=sys.stderr)
traceback.print_exc()
class DebugFileHandler(logging.FileHandler):
def emit(self, record):
try:
super().emit(record)
except Exception as e:
print(f"Error writing log message to file: {e}", file=sys.stderr)
traceback.print_exc()
class VoiceChangaerLogger:
_instance = None
@ -60,16 +79,19 @@ class VoiceChangaerLogger:
def initialize(self, initialize: bool):
if not self.logger.handlers:
if initialize:
file_handler = logging.FileHandler('vcclient.log', encoding='utf-8', mode='w')
# file_handler = logging.FileHandler("vcclient.log", encoding="utf-8", mode="w")
file_handler = DebugFileHandler("vcclient.log", encoding="utf-8", mode="w")
else:
file_handler = logging.FileHandler('vcclient.log', encoding='utf-8')
file_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s')
# file_handler = logging.FileHandler("vcclient.log", encoding="utf-8")
file_handler = DebugFileHandler("vcclient.log", encoding="utf-8")
file_formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(process)d - %(message)s")
file_handler.setFormatter(file_formatter)
file_handler.setLevel(logging.DEBUG)
self.logger.addHandler(file_handler)
stream_formatter = logging.Formatter('%(message)s')
stream_handler = logging.StreamHandler()
stream_formatter = logging.Formatter("%(message)s")
# stream_handler = logging.StreamHandler()
stream_handler = DebugStreamHandler()
stream_handler.setFormatter(stream_formatter)
stream_handler.setLevel(logging.INFO)
self.logger.addHandler(stream_handler)

24
server/mods/origins.py Normal file
View File

@ -0,0 +1,24 @@
from typing import Optional, Sequence
from urllib.parse import urlparse
ENFORCE_URL_ORIGIN_FORMAT = "Input origins must be well-formed URLs, i.e. https://google.com or https://www.google.com."
SCHEMAS = ('http', 'https')
LOCAL_ORIGINS = ('127.0.0.1', 'localhost')
def compute_local_origins(port: Optional[int] = None) -> list[str]:
local_origins = [f'{schema}://{origin}' for schema in SCHEMAS for origin in LOCAL_ORIGINS]
if port is not None:
local_origins = [f'{origin}:{port}' for origin in local_origins]
return local_origins
def normalize_origins(origins: Sequence[str]) -> set[str]:
allowed_origins = set()
for origin in origins:
url = urlparse(origin)
assert url.scheme, ENFORCE_URL_ORIGIN_FORMAT
valid_origin = f'{url.scheme}://{url.hostname}'
if url.port:
valid_origin += f':{url.port}'
allowed_origins.add(valid_origin)
return allowed_origins
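For reference, a minimal sketch of what the two helpers above return. The port value 18888 is only an assumed example; origins would normally arrive via the repeatable `--allowed-origins` flag added to the argument parser earlier in this diff.
```
# Hypothetical usage of the helpers added in server/mods/origins.py above.
from mods.origins import compute_local_origins, normalize_origins

# With an assumed port of 18888, the implicit local origins become:
print(compute_local_origins(18888))
# ['http://127.0.0.1:18888', 'http://localhost:18888',
#  'https://127.0.0.1:18888', 'https://localhost:18888']

# User-supplied origins are reduced to scheme://host[:port] and de-duplicated:
print(normalize_origins(["https://example.com", "https://example.com:8080/some/path"]))
# {'https://example.com', 'https://example.com:8080'}
```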

View File

@ -27,3 +27,4 @@ websockets==11.0.2
sounddevice==0.4.6
dataclasses_json==0.5.7
onnxsim==0.4.28
torchfcpe==0.0.3

View File

@ -1,12 +1,12 @@
import os
import sys
from restapi.mods.trustedorigin import TrustedOriginMiddleware
from fastapi import FastAPI, Request, Response, HTTPException
from fastapi.routing import APIRoute
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.exceptions import RequestValidationError
from typing import Callable
from typing import Callable, Optional, Sequence, Literal
from mods.log_control import VoiceChangaerLogger
from voice_changer.VoiceChangerManager import VoiceChangerManager
@ -43,17 +43,17 @@ class MMVC_Rest:
cls,
voiceChangerManager: VoiceChangerManager,
voiceChangerParams: VoiceChangerParams,
allowedOrigins: Optional[Sequence[str]] = None,
port: Optional[int] = None,
):
if cls._instance is None:
logger.info("[Voice Changer] MMVC_Rest initializing...")
app_fastapi = FastAPI()
app_fastapi.router.route_class = ValidationErrorLoggingRoute
app_fastapi.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
TrustedOriginMiddleware,
allowed_origins=allowedOrigins,
port=port
)
app_fastapi.mount(
@ -75,7 +75,10 @@ class MMVC_Rest:
)
app_fastapi.mount("/tmp", StaticFiles(directory=f"{TMP_DIR}"), name="static")
app_fastapi.mount("/upload_dir", StaticFiles(directory=f"{UPLOAD_DIR}"), name="static")
app_fastapi.mount("/model_dir_static", StaticFiles(directory=f"{MODEL_DIR_STATIC}"), name="static")
try:
app_fastapi.mount("/model_dir_static", StaticFiles(directory=f"{MODEL_DIR_STATIC}"), name="static")
except Exception as e:
print("Locating model_dir_static failed", e)
if sys.platform.startswith("darwin"):
p1 = os.path.dirname(sys._MEIPASS)

View File

@ -39,9 +39,8 @@ class MMVC_Rest_VoiceChanger:
# struct.unpack("<%sh" % (len(wav) // struct.calcsize("<h")), wav)
# )
unpackedData = np.array(
struct.unpack("<%sh" % (len(wav) // struct.calcsize("<h")), wav)
)
unpackedData = np.array(struct.unpack("<%sh" % (len(wav) // struct.calcsize("<h")), wav)).astype(np.int16)
# print(f"[REST] unpackedDataType {unpackedData.dtype}")
self.tlock.acquire()
changedVoice = self.voiceChangerManager.changeVoice(unpackedData)

View File

@ -2,12 +2,22 @@ import os
import shutil
from fastapi import UploadFile
# UPLOAD_DIR = "model_upload_dir"
def sanitize_filename(filename: str) -> str:
safe_filename = os.path.basename(filename)
max_length = 255
if len(safe_filename) > max_length:
file_root, file_ext = os.path.splitext(safe_filename)
safe_filename = file_root[: max_length - len(file_ext)] + file_ext
return safe_filename
def upload_file(upload_dirname: str, file: UploadFile, filename: str):
if file and filename:
fileobj = file.file
filename = sanitize_filename(filename)
target_path = os.path.join(upload_dirname, filename)
target_dir = os.path.dirname(target_path)
os.makedirs(target_dir, exist_ok=True)
@ -19,9 +29,8 @@ def upload_file(upload_dirname: str, file: UploadFile, filename: str):
return {"status": "ERROR", "msg": "uploaded file is not found."}
def concat_file_chunks(
upload_dirname: str, filename: str, chunkNum: int, dest_dirname: str
):
def concat_file_chunks(upload_dirname: str, filename: str, chunkNum: int, dest_dirname: str):
filename = sanitize_filename(filename)
target_path = os.path.join(upload_dirname, filename)
target_dir = os.path.dirname(target_path)
os.makedirs(target_dir, exist_ok=True)
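As a quick illustration, a self-contained sketch of what the sanitize_filename guard above does to hostile or over-long names; the input strings are made-up examples.
```
import os

def sanitize_filename(filename: str) -> str:
    # Same logic as the helper added above: drop any directory part,
    # then cap the name at 255 characters while preserving the extension.
    safe_filename = os.path.basename(filename)
    max_length = 255
    if len(safe_filename) > max_length:
        file_root, file_ext = os.path.splitext(safe_filename)
        safe_filename = file_root[: max_length - len(file_ext)] + file_ext
    return safe_filename

print(sanitize_filename("../../etc/passwd"))       # -> "passwd" (path traversal stripped)
print(len(sanitize_filename("a" * 300 + ".pth")))  # -> 255 (over-long name truncated)
```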

View File

@ -0,0 +1,43 @@
from typing import Optional, Sequence, Literal
from mods.origins import compute_local_origins, normalize_origins
from starlette.datastructures import Headers
from starlette.responses import PlainTextResponse
from starlette.types import ASGIApp, Receive, Scope, Send
class TrustedOriginMiddleware:
def __init__(
self,
app: ASGIApp,
allowed_origins: Optional[Sequence[str]] = None,
port: Optional[int] = None,
) -> None:
self.allowed_origins: set[str] = set()
local_origins = compute_local_origins(port)
self.allowed_origins.update(local_origins)
if allowed_origins is not None:
normalized_origins = normalize_origins(allowed_origins)
self.allowed_origins.update(normalized_origins)
self.app = app
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
if scope["type"] not in (
"http",
"websocket",
): # pragma: no cover
await self.app(scope, receive, send)
return
headers = Headers(scope=scope)
origin = headers.get("origin", "")
# Origin header is not present for same origin
if not origin or origin in self.allowed_origins:
await self.app(scope, receive, send)
return
response = PlainTextResponse("Invalid origin header", status_code=400)
await response(scope, receive, send)
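A rough sketch of how this middleware is attached to a FastAPI app, mirroring the MMVC_Rest change earlier in this diff; the port and origin values are assumed examples, not taken from the repository.
```
# Sketch only: wiring TrustedOriginMiddleware into a bare FastAPI app.
from fastapi import FastAPI
from restapi.mods.trustedorigin import TrustedOriginMiddleware

app = FastAPI()
app.add_middleware(
    TrustedOriginMiddleware,
    allowed_origins=["https://example.com"],  # extra origins beyond 127.0.0.1/localhost
    port=18888,                               # assumed port; local origins are derived from it
)
# Requests whose Origin header is present but not in the allowed set are
# answered with "Invalid origin header" (HTTP 400); same-origin requests
# (no Origin header) pass through untouched.
```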

View File

@ -1,6 +1,8 @@
import socketio
from mods.log_control import VoiceChangaerLogger
from mods.origins import compute_local_origins, normalize_origins
from typing import Sequence, Optional
from sio.MMVC_SocketIOServer import MMVC_SocketIOServer
from voice_changer.VoiceChangerManager import VoiceChangerManager
from const import getFrontendPath
@ -12,10 +14,24 @@ class MMVC_SocketIOApp:
_instance: socketio.ASGIApp | None = None
@classmethod
def get_instance(cls, app_fastapi, voiceChangerManager: VoiceChangerManager):
def get_instance(
cls,
app_fastapi,
voiceChangerManager: VoiceChangerManager,
allowedOrigins: Optional[Sequence[str]] = None,
port: Optional[int] = None,
):
if cls._instance is None:
logger.info("[Voice Changer] MMVC_SocketIOApp initializing...")
sio = MMVC_SocketIOServer.get_instance(voiceChangerManager)
allowed_origins: set[str] = set()
local_origins = compute_local_origins(port)
allowed_origins.update(local_origins)
if allowedOrigins is not None:
normalized_origins = normalize_origins(allowedOrigins)
allowed_origins.update(normalized_origins)
sio = MMVC_SocketIOServer.get_instance(voiceChangerManager, list(allowed_origins))
app_socketio = socketio.ASGIApp(
sio,
other_asgi_app=app_fastapi,
@ -44,6 +60,14 @@ class MMVC_SocketIOApp:
"filename": f"{getFrontendPath()}/ort-wasm-simd.wasm",
"content_type": "application/wasm",
},
"/assets/beatrice/female-clickable.svg": {
"filename": f"{getFrontendPath()}/assets/beatrice/female-clickable.svg",
"content_type": "image/svg+xml",
},
"/assets/beatrice/male-clickable.svg": {
"filename": f"{getFrontendPath()}/assets/beatrice/male-clickable.svg",
"content_type": "image/svg+xml",
},
"": f"{getFrontendPath()}",
"/": f"{getFrontendPath()}/index.html",
},

View File

@ -8,9 +8,13 @@ class MMVC_SocketIOServer:
_instance: socketio.AsyncServer | None = None
@classmethod
def get_instance(cls, voiceChangerManager: VoiceChangerManager):
def get_instance(
cls,
voiceChangerManager: VoiceChangerManager,
allowedOrigins: list[str],
):
if cls._instance is None:
sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins="*")
sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins=allowedOrigins)
namespace = MMVC_Namespace.get_instance(voiceChangerManager)
sio.register_namespace(namespace)
cls._instance = sio

View File

@ -20,7 +20,7 @@ else:
from .models.diffusion.infer_gt_mel import DiffGtMel
from voice_changer.utils.VoiceChangerModel import AudioInOut
from voice_changer.utils.VoiceChangerModel import AudioInOut, VoiceChangerModel
from voice_changer.utils.VoiceChangerParams import VoiceChangerParams
from voice_changer.DDSP_SVC.DDSP_SVCSetting import DDSP_SVCSettings
from voice_changer.RVC.embedder.EmbedderManager import EmbedderManager
@ -44,15 +44,20 @@ def phase_vocoder(a, b, fade_out, fade_in):
deltaphase = deltaphase - 2 * np.pi * torch.floor(deltaphase / 2 / np.pi + 0.5)
w = 2 * np.pi * torch.arange(n // 2 + 1).to(a) + deltaphase
t = torch.arange(n).unsqueeze(-1).to(a) / n
result = a * (fade_out**2) + b * (fade_in**2) + torch.sum(absab * torch.cos(w * t + phia), -1) * fade_out * fade_in / n
result = (
a * (fade_out**2)
+ b * (fade_in**2)
+ torch.sum(absab * torch.cos(w * t + phia), -1) * fade_out * fade_in / n
)
return result
class DDSP_SVC:
class DDSP_SVC(VoiceChangerModel):
initialLoad: bool = True
def __init__(self, params: VoiceChangerParams, slotInfo: DDSPSVCModelSlot):
print("[Voice Changer] [DDSP-SVC] Creating instance ")
self.voiceChangerType = "DDSP-SVC"
self.deviceManager = DeviceManager.get_instance()
self.gpu_num = torch.cuda.device_count()
self.params = params
@ -71,8 +76,18 @@ class DDSP_SVC:
def initialize(self):
self.device = self.deviceManager.getDevice(self.settings.gpu)
vcparams = VoiceChangerParamsManager.get_instance().params
modelPath = os.path.join(vcparams.model_dir, str(self.slotInfo.slotIndex), "model", self.slotInfo.modelFile)
diffPath = os.path.join(vcparams.model_dir, str(self.slotInfo.slotIndex), "diff", self.slotInfo.diffModelFile)
modelPath = os.path.join(
vcparams.model_dir,
str(self.slotInfo.slotIndex),
"model",
self.slotInfo.modelFile,
)
diffPath = os.path.join(
vcparams.model_dir,
str(self.slotInfo.slotIndex),
"diff",
self.slotInfo.diffModelFile,
)
self.svc_model = SvcDDSP()
self.svc_model.setVCParams(self.params)
@ -112,11 +127,15 @@ class DDSP_SVC:
# newData = newData.astype(np.float32)
if self.audio_buffer is not None:
self.audio_buffer = np.concatenate([self.audio_buffer, newData], 0) # 過去のデータに連結
self.audio_buffer = np.concatenate(
[self.audio_buffer, newData], 0
) # 過去のデータに連結
else:
self.audio_buffer = newData
convertSize = inputSize + crossfadeSize + solaSearchFrame + self.settings.extraConvertSize
convertSize = (
inputSize + crossfadeSize + solaSearchFrame + self.settings.extraConvertSize
)
# if convertSize % self.hop_size != 0: # モデルの出力のホップサイズで切り捨てが発生するので補う。
# convertSize = convertSize + (self.hop_size - (convertSize % self.hop_size))
@ -147,7 +166,8 @@ class DDSP_SVC:
f0_min=50,
f0_max=1100,
# safe_prefix_pad_length=0, # TBD なにこれ?
safe_prefix_pad_length=self.settings.extraConvertSize / self.svc_model.args.data.sampling_rate,
safe_prefix_pad_length=self.settings.extraConvertSize
/ self.svc_model.args.data.sampling_rate,
diff_model=self.diff_model,
diff_acc=self.settings.diffAcc, # TBD なにこれ?
diff_spk_id=self.settings.diffSpkId,
@ -155,7 +175,9 @@ class DDSP_SVC:
# diff_use_dpm=True if self.settings.useDiffDpm == 1 else False, # TBD なにこれ?
method=self.settings.diffMethod,
k_step=self.settings.kStep, # TBD なにこれ?
diff_silence=True if self.settings.useDiffSilence == 1 else False, # TBD なにこれ?
diff_silence=True
if self.settings.useDiffSilence == 1
else False, # TBD なにこれ?
)
return _audio.cpu().numpy() * 32768.0
@ -182,5 +204,4 @@ class DDSP_SVC:
pass
def get_model_current(self):
return [
]
return []

View File

@ -6,16 +6,28 @@ from voice_changer.DiffusionSVC.DiffusionSVCSettings import DiffusionSVCSettings
from voice_changer.DiffusionSVC.inferencer.InferencerManager import InferencerManager
from voice_changer.DiffusionSVC.pipeline.Pipeline import Pipeline
from voice_changer.DiffusionSVC.pipeline.PipelineGenerator import createPipeline
from voice_changer.DiffusionSVC.pitchExtractor.PitchExtractorManager import PitchExtractorManager
from voice_changer.DiffusionSVC.pitchExtractor.PitchExtractorManager import (
PitchExtractorManager,
)
from voice_changer.ModelSlotManager import ModelSlotManager
from voice_changer.utils.VoiceChangerModel import AudioInOut, PitchfInOut, FeatureInOut, VoiceChangerModel
from voice_changer.utils.VoiceChangerModel import (
AudioInOut,
PitchfInOut,
FeatureInOut,
VoiceChangerModel,
)
from voice_changer.utils.VoiceChangerParams import VoiceChangerParams
from voice_changer.RVC.embedder.EmbedderManager import EmbedderManager
# from voice_changer.RVC.onnxExporter.export2onnx import export2onnx
from voice_changer.RVC.deviceManager.DeviceManager import DeviceManager
from Exceptions import DeviceCannotSupportHalfPrecisionException, PipelineCreateException, PipelineNotInitializedException
from Exceptions import (
DeviceCannotSupportHalfPrecisionException,
PipelineCreateException,
PipelineNotInitializedException,
)
logger = VoiceChangaerLogger.get_instance().getLogger()
@ -23,6 +35,7 @@ logger = VoiceChangaerLogger.get_instance().getLogger()
class DiffusionSVC(VoiceChangerModel):
def __init__(self, params: VoiceChangerParams, slotInfo: DiffusionSVCModelSlot):
logger.info("[Voice Changer] [DiffusionSVC] Creating instance ")
self.voiceChangerType = "Diffusion-SVC"
self.deviceManager = DeviceManager.get_instance()
EmbedderManager.initialize(params)
PitchExtractorManager.initialize(params)
@ -46,9 +59,17 @@ class DiffusionSVC(VoiceChangerModel):
# pipelineの生成
try:
self.pipeline = createPipeline(self.slotInfo, self.settings.gpu, self.settings.f0Detector, self.inputSampleRate, self.outputSampleRate)
self.pipeline = createPipeline(
self.slotInfo,
self.settings.gpu,
self.settings.f0Detector,
self.inputSampleRate,
self.outputSampleRate,
)
except PipelineCreateException as e: # NOQA
logger.error("[Voice Changer] pipeline create failed. check your model is valid.")
logger.error(
"[Voice Changer] pipeline create failed. check your model is valid."
)
return
# その他の設定
@ -76,7 +97,9 @@ class DiffusionSVC(VoiceChangerModel):
elif key in self.settings.strData:
setattr(self.settings, key, str(val))
if key == "f0Detector" and self.pipeline is not None:
pitchExtractor = PitchExtractorManager.getPitchExtractor(self.settings.f0Detector, self.settings.gpu)
pitchExtractor = PitchExtractorManager.getPitchExtractor(
self.settings.f0Detector, self.settings.gpu
)
self.pipeline.setPitchExtractor(pitchExtractor)
else:
return False
@ -100,30 +123,65 @@ class DiffusionSVC(VoiceChangerModel):
crossfadeSize: int,
solaSearchFrame: int = 0,
):
newData = newData.astype(np.float32) / 32768.0 # DiffusionSVCのモデルのサンプリングレートで入ってきている。extraDataLength, Crossfade等も同じSRで処理(★1)
new_feature_length = int(((newData.shape[0] / self.inputSampleRate) * self.slotInfo.samplingRate) / 512) # 100 は hubertのhosizeから (16000 / 160).
newData = (
newData.astype(np.float32) / 32768.0
) # DiffusionSVCのモデルのサンプリングレートで入ってきている。extraDataLength, Crossfade等も同じSRで処理(★1)
new_feature_length = int(
((newData.shape[0] / self.inputSampleRate) * self.slotInfo.samplingRate)
/ 512
) # 100 は hubertのhosizeから (16000 / 160).
# ↑newData.shape[0]//sampleRate でデータ秒数。これに16000かけてhubertの世界でのデータ長。これにhop数(160)でわるとfeatsのデータサイズになる。
if self.audio_buffer is not None:
# 過去のデータに連結
self.audio_buffer = np.concatenate([self.audio_buffer, newData], 0)
self.pitchf_buffer = np.concatenate([self.pitchf_buffer, np.zeros(new_feature_length)], 0)
self.feature_buffer = np.concatenate([self.feature_buffer, np.zeros([new_feature_length, self.slotInfo.embChannels])], 0)
self.pitchf_buffer = np.concatenate(
[self.pitchf_buffer, np.zeros(new_feature_length)], 0
)
self.feature_buffer = np.concatenate(
[
self.feature_buffer,
np.zeros([new_feature_length, self.slotInfo.embChannels]),
],
0,
)
else:
self.audio_buffer = newData
self.pitchf_buffer = np.zeros(new_feature_length)
self.feature_buffer = np.zeros([new_feature_length, self.slotInfo.embChannels])
self.feature_buffer = np.zeros(
[new_feature_length, self.slotInfo.embChannels]
)
convertSize = newData.shape[0] + crossfadeSize + solaSearchFrame + self.settings.extraConvertSize
convertSize = (
newData.shape[0]
+ crossfadeSize
+ solaSearchFrame
+ self.settings.extraConvertSize
)
if convertSize % 128 != 0: # モデルの出力のホップサイズで切り捨てが発生するので補う。
convertSize = convertSize + (128 - (convertSize % 128))
# バッファがたまっていない場合はzeroで補う
generateFeatureLength = int(((convertSize / self.inputSampleRate) * self.slotInfo.samplingRate) / 512) + 1
generateFeatureLength = (
int(
((convertSize / self.inputSampleRate) * self.slotInfo.samplingRate)
/ 512
)
+ 1
)
if self.audio_buffer.shape[0] < convertSize:
self.audio_buffer = np.concatenate([np.zeros([convertSize]), self.audio_buffer])
self.pitchf_buffer = np.concatenate([np.zeros(generateFeatureLength), self.pitchf_buffer])
self.feature_buffer = np.concatenate([np.zeros([generateFeatureLength, self.slotInfo.embChannels]), self.feature_buffer])
self.audio_buffer = np.concatenate(
[np.zeros([convertSize]), self.audio_buffer]
)
self.pitchf_buffer = np.concatenate(
[np.zeros(generateFeatureLength), self.pitchf_buffer]
)
self.feature_buffer = np.concatenate(
[
np.zeros([generateFeatureLength, self.slotInfo.embChannels]),
self.feature_buffer,
]
)
convertOffset = -1 * convertSize
featureOffset = -1 * generateFeatureLength
@ -139,9 +197,17 @@ class DiffusionSVC(VoiceChangerModel):
vol = float(max(vol, self.prevVol * 0.0))
self.prevVol = vol
return (self.audio_buffer, self.pitchf_buffer, self.feature_buffer, convertSize, vol)
return (
self.audio_buffer,
self.pitchf_buffer,
self.feature_buffer,
convertSize,
vol,
)
def inference(self, receivedData: AudioInOut, crossfade_frame: int, sola_search_frame: int):
def inference(
self, receivedData: AudioInOut, crossfade_frame: int, sola_search_frame: int
):
if self.pipeline is None:
logger.info("[Voice Changer] Pipeline is not initialized.")
raise PipelineNotInitializedException()
@ -169,7 +235,11 @@ class DiffusionSVC(VoiceChangerModel):
speedUp = self.settings.speedUp
embOutputLayer = 12
useFinalProj = False
silenceFrontSec = self.settings.extraConvertSize / self.inputSampleRate if self.settings.silenceFront else 0. # extaraConvertSize(既にモデルのサンプリングレートにリサンプリング済み)の秒数。モデルのサンプリングレートで処理(★1)。
silenceFrontSec = (
self.settings.extraConvertSize / self.inputSampleRate
if self.settings.silenceFront
else 0.0
) # extaraConvertSize(既にモデルのサンプリングレートにリサンプリング済み)の秒数。モデルのサンプリングレートで処理(★1)。
try:
audio_out, self.pitchf_buffer, self.feature_buffer = self.pipeline.exec(
@ -190,7 +260,9 @@ class DiffusionSVC(VoiceChangerModel):
result = audio_out.detach().cpu().numpy()
return result
except DeviceCannotSupportHalfPrecisionException as e: # NOQA
logger.warn("[Device Manager] Device cannot support half precision. Fallback to float....")
logger.warn(
"[Device Manager] Device cannot support half precision. Fallback to float...."
)
self.deviceManager.setForceTensor(True)
self.initialize()
# raise e

View File

@ -7,7 +7,7 @@ from voice_changer.DiffusionSVC.inferencer.diffusion_svc_model.diffusion.vocoder
from voice_changer.DiffusionSVC.inferencer.onnx.VocoderOnnx import VocoderOnnx
from voice_changer.RVC.deviceManager.DeviceManager import DeviceManager
from voice_changer.utils.Timer import Timer
from voice_changer.utils.Timer import Timer2
class DiffusionSVCInferencer(Inferencer):
@ -49,18 +49,14 @@ class DiffusionSVCInferencer(Inferencer):
return model_block_size, model_sampling_rate
@torch.no_grad() # 最基本推理代码,将输入标准化为tensor,只与mel打交道
def __call__(self, units, f0, volume, spk_id=1, spk_mix_dict=None, aug_shift=0,
gt_spec=None, infer_speedup=10, method='dpm-solver', k_step=None, use_tqdm=True,
spk_emb=None):
def __call__(self, units, f0, volume, spk_id=1, spk_mix_dict=None, aug_shift=0, gt_spec=None, infer_speedup=10, method="dpm-solver", k_step=None, use_tqdm=True, spk_emb=None):
if self.diff_args.model.k_step_max is not None:
if k_step is None:
raise ValueError("k_step must not None when Shallow Diffusion Model inferring")
if k_step > int(self.diff_args.model.k_step_max):
raise ValueError("k_step must <= k_step_max of Shallow Diffusion Model")
if gt_spec is None:
raise ValueError("gt_spec must not None when Shallow Diffusion Model inferring, gt_spec can from "
"input mel or output of naive model")
raise ValueError("gt_spec must not None when Shallow Diffusion Model inferring, gt_spec can from " "input mel or output of naive model")
aug_shift = torch.from_numpy(np.array([[float(aug_shift)]])).float().to(self.dev)
@ -75,8 +71,7 @@ class DiffusionSVCInferencer(Inferencer):
return self.diff_model(units, f0, volume, spk_id=spk_id, spk_mix_dict=spk_mix_dict, aug_shift=aug_shift, gt_spec=gt_spec, infer=True, infer_speedup=infer_speedup, method=method, k_step=k_step, use_tqdm=use_tqdm, spk_emb=spk_emb, spk_emb_dict=spk_emb_dict)
@torch.no_grad()
def naive_model_call(self, units, f0, volume, spk_id=1, spk_mix_dict=None,
aug_shift=0, spk_emb=None):
def naive_model_call(self, units, f0, volume, spk_id=1, spk_mix_dict=None, aug_shift=0, spk_emb=None):
# spk_id
spk_emb_dict = None
if self.diff_args.model.use_speaker_encoder: # with speaker encoder
@ -85,9 +80,7 @@ class DiffusionSVCInferencer(Inferencer):
else:
spk_id = torch.LongTensor(np.array([[int(spk_id)]])).to(self.dev)
aug_shift = torch.from_numpy(np.array([[float(aug_shift)]])).float().to(self.dev)
out_spec = self.naive_model(units, f0, volume, spk_id=spk_id, spk_mix_dict=spk_mix_dict,
aug_shift=aug_shift, infer=True,
spk_emb=spk_emb, spk_emb_dict=spk_emb_dict)
out_spec = self.naive_model(units, f0, volume, spk_id=spk_id, spk_mix_dict=spk_mix_dict, aug_shift=aug_shift, infer=True, spk_emb=spk_emb, spk_emb_dict=spk_emb_dict)
return out_spec
@torch.no_grad()
@ -114,19 +107,19 @@ class DiffusionSVCInferencer(Inferencer):
silence_front: float,
skip_diffusion: bool = True,
) -> torch.Tensor:
with Timer("pre-process", False) as t:
use_timer = False
with Timer2(" Naive", use_timer) as t:
gt_spec = self.naive_model_call(feats, pitch, volume, spk_id=sid, spk_mix_dict=None, aug_shift=0, spk_emb=None)
# print("[ ----Timer::1: ]", t.secs)
with Timer("pre-process", False) as t:
with Timer2(" Diffuser", use_timer) as t:
if skip_diffusion == 0:
out_mel = self.__call__(feats, pitch, volume, spk_id=sid, spk_mix_dict=None, aug_shift=0, gt_spec=gt_spec, infer_speedup=infer_speedup, method='dpm-solver', k_step=k_step, use_tqdm=False, spk_emb=None)
out_mel = self.__call__(feats, pitch, volume, spk_id=sid, spk_mix_dict=None, aug_shift=0, gt_spec=gt_spec, infer_speedup=infer_speedup, method="dpm-solver", k_step=k_step, use_tqdm=False, spk_emb=None)
gt_spec = out_mel
# print("[ ----Timer::2: ]", t.secs)
with Timer("pre-process", False) as t: # NOQA
with Timer2(" Vocoder", use_timer) as t: # NOQA
if self.vocoder_onnx is None:
start_frame = int(silence_front * self.vocoder.vocoder_sample_rate / self.vocoder.vocoder_hop_size)
out_wav = self.mel2wav(gt_spec, pitch, start_frame=start_frame)

View File

@ -17,7 +17,7 @@ from voice_changer.RVC.embedder.Embedder import Embedder
from voice_changer.common.VolumeExtractor import VolumeExtractor
from torchaudio.transforms import Resample
from voice_changer.utils.Timer import Timer
from voice_changer.utils.Timer import Timer2
logger = VoiceChangaerLogger.get_instance().getLogger()
@ -45,7 +45,7 @@ class Pipeline(object):
device,
isHalf,
resamplerIn: Resample,
resamplerOut: Resample
resamplerOut: Resample,
):
self.inferencer = inferencer
inferencer_block_size, inferencer_sampling_rate = inferencer.getConfig()
@ -64,7 +64,7 @@ class Pipeline(object):
logger.info("GENERATE INFERENCER" + str(self.inferencer))
logger.info("GENERATE EMBEDDER" + str(self.embedder))
logger.info("GENERATE PITCH EXTRACTOR" + str(self.pitchExtractor))
self.targetSR = targetSR
self.device = device
self.isHalf = False
@ -102,8 +102,9 @@ class Pipeline(object):
protect=0.5,
skip_diffusion=True,
):
use_timer = False
# print("---------- pipe line --------------------")
with Timer("pre-process", False) as t:
with Timer2("pre-process", use_timer) as t:
audio_t = torch.from_numpy(audio).float().unsqueeze(0).to(self.device)
audio16k = self.resamplerIn(audio_t)
volume, mask = self.extract_volume_and_mask(audio16k, threshold=-60.0)
@ -111,7 +112,7 @@ class Pipeline(object):
n_frames = int(audio16k.size(-1) // self.hop_size + 1)
# print("[Timer::1: ]", t.secs)
with Timer("pre-process", False) as t:
with Timer2("extract pitch", use_timer) as t:
# ピッチ検出
try:
# pitch = self.pitchExtractor.extract(
@ -141,8 +142,7 @@ class Pipeline(object):
feats = feats.view(1, -1)
# print("[Timer::2: ]", t.secs)
with Timer("pre-process", False) as t:
with Timer2("extract feature", use_timer) as t:
# embedding
with autocast(enabled=self.isHalf):
try:
@ -156,28 +156,17 @@ class Pipeline(object):
raise DeviceChangingException()
else:
raise e
feats = F.interpolate(feats.permute(0, 2, 1), size=int(n_frames), mode='nearest').permute(0, 2, 1)
feats = F.interpolate(feats.permute(0, 2, 1), size=int(n_frames), mode="nearest").permute(0, 2, 1)
# print("[Timer::3: ]", t.secs)
with Timer("pre-process", False) as t:
with Timer2("infer", use_timer) as t:
# 推論実行
try:
with torch.no_grad():
with autocast(enabled=self.isHalf):
audio1 = (
torch.clip(
self.inferencer.infer(
audio16k,
feats,
pitch.unsqueeze(-1),
volume,
mask,
sid,
k_step,
infer_speedup,
silence_front=silence_front,
skip_diffusion=skip_diffusion
).to(dtype=torch.float32),
self.inferencer.infer(audio16k, feats, pitch.unsqueeze(-1), volume, mask, sid, k_step, infer_speedup, silence_front=silence_front, skip_diffusion=skip_diffusion).to(dtype=torch.float32),
-1.0,
1.0,
)
@ -191,7 +180,7 @@ class Pipeline(object):
raise e
# print("[Timer::4: ]", t.secs)
with Timer("pre-process", False) as t: # NOQA
with Timer2("post-process", use_timer) as t: # NOQA
feats_buffer = feats.squeeze(0).detach().cpu()
if pitch is not None:
pitch_buffer = pitch.squeeze(0).detach().cpu()

View File

@ -0,0 +1,326 @@
"""
VoiceChangerV2向け
"""
from dataclasses import asdict
import numpy as np
import torch
from data.ModelSlot import RVCModelSlot
from mods.log_control import VoiceChangaerLogger
from voice_changer.EasyVC.EasyVCSettings import EasyVCSettings
from voice_changer.EasyVC.pipeline.Pipeline import Pipeline
from voice_changer.EasyVC.pipeline.PipelineGenerator import createPipeline
from voice_changer.RVC.RVCSettings import RVCSettings
from voice_changer.RVC.embedder.EmbedderManager import EmbedderManager
from voice_changer.utils.Timer import Timer2
from voice_changer.utils.VoiceChangerModel import (
AudioInOut,
PitchfInOut,
FeatureInOut,
VoiceChangerModel,
)
from voice_changer.utils.VoiceChangerParams import VoiceChangerParams
from voice_changer.RVC.onnxExporter.export2onnx import export2onnx
from voice_changer.RVC.pitchExtractor.PitchExtractorManager import PitchExtractorManager
from voice_changer.RVC.deviceManager.DeviceManager import DeviceManager
from Exceptions import (
DeviceCannotSupportHalfPrecisionException,
PipelineCreateException,
PipelineNotInitializedException,
)
import resampy
from typing import cast
logger = VoiceChangaerLogger.get_instance().getLogger()
class EasyVC(VoiceChangerModel):
def __init__(self, params: VoiceChangerParams, slotInfo: RVCModelSlot):
logger.info("[Voice Changer] [EasyVC] Creating instance ")
self.voiceChangerType = "RVC"
self.deviceManager = DeviceManager.get_instance()
EmbedderManager.initialize(params)
PitchExtractorManager.initialize(params)
self.settings = EasyVCSettings()
self.params = params
# self.pitchExtractor = PitchExtractorManager.getPitchExtractor(self.settings.f0Detector, self.settings.gpu)
self.pipeline: Pipeline | None = None
self.audio_buffer: AudioInOut | None = None
self.pitchf_buffer: PitchfInOut | None = None
self.feature_buffer: FeatureInOut | None = None
self.prevVol = 0.0
self.slotInfo = slotInfo
# self.initialize()
def initialize(self):
logger.info("[Voice Changer][EasyVC] Initializing... ")
# pipelineの生成
try:
self.pipeline = createPipeline(self.params, self.slotInfo, self.settings.gpu, self.settings.f0Detector)
except PipelineCreateException as e: # NOQA
logger.error("[Voice Changer] pipeline create failed. check your model is valid.")
return
# その他の設定
logger.info("[Voice Changer] [EasyVC] Initializing... done")
def setSamplingRate(self, inputSampleRate, outputSampleRate):
self.inputSampleRate = inputSampleRate
self.outputSampleRate = outputSampleRate
# self.initialize()
def update_settings(self, key: str, val: int | float | str):
logger.info(f"[Voice Changer][RVC]: update_settings {key}:{val}")
if key in self.settings.intData:
setattr(self.settings, key, int(val))
if key == "gpu":
self.deviceManager.setForceTensor(False)
self.initialize()
elif key in self.settings.floatData:
setattr(self.settings, key, float(val))
elif key in self.settings.strData:
setattr(self.settings, key, str(val))
if key == "f0Detector" and self.pipeline is not None:
pitchExtractor = PitchExtractorManager.getPitchExtractor(self.settings.f0Detector, self.settings.gpu)
self.pipeline.setPitchExtractor(pitchExtractor)
else:
return False
return True
def get_info(self):
data = asdict(self.settings)
if self.pipeline is not None:
pipelineInfo = self.pipeline.getPipelineInfo()
data["pipelineInfo"] = pipelineInfo
else:
data["pipelineInfo"] = "None"
return data
def get_processing_sampling_rate(self):
return self.slotInfo.samplingRate
def generate_input(
self,
newData: AudioInOut,
crossfadeSize: int,
solaSearchFrame: int,
extra_frame: int,
):
# 16k で入ってくる。
inputSize = newData.shape[0]
newData = newData.astype(np.float32) / 32768.0
newFeatureLength = inputSize // 160 # hopsize:=160
if self.audio_buffer is not None:
# 過去のデータに連結
self.audio_buffer = np.concatenate([self.audio_buffer, newData], 0)
# if self.slotInfo.f0:
# self.pitchf_buffer = np.concatenate([self.pitchf_buffer, np.zeros(newFeatureLength)], 0)
self.feature_buffer = np.concatenate(
[
self.feature_buffer,
# np.zeros([newFeatureLength, self.slotInfo.embChannels]),
np.zeros([newFeatureLength, 768]),
],
0,
)
else:
self.audio_buffer = newData
# if self.slotInfo.f0:
# self.pitchf_buffer = np.zeros(newFeatureLength)
self.feature_buffer = np.zeros([newFeatureLength, 768])
convertSize = inputSize + crossfadeSize + solaSearchFrame + extra_frame
if convertSize % 160 != 0: # モデルの出力のホップサイズで切り捨てが発生するので補う。
convertSize = convertSize + (160 - (convertSize % 160))
outSize = int(((convertSize - extra_frame) / 16000) * self.slotInfo.samplingRate)
# バッファがたまっていない場合はzeroで補う
if self.audio_buffer.shape[0] < convertSize:
self.audio_buffer = np.concatenate([np.zeros([convertSize]), self.audio_buffer])
# if self.slotInfo.f0:
# self.pitchf_buffer = np.concatenate([np.zeros([convertSize // 160]), self.pitchf_buffer])
self.feature_buffer = np.concatenate(
[
np.zeros([convertSize // 160, 768]),
self.feature_buffer,
]
)
# 不要部分をトリミング
convertOffset = -1 * convertSize
featureOffset = convertOffset // 160
self.audio_buffer = self.audio_buffer[convertOffset:] # 変換対象の部分だけ抽出
# if self.slotInfo.f0:
# self.pitchf_buffer = self.pitchf_buffer[featureOffset:]
self.feature_buffer = self.feature_buffer[featureOffset:]
# 出力部分だけ切り出して音量を確認。(TODO:段階的消音にする)
cropOffset = -1 * (inputSize + crossfadeSize)
cropEnd = -1 * (crossfadeSize)
crop = self.audio_buffer[cropOffset:cropEnd]
vol = np.sqrt(np.square(crop).mean())
vol = max(vol, self.prevVol * 0.0)
self.prevVol = vol
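# vol is the RMS of the newly produced output region; inference() compares it against
# settings.silentThreshold to short-circuit conversion on silence. Note that
# max(vol, self.prevVol * 0.0) above effectively disables smoothing with the previous volume.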
return (
self.audio_buffer,
self.pitchf_buffer,
self.feature_buffer,
convertSize,
vol,
outSize,
)
def inference(self, receivedData: AudioInOut, crossfade_frame: int, sola_search_frame: int):
if self.pipeline is None:
logger.info("[Voice Changer] Pipeline is not initialized.")
raise PipelineNotInitializedException()
enableTimer = False
with Timer2("infer-easyvc", enableTimer) as t:
# Processing is done at 16 kHz (pitch, embed, (infer))
receivedData = cast(
AudioInOut,
resampy.resample(
receivedData,
self.inputSampleRate,
16000,
filter="kaiser_fast",
),
)
crossfade_frame = int((crossfade_frame / self.inputSampleRate) * 16000)
sola_search_frame = int((sola_search_frame / self.inputSampleRate) * 16000)
extra_frame = int((self.settings.extraConvertSize / self.inputSampleRate) * 16000)
# Generate the input data
data = self.generate_input(receivedData, crossfade_frame, sola_search_frame, extra_frame)
t.record("generate-input")
audio = data[0]
pitchf = data[1]
feature = data[2]
convertSize = data[3]
vol = data[4]
outSize = data[5]
if vol < self.settings.silentThreshold:
return np.zeros(convertSize).astype(np.int16) * np.sqrt(vol)
device = self.pipeline.device
audio = torch.from_numpy(audio).to(device=device, dtype=torch.float32)
repeat = 0
sid = self.settings.dstId
f0_up_key = self.settings.tran
index_rate = self.settings.indexRatio
protect = self.settings.protect
# if_f0 = 1 if self.slotInfo.f0 else 0
if_f0 = 0
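# EasyVC always runs the pipeline without F0 conditioning (if_f0 is forced to 0),
# which is why the pitchf buffer handling in generate_input above is commented out.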
# embOutputLayer = self.slotInfo.embOutputLayer
# useFinalProj = self.slotInfo.useFinalProj
t.record("pre-process")
try:
audio_out, self.pitchf_buffer, self.feature_buffer = self.pipeline.exec(
sid,
audio,
pitchf,
feature,
f0_up_key,
index_rate,
if_f0,
# 0,
self.settings.extraConvertSize / self.inputSampleRate if self.settings.silenceFront else 0.0,  # extraConvertSize in seconds, computed at the input sampling rate
repeat,
outSize,
)
t.record("pipeline-exec")
# result = audio_out.detach().cpu().numpy() * np.sqrt(vol)
result = audio_out[-outSize:].detach().cpu().numpy() * np.sqrt(vol)
result = cast(
AudioInOut,
resampy.resample(
result,
16000,
self.outputSampleRate,
filter="kaiser_fast",
),
)
t.record("resample")
return result
except DeviceCannotSupportHalfPrecisionException as e: # NOQA
logger.warn("[Device Manager] Device cannot support half precision. Fallback to float....")
self.deviceManager.setForceTensor(True)
self.initialize()
# raise e
return
def __del__(self):
del self.pipeline
# print("---------- REMOVING ---------------")
# remove_path = os.path.join("RVC")
# sys.path = [x for x in sys.path if x.endswith(remove_path) is False]
# for key in list(sys.modules):
# val = sys.modules.get(key)
# try:
# file_path = val.__file__
# if file_path.find("RVC" + os.path.sep) >= 0:
# # print("remove", key, file_path)
# sys.modules.pop(key)
# except Exception: # type:ignore
# # print(e)
# pass
def export2onnx(self):
modelSlot = self.slotInfo
if modelSlot.isONNX:
logger.warn("[Voice Changer] export2onnx, No pyTorch filepath.")
return {"status": "ng", "path": ""}
if self.pipeline is not None:
del self.pipeline
self.pipeline = None
torch.cuda.empty_cache()
self.initialize()
output_file_simple = export2onnx(self.settings.gpu, modelSlot)
return {
"status": "ok",
"path": f"/tmp/{output_file_simple}",
"filename": output_file_simple,
}
def get_model_current(self):
return [
{
"key": "defaultTune",
"val": self.settings.tran,
},
{
"key": "defaultIndexRatio",
"val": self.settings.indexRatio,
},
{
"key": "defaultProtect",
"val": self.settings.protect,
},
]
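For reference, the buffering arithmetic in generate_input above can be reproduced in isolation. The snippet below is a minimal sketch with made-up block sizes (all values are illustrative, not taken from the code); it only mirrors how convertSize is rounded up to the 160-sample hop and how outSize is rescaled to the model's sampling rate.

hop = 160                       # hop size at 16 kHz
inputSize = 4096                # samples received in this block (illustrative)
crossfadeSize = 2048            # illustrative
solaSearchFrame = 512           # illustrative
extra_frame = 16000             # extraConvertSize resampled to 16 kHz (illustrative)
model_sr = 40000                # slotInfo.samplingRate (illustrative)

convertSize = inputSize + crossfadeSize + solaSearchFrame + extra_frame
if convertSize % hop != 0:      # round up to the next multiple of the hop size
    convertSize += hop - (convertSize % hop)
outSize = int(((convertSize - extra_frame) / 16000) * model_sr)
print(convertSize, outSize)     # 22720 16800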

View File

@@ -0,0 +1,17 @@
import os
from data.ModelSlot import EasyVCModelSlot
from voice_changer.utils.LoadModelParams import LoadModelParams
from voice_changer.utils.ModelSlotGenerator import ModelSlotGenerator
class EasyVCModelSlotGenerator(ModelSlotGenerator):
@classmethod
def loadModel(cls, props: LoadModelParams):
slotInfo: EasyVCModelSlot = EasyVCModelSlot()
for file in props.files:
if file.kind == "easyVCModel":
slotInfo.modelFile = file.name
slotInfo.name = os.path.splitext(os.path.basename(slotInfo.modelFile))[0]
slotInfo.slotIndex = props.slot
return slotInfo

View File

@@ -0,0 +1,33 @@
from dataclasses import dataclass, field
from const import PitchExtractorType
@dataclass
class EasyVCSettings:
gpu: int = -9999
dstId: int = 0
f0Detector: PitchExtractorType = "rmvpe_onnx"  # e.g. dio, harvest, rmvpe_onnx
tran: int = 12
silentThreshold: float = 0.00001
extraConvertSize: int = 1024 * 4
indexRatio: float = 0
protect: float = 0.5
rvcQuality: int = 0
silenceFront: int = 1 # 0:off, 1:on
modelSamplingRate: int = 48000
speakers: dict[str, int] = field(default_factory=lambda: {})
intData = [
"gpu",
"dstId",
"tran",
"extraConvertSize",
"rvcQuality",
"silenceFront",
]
floatData = ["silentThreshold", "indexRatio", "protect"]
strData = ["f0Detector"]

View File

@@ -0,0 +1,237 @@
from typing import Any
import math
import torch
import torch.nn.functional as F
from torch.cuda.amp import autocast
from Exceptions import (
DeviceCannotSupportHalfPrecisionException,
DeviceChangingException,
HalfPrecisionChangingException,
NotEnoughDataExtimateF0,
)
from mods.log_control import VoiceChangaerLogger
from voice_changer.RVC.embedder.Embedder import Embedder
from voice_changer.RVC.inferencer.Inferencer import Inferencer
from voice_changer.RVC.inferencer.OnnxRVCInferencer import OnnxRVCInferencer
from voice_changer.RVC.inferencer.OnnxRVCInferencerNono import OnnxRVCInferencerNono
from voice_changer.RVC.pitchExtractor.PitchExtractor import PitchExtractor
from voice_changer.utils.Timer import Timer2
logger = VoiceChangaerLogger.get_instance().getLogger()
class Pipeline(object):
embedder: Embedder
inferencer: Inferencer
pitchExtractor: PitchExtractor
index: Any | None
big_npy: Any | None
# feature: Any | None
targetSR: int
device: torch.device
isHalf: bool
def __init__(
self,
embedder: Embedder,
inferencer: Inferencer,
pitchExtractor: PitchExtractor,
targetSR,
device,
isHalf,
):
self.embedder = embedder
self.inferencer = inferencer
self.pitchExtractor = pitchExtractor
logger.info("GENERATE INFERENCER" + str(self.inferencer))
logger.info("GENERATE EMBEDDER" + str(self.embedder))
logger.info("GENERATE PITCH EXTRACTOR" + str(self.pitchExtractor))
self.targetSR = targetSR
self.device = device
self.isHalf = isHalf
self.sr = 16000
self.window = 160
def getPipelineInfo(self):
inferencerInfo = self.inferencer.getInferencerInfo() if self.inferencer else {}
embedderInfo = self.embedder.getEmbedderInfo()
pitchExtractorInfo = self.pitchExtractor.getPitchExtractorInfo()
return {"inferencer": inferencerInfo, "embedder": embedderInfo, "pitchExtractor": pitchExtractorInfo, "isHalf": self.isHalf}
def setPitchExtractor(self, pitchExtractor: PitchExtractor):
self.pitchExtractor = pitchExtractor
def extractPitch(self, audio_pad, if_f0, pitchf, f0_up_key, silence_front):
try:
if if_f0 == 1:
pitch, pitchf = self.pitchExtractor.extract(
audio_pad,
pitchf,
f0_up_key,
self.sr,
self.window,
silence_front=silence_front,
)
# pitch = pitch[:p_len]
# pitchf = pitchf[:p_len]
pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
pitchf = torch.tensor(pitchf, device=self.device, dtype=torch.float).unsqueeze(0)
else:
pitch = None
pitchf = None
except IndexError as e: # NOQA
print(e)
import traceback
traceback.print_exc()
raise NotEnoughDataExtimateF0()
return pitch, pitchf
def extractFeatures(self, feats):
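# Run the embedder under torch.cuda.amp autocast so half precision is used when isHalf is set;
# an all-NaN result is treated as a sign that the device cannot support half precision.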
with autocast(enabled=self.isHalf):
try:
feats = self.embedder.extractFeatures(feats)
if torch.isnan(feats).all():
raise DeviceCannotSupportHalfPrecisionException()
return feats
except RuntimeError as e:
if "HALF" in e.__str__().upper():
raise HalfPrecisionChangingException()
elif "same device" in e.__str__():
raise DeviceChangingException()
else:
raise e
def infer(self, feats, p_len, pitch, pitchf, sid, out_size):
try:
with torch.no_grad():
with autocast(enabled=self.isHalf):
audio1 = self.inferencer.infer(feats, p_len, pitch, pitchf, sid, out_size)
audio1 = (audio1 * 32767.5).data.to(dtype=torch.int16)
return audio1
except RuntimeError as e:
if "HALF" in e.__str__().upper():
print("HalfPresicion Error:", e)
raise HalfPrecisionChangingException()
else:
raise e
def exec(
self,
sid,
audio, # torch.tensor [n]
pitchf, # np.array [m]
feature, # np.array [m, feat]
f0_up_key,
index_rate,
if_f0,
silence_front,
repeat,
out_size=None,
):
# print(f"pipeline exec input, audio:{audio.shape}, pitchf:{pitchf.shape}, feature:{feature.shape}")
# print(f"pipeline exec input, silence_front:{silence_front}, out_size:{out_size}")
enablePipelineTimer = False
with Timer2("Pipeline-Exec", enablePipelineTimer) as t: # NOQA
# Audio arrives at a 16000 Hz sampling rate; everything from here on is processed at 16000 Hz.
# self.t_pad = self.sr * repeat  # 1 second
# self.t_pad_tgt = self.targetSR * repeat  # 1 second; trimming at output time (output is at the model's sampling rate)
audio = audio.unsqueeze(0)
quality_padding_sec = (repeat * (audio.shape[1] - 1)) / self.sr  # The reflect padding must be smaller than the original size.
self.t_pad = round(self.sr * quality_padding_sec)  # audio added before and after
self.t_pad_tgt = round(self.targetSR * quality_padding_sec)  # audio added before and after; trimmed at output time (output is at the model's sampling rate)
audio_pad = F.pad(audio, (self.t_pad, self.t_pad), mode="reflect").squeeze(0)
p_len = audio_pad.shape[0] // self.window
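# p_len is the number of 160-sample (10 ms) frames in the padded audio; it is refined again
# below once the embedder's actual frame count is known.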
sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
# # Turn silence_front off when RVC Quality is on.
# silence_front = silence_front if repeat == 0 else 0
# pitchf = pitchf if repeat == 0 else np.zeros(p_len)
# out_size = out_size if repeat == 0 else None
# Adjust tensor shape/type
feats = audio_pad
if feats.dim() == 2: # double channels
feats = feats.mean(-1)
assert feats.dim() == 1, feats.dim()
feats = feats.view(1, -1)
t.record("pre-process")
# Pitch detection
pitch, pitchf = self.extractPitch(audio_pad, if_f0, pitchf, f0_up_key, silence_front)
t.record("extract-pitch")
# embedding
feats = self.extractFeatures(feats)
t.record("extract-feats")
feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
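# The embedder emits one feature frame per 320 samples (20 ms at 16 kHz); doubling the frame
# rate here aligns the features with the 160-sample hop used for pitch and inference.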
# if protect < 0.5 and search_index:
# feats0 = feats.clone()
# Adjust pitch length
p_len = audio_pad.shape[0] // self.window
if feats.shape[1] < p_len:
p_len = feats.shape[1]
if pitch is not None and pitchf is not None:
pitch = pitch[:, :p_len]
pitchf = pitchf[:, :p_len]
feats_len = feats.shape[1]
if pitch is not None and pitchf is not None:
pitch = pitch[:, -feats_len:]
pitchf = pitchf[:, -feats_len:]
p_len = torch.tensor([feats_len], device=self.device).long()
# apply silent front for inference
if type(self.inferencer) in [OnnxRVCInferencer, OnnxRVCInferencerNono]:
npyOffset = math.floor(silence_front * 16000) // 360
feats = feats[:, npyOffset * 2 :, :] # NOQA
feats_len = feats.shape[1]
if pitch is not None and pitchf is not None:
pitch = pitch[:, -feats_len:]
pitchf = pitchf[:, -feats_len:]
p_len = torch.tensor([feats_len], device=self.device).long()
t.record("mid-precess")
# Run inference
audio1 = self.infer(feats, p_len, pitch, pitchf, sid, out_size)
t.record("infer")
feats_buffer = feats.squeeze(0).detach().cpu()
if pitchf is not None:
pitchf_buffer = pitchf.squeeze(0).detach().cpu()
else:
pitchf_buffer = None
del p_len, pitch, pitchf, feats
# torch.cuda.empty_cache()
# The sampling rate of infer's output is the model's sampling rate.
# Input to the pipeline is 16 kHz for the hubert embedder.
if self.t_pad_tgt != 0:
offset = self.t_pad_tgt
end = -1 * self.t_pad_tgt
audio1 = audio1[offset:end]
del sid
t.record("post-process")
# torch.cuda.empty_cache()
# print("EXEC AVERAGE:", t.avrSecs)
return audio1, pitchf_buffer, feats_buffer
def __del__(self):
del self.embedder
del self.inferencer
del self.pitchExtractor
print("Pipeline has been deleted")

Some files were not shown because too many files have changed in this diff