move data soft link (#1010)

* [feature]:add dataset class

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [dev]:combine agent and TTS inference

* [feature]:update inference

* [feature]:update uv.lock

* [Merge]:merge upstream/main

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [fix]:remove unused files

* [fix]:remove unused files

* [fix]:remove unused files

* [fix]:fix infer bugs

* [docs]:update introduction and optimize front-end appearance

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [docs]:update README for OpenAudio-S1

* [docs]:update docs

* [docs]:Update video

* [docs]:fix video

* [docs]:fix video

* [fix]:fix timbre problem

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [fix]:remove unused files

* [fix]:move unused files

* [fix]:fix gitignore

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Authored by Whale and Dolphin on 2025-06-05 19:33:49 +08:00; committed by GitHub
parent 9021a57dce
commit e4d71110b7
GPG Key ID: B5690EEEBB952194
3 changed files with 6 additions and 25 deletions

.gitignore (12 changes)

@@ -62,12 +62,12 @@ venv.bak/
 # Data and Model Files
 # --------------------
-/data/
-/results/
-/checkpoints/
-/references/
-/demo-audios/
-/example/
+data/
+results/
+checkpoints/
+references/
+demo-audios/
+example/
 filelists/
 *.filelist
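
Note: dropping the leading slash widens each rule. An anchored pattern like /data/ only ignores a data directory at the repository root, while data/ ignores a directory named data at any depth. A minimal sketch for checking which rule matches locally (assumes git is on PATH; the sample paths are placeholders, not files shipped with the repo):

# Hypothetical check, not part of this commit: ask git which ignore rule
# matches a given path after the .gitignore change above.
import subprocess

for path in ["data/", "some/nested/data/", "checkpoints/"]:
    # `git check-ignore -v` prints "<source>:<line>:<pattern>" plus the path
    # when a rule matches, and prints nothing (non-zero exit) when it does not.
    result = subprocess.run(
        ["git", "check-ignore", "-v", path],
        capture_output=True,
        text=True,
    )
    print(path, "->", result.stdout.strip() or "not ignored")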

data (1 deletion)

@@ -1 +0,0 @@
-/mnt/users/whaledolphin/data
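
With the tracked link gone and data/ now ignored, each user can point data at their own storage. A minimal local-setup sketch, assuming a placeholder dataset path (nothing here is defined by this commit):

# Hypothetical helper, not part of this commit: recreate a local `data` link
# now that the repository no longer tracks one.
import os

DATA_SOURCE = "/path/to/your/datasets"  # placeholder; adjust to your machine

if not os.path.lexists("data"):
    # The updated .gitignore covers data/, so this link stays untracked.
    os.symlink(DATA_SOURCE, "data")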


@@ -365,24 +365,6 @@ def generate_long(
     texts = split_text(text, chunk_length) if iterative_prompt else [text]
     max_length = model.config.max_seq_len
-    # if use_prompt:
-    #     base_content_sequence.append(
-    #         [
-    #             TextPart(text=prompt_text[0]),
-    #             VQPart(codes=prompt_tokens[0]),
-    #         ],
-    #         add_end=True,
-    #     )
-    # for text in texts:
-    #     content_sequence = ContentSequence(modality=None)
-    #     base_content_sequence.append(
-    #         [
-    #             TextPart(text=text),
-    #         ],
-    #         add_end=True,
-    #     )
     if use_prompt:
         for t, c in zip(prompt_text, prompt_tokens):
             base_content_sequence.append(