
Conversation

@lucaerion
Contributor

No description provided.

lucaerion added 3 commits May 27, 2025 15:47
Refactored attentions, mel_processing, and language_base for improved clarity, input handling, and logging. Fixed minor bugs and improved docstrings and code style for maintainability.
Updated and refactored all melo test files to align with recent core module changes. Improved test coverage and structure for reliability.
@w4ffl35 w4ffl35 added 🛸 enhancement New feature or request 😵‍💫 QA Something needs testing distribution Dev Environment labels May 27, 2025
@w4ffl35 w4ffl35 moved this to In Progress in AI Runner May 27, 2025
lucaerion added 7 commits May 27, 2025 19:09
Add more comprehensive patching and input data to OpenVoice handler unit test to ensure deterministic and isolated test runs.
Guard QCoreApplication patching and use valid Chinese input to avoid assertion errors. Improves test reliability across environments.
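A guard of this kind can be sketched as follows (a minimal illustration; `guarded_qapp_patch` is a hypothetical helper name, not the PR's actual code):

```python
from unittest.mock import MagicMock, patch

def guarded_qapp_patch():
    """Return a patcher for QCoreApplication.instance, or None when
    PySide6 is not importable, so the suite also runs headless."""
    try:
        import PySide6.QtCore  # noqa: F401
    except ImportError:
        return None  # environment without Qt: skip the patch entirely
    return patch(
        "PySide6.QtCore.QCoreApplication.instance",
        return_value=MagicMock(),
    )

# Usage inside a test: only start/stop the patcher when it exists.
patcher = guarded_qapp_patch()
if patcher is not None:
    patcher.start()
# ... exercise the handler under test ...
if patcher is not None:
    patcher.stop()
```

This keeps a single test module valid both on developer machines with Qt installed and in headless CI environments.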
Add more assertions, dummy language modules, and edge case coverage for Cleaner. Accepts more flexible outputs for future-proofing.
Patch expanders in the correct module, accept more flexible output shapes, and clarify assertions for tones and word2ph.
Import QObject directly from PySide6.QtCore and update DummyParent usage for SubGraph repr test. Ensures compatibility with latest PySide6.
…or test

Add assertion to guarantee tensor w is not empty before model call, preventing shape errors in edge cases.
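A pre-call guard along these lines might look like the following sketch (`assert_nonempty` is an illustrative name; the PR inlines the assertion in the test itself):

```python
def assert_nonempty(w):
    """Fail fast with a clear message if the input tensor/sequence is
    empty, instead of surfacing as an opaque shape error in the model."""
    # Works for torch tensors (numel) and plain sequences (len) alike.
    n = w.numel() if hasattr(w, "numel") else len(w)
    assert n > 0, "expected non-empty tensor w before model call"
    return w

# A zero-length input now fails at the guard, not deep inside the model.
assert_nonempty([0.1, 0.2, 0.3])  # passes through unchanged
```

Failing early like this turns a cryptic downstream shape mismatch into a one-line, named assertion at the call site.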
Add comprehensive unit tests for melo.text language modules:
- test_chinese.py: Chinese handler
- test_english.py: English handler
- test_language_base.py: LanguageBase abstract and utilities
- test_multilang.py: ChineseMix, Japanese, Korean, French, Spanish handlers

These tests cover normalization, phoneme/tones extraction, error and edge cases, and dictionary/model logic. Ensures correctness and robustness of language processing components.
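The general shape of such tests can be sketched roughly as below (the `normalize` stand-in and its behavior are assumptions for illustration; melo's real handlers expose richer APIs):

```python
import unittest

def normalize(text):
    """Stand-in for a language handler's text-normalization step:
    collapse whitespace and lowercase (illustrative behavior only)."""
    return " ".join(text.split()).lower()

class TestLanguageNormalization(unittest.TestCase):
    def test_collapses_whitespace(self):
        self.assertEqual(normalize("Hello   World"), "hello world")

    def test_empty_string_edge_case(self):
        # Edge case: empty input should not raise, just round-trip.
        self.assertEqual(normalize(""), "")
```

One small, named test per behavior (normalization, phoneme extraction, error paths) keeps failures easy to localize across the per-language modules.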
@w4ffl35 w4ffl35 requested a review from Copilot May 28, 2025 15:50
Contributor

Copilot AI left a comment


Pull Request Overview

This PR enhances the unit tests for the openvoice_handler by mocking file I/O and model loading to verify that the handler’s load method correctly initializes its internal model.

  • Added a builtins.open mock with an inline JSON fixture to simulate the handler’s config file.
  • Introduced a torch.load mock to stub out actual model deserialization.
  • Expanded the test_load signature and assertion to ensure handler.model is set.
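The file-I/O half of that strategy can be sketched like so (the config keys and the `load_handler_config` helper are illustrative assumptions; a matching `patch` on `torch.load` stubs out model deserialization the same way):

```python
import json
from unittest.mock import mock_open, patch

# Inline JSON fixture standing in for the handler's on-disk config file.
FAKE_CONFIG = json.dumps({"data": {"sampling_rate": 22050, "n_speakers": 1}})

def load_handler_config(path):
    """Hypothetical stand-in for the config-reading step of the
    handler's load method."""
    with open(path) as f:
        return json.load(f)

def demo():
    # Patching builtins.open keeps the test off the real filesystem,
    # so runs are deterministic regardless of checkout contents.
    with patch("builtins.open", mock_open(read_data=FAKE_CONFIG)):
        cfg = load_handler_config("checkpoints/config.json")
    return cfg["data"]["sampling_rate"]
```

Because `mock_open` serves the fixture for any path, the test never depends on checkpoint files being present in the working tree.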
Comments suppressed due to low confidence (2)

src/airunner/handlers/tts/tests/test_openvoice_handler.py:61

  • [nitpick] The test name test_load is generic and doesn't indicate what behavior you're verifying. Consider renaming it to something like test_load_initializes_model_with_config for clarity.
def test_load(

src/airunner/handlers/tts/tests/test_openvoice_handler.py:72

  • This assertion only checks that the model is not None; it would be valuable to add more assertions verifying that key configuration fields (e.g., sampling_rate, n_speakers) are correctly applied to the handler.
self.assertIsNotNone(self.handler.model)
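A stricter variant of that assertion might look like this sketch (attribute names such as `sampling_rate` and `n_speakers` are assumptions about the handler's model object):

```python
from types import SimpleNamespace

def check_model_config(handler, expected):
    """Assert that key config fields were actually applied to the
    loaded model, not merely that a model object exists."""
    assert handler.model is not None, "model was not initialized"
    for key, value in expected.items():
        assert getattr(handler.model, key) == value, f"{key} mismatch"

# Stand-in handler object demonstrating the check:
dummy = SimpleNamespace(
    model=SimpleNamespace(sampling_rate=22050, n_speakers=1)
)
check_model_config(dummy, {"sampling_rate": 22050, "n_speakers": 1})
```

Per-field assertions like these catch a model that loaded but silently ignored its config, which a bare `assertIsNotNone` cannot.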

@w4ffl35 w4ffl35 moved this from In Progress to Done in AI Runner Jun 5, 2025
@w4ffl35 w4ffl35 moved this from Done to Todo in AI Runner Jun 5, 2025
@w4ffl35 w4ffl35 removed the status in AI Runner Jun 5, 2025
@w4ffl35 w4ffl35 closed this Oct 27, 2025