Feature add tests for melo #1665
Conversation
Refactored attentions, mel_processing, and language_base for improved clarity, input handling, and logging. Fixed minor bugs and improved docstrings and code style for maintainability.
Updated and refactored all melo test files to align with recent core module changes. Improved test coverage and structure for reliability.
Add more comprehensive patching and input data to OpenVoice handler unit test to ensure deterministic and isolated test runs.
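The mocking side of this isolation is sketched after the Pull Request Overview below; the other half of a deterministic run is pinning seeds and input data. A minimal sketch under that assumption (the seed value and input string are illustrative, not from the PR):

```python
import random

import torch

# Pin every source of randomness and use constant input data so repeated
# test runs exercise identical code paths.
SEED = 1234
random.seed(SEED)
torch.manual_seed(SEED)

FIXED_TEXT = "Hello world."  # constant input for the handler under test
```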
Guard QCoreApplication patching and use valid Chinese input to avoid assertion errors. Improves test reliability across environments.
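One reading of "guard QCoreApplication patching" is to stub the application object only when no real instance is running; a sketch under that assumption (the helper name and sample input are hypothetical):

```python
import contextlib
from unittest.mock import MagicMock, patch

from PySide6.QtCore import QCoreApplication

VALID_ZH_INPUT = "你好，世界。"  # valid Chinese input avoids downstream assertion errors


@contextlib.contextmanager
def guarded_qapp_patch():
    # Only stub the application object when a real one is absent; otherwise
    # reuse it, so the test behaves identically inside and outside Qt hosts.
    if QCoreApplication.instance() is None:
        with patch.object(QCoreApplication, "instance", return_value=MagicMock()):
            yield
    else:
        yield
```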
Add more assertions, dummy language modules, and edge case coverage for Cleaner. Accepts more flexible outputs for future-proofing.
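One way to supply "dummy language modules" is to register stand-ins in sys.modules before Cleaner resolves them; the module path and the attributes Cleaner expects are assumptions here:

```python
import sys
import types

# Stand-in language module so Cleaner tests never import a heavy real
# implementation; the attributes mirror what a language module exposes.
dummy = types.ModuleType("melo.text.dummy_lang")
dummy.text_normalize = lambda text: text    # identity normalizer
dummy.g2p = lambda text: (["_"], [0], [1])  # minimal phones/tones/word2ph
sys.modules["melo.text.dummy_lang"] = dummy
```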
Patch expanders in the correct module, accept more flexible output shapes, and clarify assertions for tones and word2ph.
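"Patch expanders in the correct module" is the standard mock rule: patch the name where it is looked up, not where it is defined. A sketch with hypothetical module paths:

```python
from unittest.mock import patch

# Wrong: patches the definition site; the module under test already holds
# its own reference to the expander and never sees this replacement.
#   patch("melo.text.expand.normalize_numbers")

# Right: patch the name inside the module under test, where it is looked up.
with patch("melo.text.english.normalize_numbers", side_effect=lambda s: s):
    ...  # exercise the English handler; the expander is now a pass-through
```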
Import QObject directly from PySide6.QtCore and update DummyParent usage for SubGraph repr test. Ensures compatibility with latest PySide6.
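A sketch of the DummyParent this commit describes, with QObject imported directly from PySide6.QtCore; everything beyond the import and the base class is illustrative:

```python
from PySide6.QtCore import QObject


class DummyParent(QObject):
    """Minimal QObject parent for the SubGraph repr test."""

    def __init__(self):
        super().__init__()
        self.setObjectName("dummy_parent")  # gives the repr something stable to show
```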
…or test: Add assertion to guarantee tensor `w` is not empty before the model call, preventing shape errors in edge cases.
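A sketch of the guard this commit adds, assuming `w` is a torch tensor handed to the model; the function and names are hypothetical:

```python
import torch


def run_model(model, w: torch.Tensor) -> torch.Tensor:
    # Fail fast with a clear message instead of an opaque shape error
    # surfacing deep inside the model when w is empty.
    assert w.numel() > 0, f"expected non-empty tensor w, got shape {tuple(w.shape)}"
    return model(w)
```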
Add comprehensive unit tests for melo.text language modules:
- test_chinese.py: Chinese handler
- test_english.py: English handler
- test_language_base.py: LanguageBase abstract and utilities
- test_multilang.py: ChineseMix, Japanese, Korean, French, Spanish handlers

These tests cover normalization, phoneme/tones extraction, error and edge cases, and dictionary/model logic. Ensures correctness and robustness of language processing components.
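A sketch of the shape these language tests can take; `melo.text.chinese` and the `text_normalize`/`g2p` signatures follow the common Bert-VITS2-style layout and may differ from the PR's actual code:

```python
import unittest

from melo.text import chinese  # module under test


class TestChineseHandler(unittest.TestCase):
    def test_normalize_and_g2p_are_consistent(self):
        text = chinese.text_normalize("你好，世界！")
        phones, tones, word2ph = chinese.g2p(text)
        # Core invariants: one tone per phone, and word2ph must partition
        # the phone sequence exactly.
        self.assertEqual(len(tones), len(phones))
        self.assertEqual(sum(word2ph), len(phones))
```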
Pull Request Overview
This PR enhances the unit tests for the openvoice_handler by mocking file I/O and model loading to verify that the handler’s load method correctly initializes its internal model.
- Added a `builtins.open` mock with an inline JSON fixture to simulate the handler's config file.
- Introduced a `torch.load` mock to stub out actual model deserialization.
- Expanded the `test_load` signature and assertion to ensure `handler.model` is set (a sketch of this setup follows below).
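A minimal sketch of that setup: an inline JSON fixture served through a `builtins.open` mock plus a stubbed `torch.load`. `FakeHandler` and the config keys are stand-ins, not the PR's real handler:

```python
import json
import unittest
from unittest.mock import MagicMock, mock_open, patch

import torch

FAKE_CONFIG = json.dumps({"data": {"sampling_rate": 22050, "n_speakers": 1}})


class FakeHandler:
    """Stand-in for the real handler; just enough to show the mocked flow."""

    model = None

    def load(self):
        with open("config.json") as f:             # served by the mocked open
            self.config = json.load(f)
        self.model = torch.load("checkpoint.pth")  # served by the mocked torch.load


class TestLoadWithMockedIO(unittest.TestCase):
    @patch("torch.load", return_value=MagicMock())             # stub deserialization
    @patch("builtins.open", mock_open(read_data=FAKE_CONFIG))  # inline JSON fixture
    def test_load_initializes_model_with_config(self, mock_torch_load):
        handler = FakeHandler()
        handler.load()
        mock_torch_load.assert_called_once()
        self.assertIsNotNone(handler.model)
        self.assertEqual(handler.config["data"]["sampling_rate"], 22050)
```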
Comments suppressed due to low confidence (2)
src/airunner/handlers/tts/tests/test_openvoice_handler.py:61
- [nitpick] The test name `test_load` is generic and doesn't indicate what behavior you're verifying. Consider renaming it to something like `test_load_initializes_model_with_config` for clarity.
def test_load(
src/airunner/handlers/tts/tests/test_openvoice_handler.py:72
This assertion only checks that the model is not None; it would be valuable to add more assertions verifying that key configuration fields (e.g., `sampling_rate`, `n_speakers`) are correctly applied to the handler.
self.assertIsNotNone(self.handler.model)
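Continuing the test body shown above, the suggested assertions might look like this; `sampling_rate` and `n_speakers` as handler attributes, and their values, are assumptions:

```python
self.assertIsNotNone(self.handler.model)
self.assertEqual(self.handler.sampling_rate, 22050)  # hypothetical attribute/value
self.assertEqual(self.handler.n_speakers, 1)         # hypothetical attribute/value
```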