Conversation


@chadf chadf commented Jul 13, 2025

Currently the entire model is always loaded into memory to compute its hash, and loaded again if a new model is downloaded. This can significantly increase memory usage with large models, since the full file is read into memory only to be discarded immediately after hashing.
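The usual fix is to hash the file incrementally in fixed-size chunks so peak memory stays constant regardless of model size. A minimal Python sketch (the function name, chunk size, and choice of SHA-256 are illustrative assumptions, not taken from this PR):

```python
import hashlib

def hash_file_streaming(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Compute a SHA-256 digest by reading the file in chunks,
    so memory use is bounded by chunk_size rather than file size."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # EOF
                break
            h.update(chunk)
    return h.hexdigest()
```

The resulting digest is identical to hashing the whole file at once; only the peak memory footprint changes.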
