
Conversation

VirenS13117

Use bias gradients (b1_grad, b2_grad, etc.) instead of bias values (b1, b2, etc.) in momentum updates. This critical fix ensures proper backpropagation and training convergence in the 2-layer autoencoder.
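
For illustration, a minimal sketch of what the fix amounts to (in Python, since the DML script itself isn't quoted here; the names `b1`, `b1_grad`, `upd_b1`, `mu`, and `lr` are assumptions based on the description above):

```python
import numpy as np

# Assumed hyperparameters for illustration
lr = 0.01  # learning rate
mu = 0.9   # momentum coefficient

# One bias vector of the 2-layer autoencoder and its momentum buffer
b1 = np.zeros(64)
upd_b1 = np.zeros(64)
b1_grad = np.random.randn(64)  # stand-in for the backpropagated gradient

# Buggy update (what this PR removes): the bias VALUE is used,
# so the step ignores the gradient entirely:
#   upd_b1 = mu * upd_b1 - lr * b1

# Fixed update: the bias GRADIENT drives the momentum step:
upd_b1 = mu * upd_b1 - lr * b1_grad
b1 = b1 + upd_b1
```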


🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
@j143
Member

j143 commented Sep 24, 2025

Hi @VirenS13117, thanks for looking into this. Have you verified the fix? In which scenario did you find this bug?

@Baunsgaard
Contributor

It is definitely a bug, and a nice catch. Will merge once I verify the GitHub Actions.

@VirenS13117
Author

@j143 It's a logical bug: the momentum update was taking the bias values instead of the bias gradients.
