I have the following playground that I was tinkering with:
After 100 epochs, I get the following decision boundary.
However, I notice something weird. Look at the outputs of the hidden layers in the picture above: they are all white/blue (>= 0), which makes sense because I am using ReLU. However, all three of the final weights are negative. That should mean the output is always non-positive, since non-negative inputs multiplied by negative weights can only give non-positive values. Yet the decision boundary that is produced has plenty of regions >= 0 (in blue). How does this make sense? Is there normalization, or a bias being added to the output neuron? If so, why is it not shown in the diagram?
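A minimal sketch of the arithmetic in question, assuming the output neuron has a bias term (the values below are made up for illustration and are not taken from the Playground's code): with ReLU activations the hidden outputs are all >= 0, so a purely negative weight vector can only produce a non-positive weighted sum, but a positive bias can still push the final output above zero.

```python
import numpy as np

# Hypothetical example values, not the Playground's actual parameters.
h = np.array([0.0, 0.8, 1.5])     # ReLU hidden activations, all >= 0
w = np.array([-0.4, -0.2, -0.1])  # all output weights negative

weighted_sum = w @ h              # <= 0 by construction
print(weighted_sum)               # -0.31

b = 0.9                           # assumed positive bias on the output neuron
output = weighted_sum + b
print(output)                     # 0.59 -> positive, i.e. rendered as blue
```

If such a bias exists, that would explain the blue regions; the question is whether (and why) it is omitted from the diagram.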