
Final output values do not make mathematical sense. #194

@jeffreywolberg

Description


I have the following playground that I was tinkering with:

https://playground.tensorflow.org/#activation=relu&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=15&networkShape=3&seed=0.97439&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false

After 100 epochs, I get the following decision boundary:

[screenshot: decision boundary and network weights after 100 epochs]

However, I notice something weird. Look at the outputs of the hidden layer in the picture above: they are all white/blue (>= 0, which makes sense because I am using ReLU). Yet all 3 weights going into the output neuron are negative. That should mean the output is always negative, since non-negative activations multiplied by negative weights can only give non-positive values. But the decision boundary shown contains plenty of values >= 0 (in blue). How does this make sense? Is there normalization or some bias being added to the output neuron? If so, why is it not shown in the diagram?
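Here is a minimal numeric sketch of what I mean, assuming the output neuron is just a weighted sum of the hidden activations plus an optional bias (the weight/bias values below are made up for illustration, not the actual playground values):

```python
import numpy as np

# Made-up values for illustration only (not the actual playground weights).
hidden = np.array([0.8, 0.0, 1.3])     # ReLU outputs, always >= 0
w_out  = np.array([-0.5, -1.2, -0.3])  # all three output weights negative

# Without a bias, the output can never be positive:
print(hidden @ w_out)                  # -0.79  (<= 0 for any non-negative hidden vector)

# With a bias on the output neuron, positive outputs become possible:
b_out = 1.5                            # hypothetical bias value
print(hidden @ w_out + b_out)          # 0.71   (> 0)
```

So a bias (or some rescaling) on the output neuron would explain the blue regions, but nothing like that appears as a visible weight line in the diagram.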
