
Drop Connect

The paper Regularization of Neural Networks using DropConnect (Wan et al., 2013) introduces a regularization technique similar to Dropout, but instead of dropping out individual units, it drops individual connections between units. This is done by applying a mask, sampled from a Bernoulli distribution, to the weights of the network.


NOT READY YET!

Training

Let $X \in \mathbb{R}^{n \times d}$ be a tensor with $n$ examples and $d$ features, and let $W \in \mathbb{R}^{d \times l}$ be a weight tensor with $l$ output units.

For training, a mask matrix $M$, sampled from a Bernoulli distribution, is applied to the weight matrix $W$ via the Hadamard product, so that individual connections are dropped instead of whole neurons being turned off as in Dropout.

For a single example, the implementation is straightforward: just apply a mask $M$ to the weight tensor $W$. However, according to the paper: "A key component to successfully training with DropConnect is the selection of a different mask for each training example. Selecting a single mask for a subset of training examples, such as a mini-batch of 128 examples, does not regularize the model enough in practice."
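
For instance, a minimal sketch of the single-example case (the shapes follow the notation above; this is illustrative code, not the repository's module):

```python
import torch

d, l, p = 4, 3, 0.5                               # input features, output units, drop probability
x = torch.randn(d)                                # a single example
W = torch.randn(d, l)                             # weight matrix, laid out as in the formula below
M = torch.bernoulli(torch.full((d, l), 1.0 - p))  # one Bernoulli sample per connection
y = (x @ (M * W)) / (1.0 - p)                     # Hadamard-masked weights, then the linear map
```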

Therefore, a mask tensor $M \in \mathbb{R}^{n \times d \times l}$ must be chosen, one mask per example, so the linear layer with DropConnect can be implemented as:

$$ \text{DropConnect}(X, W, M) = \begin{bmatrix} \frac{1}{1-p}\begin{bmatrix} x^1{}_1 & x^1{}_2 & \cdots & x^1{}_d \end{bmatrix} \left(\begin{bmatrix} m^{11}{}_1 & m^{11}{}_2 & \cdots & m^{11}{}_l \\ m^{12}{}_1 & m^{12}{}_2 & \cdots & m^{12}{}_l \\ \vdots & \vdots & \ddots & \vdots \\ m^{1d}{}_1 & m^{1d}{}_2 & \cdots & m^{1d}{}_l \\ \end{bmatrix} \odot \begin{bmatrix} w^1{}_1 & w^1{}_2 & \cdots & w^1{}_l \\ w^2{}_1 & w^2{}_2 & \cdots & w^2{}_l \\ \vdots & \vdots & \ddots & \vdots \\ w^d{}_1 & w^d{}_2 & \cdots & w^d{}_l \\ \end{bmatrix} \right) \\ \\ \\ \vdots \\ \\ \frac{1}{1-p}\begin{bmatrix} x^n{}_1 & x^n{}_2 & \cdots & x^n{}_d \end{bmatrix} \left(\begin{bmatrix} m^{n1}{}_1 & m^{n1}{}_2 & \cdots & m^{n1}{}_l \\ m^{n2}{}_1 & m^{n2}{}_2 & \cdots & m^{n2}{}_l \\ \vdots & \vdots & \ddots & \vdots \\ m^{nd}{}_1 & m^{nd}{}_2 & \cdots & m^{nd}{}_l \\ \end{bmatrix} \odot \begin{bmatrix} w^1{}_1 & w^1{}_2 & \cdots & w^1{}_l \\ w^2{}_1 & w^2{}_2 & \cdots & w^2{}_l \\ \vdots & \vdots & \ddots & \vdots \\ w^d{}_1 & w^d{}_2 & \cdots & w^d{}_l \\ \end{bmatrix} \right) \\ \end{bmatrix} $$
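
A minimal PyTorch sketch of such a layer is shown below. The class name DropConnectLinear is assumed here rather than taken from this repository, the weight is stored in PyTorch's usual (out_features, in_features) layout (the transpose of $W$ above), and at evaluation time the sketch simply uses the full weights, which is a simplification of the Gaussian moment-matching inference described in the paper.

```python
import torch
from torch import nn


class DropConnectLinear(nn.Module):
    """Linear layer with DropConnect: a fresh Bernoulli mask per training example."""

    def __init__(self, in_features: int, out_features: int, p: float = 0.5):
        super().__init__()
        self.p = p
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # At evaluation time (or with p = 0) this behaves as an ordinary linear layer.
        if not self.training or self.p == 0.0:
            return x @ self.weight.t() + self.bias
        n = x.size(0)
        # One mask per example: shape (n, out_features, in_features).
        keep = torch.full((n, *self.weight.shape), 1.0 - self.p, device=x.device)
        mask = torch.bernoulli(keep)
        # Hadamard product with the mask, plus the 1 / (1 - p) scaling from the formula above.
        masked_weight = mask * self.weight / (1.0 - self.p)
        # Batched matrix-vector product: (n, l, d) @ (n, d, 1) -> (n, l, 1).
        return torch.bmm(masked_weight, x.unsqueeze(-1)).squeeze(-1) + self.bias
```

A layer like this can be swapped in for nn.Linear; only the training-time forward pass differs.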

Backpropagation

To update the weight matrix $W$ in a DropConnect layer, the mask is applied to the gradient so that only the elements that were active in the forward pass are updated. This is already handled by automatic differentiation in PyTorch: if $J$ is the gradient coming from the linear operation, the gradient propagated by the Hadamard product with respect to $W$ is:

$$ J \odot M $$

So there is no need to implement an additional backpropagation operation; the Hadamard product already provided by PyTorch is enough.
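
This can be checked directly with the single-example sketch from above (again, illustrative code only):

```python
import torch

d, l, p = 4, 3, 0.5
x = torch.randn(d)
W = torch.randn(d, l, requires_grad=True)
M = torch.bernoulli(torch.full((d, l), 1.0 - p))

y = (x @ (M * W)) / (1.0 - p)
y.sum().backward()

# Wherever the mask dropped a connection, the gradient of W is zero as well,
# i.e. autograd already produces J ⊙ M without any extra backward logic.
assert torch.all(W.grad[M == 0] == 0)
```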

About

A PyTorch implementation of DropConnect layers.
