Conversation

@rvinas commented Jan 18, 2021

No description provided.

@arrigonialberto86 (Owner) commented
I see that in the previous implementation `random.normal` returned a Tensor; changing this to `Variable` effectively makes it a trainable parameter.
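This distinction can be checked directly: `tf.GradientTape` tracks a `tf.Variable` automatically, while a plain tensor produced by `tf.random.normal` receives no gradient unless explicitly watched. A minimal sketch (variable names are illustrative, not from the repo):

```python
import tensorflow as tf

# Plain tensor: a fixed constant, not tracked by the gradient tape.
fixed = tf.random.normal((1, 4, 8))
# Variable: a trainable parameter, watched automatically by the tape.
seeds = tf.Variable(tf.random.normal((1, 4, 8)), name="seed_vectors")

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(seeds) + tf.reduce_sum(fixed)

grads = tape.gradient(loss, [seeds, fixed])
assert grads[0] is not None  # the Variable receives a gradient
assert grads[1] is None      # the plain Tensor does not
```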

@rvinas (Author) commented Jan 18, 2021

Exactly. I believe both the seed vectors and inducing points are trainable in the paper, right?

Many thanks for the implementation!

@michaelpoluektov commented Jun 24, 2023

From the official PyTorch implementation:

class PMA(nn.Module):
    def __init__(self, dim, num_heads, num_seeds, ln=False):
        super(PMA, self).__init__()
        self.S = nn.Parameter(torch.Tensor(1, num_seeds, dim))
        nn.init.xavier_uniform_(self.S)
        self.mab = MAB(dim, dim, dim, num_heads, ln=ln)

    def forward(self, X):
        return self.mab(self.S.repeat(X.size(0), 1, 1), X)

They're using a trainable `nn.Parameter`.
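For reference, the same pattern in a TensorFlow/Keras layer would use `add_weight` with Glorot (Xavier) uniform initialisation, which Keras registers as a trainable variable. A hedged sketch (the class and names here are illustrative, not taken from the repo):

```python
import tensorflow as tf

class PMASeeds(tf.keras.layers.Layer):
    """Illustrative Keras analogue of the PMA seed vectors above:
    add_weight plays the role of nn.Parameter + xavier_uniform_."""

    def __init__(self, num_seeds, dim):
        super().__init__()
        self.S = self.add_weight(
            name="S",
            shape=(1, num_seeds, dim),
            initializer="glorot_uniform",  # Xavier uniform
            trainable=True,
        )

    def call(self, X):
        # Broadcast the seeds across the batch, as S.repeat(X.size(0), 1, 1)
        # does in the PyTorch version.
        return tf.tile(self.S, (tf.shape(X)[0], 1, 1))

layer = PMASeeds(num_seeds=4, dim=8)
out = layer(tf.zeros((2, 10, 8)))  # batch of 2 sets of 10 elements
assert out.shape == (2, 4, 8)
assert len(layer.trainable_variables) == 1
```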

