Conversation

@oxinabox oxinabox commented Jun 8, 2018

I took a look at the FastText binary format.
It is not actually a word-embedding format:
it is basically an entire serialized model, which needs to be executed to get word embeddings.

This code loads the format,
but actually getting word embeddings out of it
would require building up the ngram/subword tables etc.,
and then running the computations that calculate the word embeddings.
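
For reference, the computation fastText runs per word is roughly: hash each character ngram into a bucket, look up the corresponding rows of the input matrix, and average them together with the word's own vector (if it is in-vocabulary). A minimal Python sketch of that step is below; `vocab`, `input_matrix` and `nwords` are placeholder names for data pulled out of the loaded file, and the UTF-8/edge-case handling of the real fastText code is glossed over.

```python
import numpy as np

def fnv1a_hash(s: str) -> int:
    # 32-bit FNV-1a, as fastText uses for ngram bucketing
    # (fastText sign-extends each byte before XOR-ing it in).
    h = 2166136261
    for b in s.encode("utf-8"):
        if b >= 128:
            b -= 256  # mimic the int8_t cast in the C++ code
        h = (h ^ (b & 0xFFFFFFFF)) & 0xFFFFFFFF
        h = (h * 16777619) & 0xFFFFFFFF
    return h

def char_ngrams(word: str, minn: int = 3, maxn: int = 6):
    # Character ngrams of the word wrapped in boundary markers.
    w = "<" + word + ">"
    for n in range(minn, maxn + 1):
        for i in range(len(w) - n + 1):
            yield w[i:i + n]

def word_vector(word, vocab, input_matrix, nwords, bucket=2_000_000):
    # Rows [0, nwords) of the input matrix hold in-vocabulary word vectors;
    # rows [nwords, nwords + bucket) hold the hashed character-ngram vectors.
    rows = [vocab[word]] if word in vocab else []
    rows += [nwords + fnv1a_hash(g) % bucket for g in char_ngrams(word)]
    return input_matrix[rows].mean(axis=0)
```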

The file basically has to be loaded in its entirety,
because you need to read the earlier parts to locate the right part of the file.
It actually loads really fast, as most of the data is in contiguous matrices.
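
Roughly, each dense matrix is stored as its two int64 dimensions followed by rows * cols float32 values, so it can be pulled in with one read. A sketch of that (hypothetical helper, assuming the common non-quantized, little-endian case):

```python
import struct
import numpy as np

def read_dense_matrix(f):
    # Two int64 dimensions, then rows * cols float32 values in one contiguous block.
    rows, cols = struct.unpack("<qq", f.read(16))
    data = np.fromfile(f, dtype=np.float32, count=rows * cols)
    return data.reshape(rows, cols)
```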

After it is fully loaded, when executing it to get the actual word embeddings,
it is possible to compute only the words you need, rather than doing the whole vocabulary.
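
So the lazy path looks something like the snippet below, reusing the hypothetical `word_vector` helper from the sketch above: only the queried words are materialised, and the ngram bucket rows are shared.

```python
queries = ["hello", "unembeddable"]  # the second may well be out-of-vocabulary
vectors = {w: word_vector(w, vocab, input_matrix, nwords) for w in queries}
```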
