not only the vectors in the training data. As you know, it's hard to predict which entities will appear in real-life text.
Since I trained the entity embeddings on GPU (see the lookup table here: https://github.com/dalab/deep-ed/blob/master/entities/learn_e2v/model_a.lua#L28), I am afraid that if one wants the full set of Wikipedia entities (i.e. 6M), one has to train them on CPU (which will be slower, as far as I remember) and have enough RAM to hold a 6M x 300 lookup table. To do that, you have to modify the files in https://github.com/dalab/deep-ed/tree/master/entities/learn_e2v to use all Wikipedia entities and words for training. Setting the flag -entities 'ALL' in entities/learn_e2v/learn_a.lua should do the job, but as far as I remember that code path was never tested.
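A back-of-the-envelope sketch of what that means in practice (this is not the repository's training code; it only assumes the ~6M entity count and 300-dimensional embeddings mentioned above, and that Torch with the `nn` package is installed):

```lua
-- Sketch: check whether a full-Wikipedia entity lookup table fits in RAM on CPU.
-- Assumes roughly 6M entities and 300-dim embeddings, as in the reply above.
require 'torch'
require 'nn'

local numEntities = 6000000   -- approximate number of Wikipedia entities
local embedDim    = 300       -- embedding size used in deep-ed

-- 4 bytes per entry with FloatTensor; roughly double with DoubleTensor.
local gb = numEntities * embedDim * 4 / 2^30
print(string.format('Entity lookup table needs about %.1f GB of RAM', gb))

-- The allocation that actually has to fit in memory (analogous to the
-- lookup table in entities/learn_e2v/model_a.lua, but kept on CPU):
torch.setdefaulttensortype('torch.FloatTensor')
local entityLookup = nn.LookupTable(numEntities, embedDim)
print(entityLookup.weight:size())  -- 6000000 x 300
```

If the flag works as described, the full-entity run would then be launched with something like `th entities/learn_e2v/learn_a.lua -entities 'ALL' ...` (other required flags omitted here), after adjusting the scripts to load all entities and words.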
Thank you very much, I'll try it.