Training the MLLM backend can be a bit slow. Most of the time is spent generating candidates from the training documents. This could probably be done faster by using parallel processing. It's noted as a TODO item in the code: https://github.com/NatLibFi/Annif/blob/master/annif/backend/mllm.py#L23
Actually the TODO item mentioned above is only relevant for the hyperparameter optimization functionality of the MLLM backend.
The training docs are processed in this loop and it could probably be parallelized.
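One way to parallelize that per-document candidate generation is a `multiprocessing.Pool`, mapping the worker function over the training documents. A minimal sketch below, assuming a hypothetical `generate_candidates` stand-in (here it just tokenizes; in Annif the real work would be the MLLM candidate analysis per document), not the actual Annif implementation:

```python
from multiprocessing import Pool

# Hypothetical stand-in for MLLM candidate generation: in Annif, each
# training document is analyzed to produce subject candidates. Here we
# just return the sorted unique lowercase tokens as fake "candidates".
def generate_candidates(doc_text):
    return sorted(set(doc_text.lower().split()))

def generate_all_parallel(documents, processes=4):
    # Pool.map distributes documents across worker processes; results
    # come back in the same order as the input documents, so downstream
    # training code that pairs documents with candidates still works.
    with Pool(processes=processes) as pool:
        return pool.map(generate_candidates, documents)

if __name__ == "__main__":
    docs = ["Annif trains a backend", "the backend generates candidates"]
    print(generate_all_parallel(docs, processes=2))
```

Since candidate generation is CPU-bound, process-based parallelism sidesteps the GIL; the main caveat is that the worker function and its arguments must be picklable, which is worth checking against the backend's actual per-document state.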