A discussion started in the Neovim project about the benefits LSP could bring to spell checking, but I'm not sure whether this has been discussed here yet.
The LSP server has detailed knowledge of the file structure and of language-specific constructs, which makes it a very good candidate for delimiting the regions eligible for spellchecking (comments, docstrings, maybe strings, maybe variable names, etc.).
Could this region-delimiting step be part of the protocol? What would it imply in terms of coordination with existing spellcheckers and editors?
The most basic support would be to not delimit anything and return the whole file as one big region, leaving the burden of ignoring keywords, etc. to the spellchecker itself. Such a trivial implementation is naive, but I think it would still be useful in two cases:
For a not-a-programming-language LSP server, dedicated to parsing plain, free-form .txt-like files written in natural language.
For transient, basic support in other LSP servers, as a temporary solution so that they comply with the protocol even though they have not yet implemented precise region-delimiting logic.
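For concreteness, here is a sketch of what that trivial whole-file fallback could look like. Note that the request name, the `SpellcheckRegion` interface, and its `kind` values are all invented for illustration; nothing like this exists in the protocol today. Only `Position` and `Range` mirror shapes the LSP specification actually defines.

```typescript
// Position and Range mirror the shapes defined in the LSP specification.
interface Position { line: number; character: number; }
interface Range { start: Position; end: Position; }

// Hypothetical: a region a spellchecker is allowed to check, tagged with
// the kind of source construct it came from. Not part of the real protocol.
interface SpellcheckRegion {
  range: Range;
  kind: "comment" | "string" | "identifier" | "text";
}

// Trivial fallback: return the entire document as a single natural-language
// region, leaving keyword filtering to the spellchecker itself.
function wholeFileRegions(text: string): SpellcheckRegion[] {
  const lines = text.split("\n");
  const lastLine = lines.length - 1;
  return [{
    range: {
      start: { line: 0, character: 0 },
      end: { line: lastLine, character: lines[lastLine].length },
    },
    kind: "text",
  }];
}
```

A server that later grows real region-delimiting logic could keep the same response shape and simply return finer-grained regions with more specific `kind` values.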
@matklad Of course :) I forgot to acknowledge how ignorant I am of the detailed process and of every problem met along the way. The intent is rather to check whether this has already been discussed before (so thank you for bringing that other discussion here), and to gauge interest in spellchecking within LSP.
I think this is partly related to #18. Semantic highlighting actually seems to involve tokenizing the file and having the language server return the tokens to the client for further styling.
The "natural language text" could then be just a token type.
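To illustrate that idea: the semantic tokens feature that came out of #18 encodes tokens as a flat array of five integers per token, with positions expressed relative to the previous token. The `naturalLanguage` token type below is an invented assumption, not something the protocol defines, but the delta encoding itself follows the semantic tokens scheme in the LSP specification.

```typescript
// Each semantic token is encoded as 5 integers:
// [deltaLine, deltaStartChar, length, tokenTypeIndex, tokenModifiers].
// Here the server's legend is assumed to put a hypothetical
// "naturalLanguage" token type at index 0, which a spellchecker
// could then target.
interface Token { line: number; startChar: number; length: number; typeIndex: number; }

function encodeSemanticTokens(tokens: Token[]): number[] {
  const data: number[] = [];
  let prevLine = 0;
  let prevChar = 0;
  for (const t of tokens) {
    const deltaLine = t.line - prevLine;
    // The start character is relative to the previous token only when
    // both tokens sit on the same line.
    const deltaStart = deltaLine === 0 ? t.startChar - prevChar : t.startChar;
    data.push(deltaLine, deltaStart, t.length, t.typeIndex, 0);
    prevLine = t.line;
    prevChar = t.startChar;
  }
  return data;
}
```

With such an encoding, a client-side spellchecker would only need the legend index of the natural-language type to filter out everything else.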
@mickaelistria I also think it is. At its core, spellchecking is essentially a linting process, and token highlighting is its most natural output. However, I have no idea how standardized existing spellcheckers already are, where they would best fit in the process (e.g. should they interact with the LSP client or with the server?), or how hard it would be to correctly specify that interaction.
For instance, highlighting/linting is one thing, but there is also fixing, fleshing out dictionaries, etc. Are all these needs already listed somewhere?