
Single Provider Process #1264

Merged
merged 2 commits into from
Apr 19, 2022
Conversation

robertoaloi
Member

Erlang LS historically used one process per provider. This was
problematic, since requests handled by different providers could lead
to race conditions, causing a server crash. For example, a completion
request could fail to see the latest version of the text when
incremental text synchronization was used.

There was very little gain in parallelizing providers, so we decided to
simplify the architecture and use a single process instead. There is
still room for refactoring and cleanup, but that will be handled as a
follow-up.

As part of the architectural change, we also dropped support for BSP,
since it has been stalled for too long. Instead, we will try to
implement a simpler solution which allows the extraction of paths from
the build system (e.g. via the rebar.config).

  • Drop BSP Support
  • Introduce text synchronization provider
  • Run providers as a single process
  • Remove buffer server, index in the background instead
  • Decouple DAP provider from LSP one
  • Brutally kill background jobs
  • Set standard_io as default value for the io_device
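The single-process dispatch described above can be sketched roughly as follows. This is a hedged illustration only: the module name, the `in_progress` bookkeeping and the return shapes are assumptions based on the snippets quoted later in the review, not the actual Erlang LS code.

```erlang
%% Sketch: one gen_server dispatches all provider requests, so there is
%% no per-provider process and no race between providers. Illustrative only.
-module(els_provider_sketch).
-behaviour(gen_server).
-export([handle_request/2, init/1, handle_call/3, handle_cast/2]).

handle_request(Provider, Request) ->
    %% All requests funnel through the single server process.
    gen_server:call(?MODULE, {handle_request, Provider, Request}).

init([]) ->
    {ok, #{in_progress => []}}.

handle_call({handle_request, Provider, Request}, _From, State) ->
    #{in_progress := InProgress} = State,
    case Provider:handle_request(Request, State) of
        {async, Uri, Job} ->
            %% Long-running work is delegated to a background job,
            %% but the bookkeeping stays in this one process.
            {reply, {async, Job},
             State#{in_progress => [{Uri, Job} | InProgress]}};
        {response, Response, NewState} ->
            {reply, {response, Response}, NewState}
    end.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

Because every request is serialized through one process, a completion request can no longer observe a stale document version while an incremental text-synchronization update is still in flight.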

@robertoaloi robertoaloi mentioned this pull request Apr 12, 2022
Contributor

@michalmuskala michalmuskala left a comment


Overall this looks fairly reasonable, just some minor comments.

On the bigger design decisions:

  • dropping BSP seems to me like a sensible choice - we don't really have plans to work on that, and could instead integrate with rebar3 more directly
  • making the processing less "async" seems like a good choice - a main event loop plus async processes for specific things is a simpler programming model than every type of request running in parallel

= State,
case Provider:handle_request(Request, State) of
{async, Uri, Job} ->
{reply, {async, Job}, State#{in_progress => [{Uri, Job}|InProgress]}};
Contributor


Is there some monitoring of the jobs to remove them from the map when they are externally terminated/crash?

Member Author


Not that I can see, though the consequence of leaving an orphan entry in the map shouldn't be critical. I will create a ticket for this. Also, I now realize that there's some logic running in the terminate function (to clean up the spinning wheel), so I will roll back the brutal_kill strategy bit.
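One possible shape for the monitoring discussed here, sketched under the assumption that `in_progress` is a list of `{Uri, Job}` pairs as in the diff above; function names are illustrative, not the actual Erlang LS code:

```erlang
%% Sketch: monitor each background job so that a crash or an external
%% kill removes its entry from in_progress. Illustrative only.
start_job(Uri, Fun, #{in_progress := InProgress} = State) ->
    %% spawn_monitor/1 gives us a 'DOWN' message on any termination.
    {Job, _MRef} = spawn_monitor(Fun),
    State#{in_progress => [{Uri, Job} | InProgress]}.

handle_info({'DOWN', _MRef, process, Job, _Reason},
            #{in_progress := InProgress} = State) ->
    %% Drop the entry for the terminated job, whatever the exit reason,
    %% so no orphan entry is left behind.
    {noreply, State#{in_progress => lists:keydelete(Job, 2, InProgress)}}.
```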

Member Author


{noreply, State}
end.
?LOG_DEBUG("Cancelling request [job=~p]", [Job]),
els_background_job:stop(Job),
Contributor


What happens if we get a cancel request twice for the same job?
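A second cancel for the same job is only a problem if the stop is not idempotent. One way to make the duplicate cancel a no-op is to look the job up in `in_progress` first; this is a sketch, not the actual `els_background_job` implementation, and the map shape is assumed from the diff above:

```erlang
%% Sketch: check in_progress before stopping, so a second cancel for an
%% already-removed job is silently ignored. Illustrative only.
cancel_request(Job, #{in_progress := InProgress} = State) ->
    case lists:keymember(Job, 2, InProgress) of
        true ->
            ?LOG_DEBUG("Cancelling request [job=~p]", [Job]),
            els_background_job:stop(Job),
            {noreply,
             State#{in_progress => lists:keydelete(Job, 2, InProgress)}};
        false ->
            %% Unknown or already-cancelled job: nothing to do.
            {noreply, State}
    end.
```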


%%==============================================================================
%% API
%%==============================================================================
-spec notify([els_diagnostics:diagnostic()], pid()) -> ok.
notify(Diagnostics, Job) ->
?SERVER ! {diagnostics, Diagnostics, Job},
els_provider ! {diagnostics, Diagnostics, Job},
Contributor


I wonder if those notifications should be somehow incorporated into the background job framework - e.g. that it sends as a reply to the process that started the job the value that was returned from the function (kind of similar to Elixir's Task)
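The Task-like pattern suggested here, where the job delivers its return value back to the process that started it, could look roughly like this in Erlang. This is an illustrative sketch of the idea, not what was eventually implemented:

```erlang
%% Sketch of a Task-like background job: the function's result (or the
%% crash reason) is delivered to the caller as a message. Illustrative only.
async(Fun) ->
    Parent = self(),
    spawn_monitor(fun() -> Parent ! {job_result, self(), Fun()} end).

await({Job, MRef}, Timeout) ->
    receive
        {job_result, Job, Result} ->
            erlang:demonitor(MRef, [flush]),
            {ok, Result};
        {'DOWN', MRef, process, Job, Reason} ->
            %% The job crashed before producing a result.
            {error, Reason}
    after Timeout ->
        {error, timeout}
    end.
```

With this shape, the diagnostics provider would not need to know the name of the server to notify: the result simply flows back to whoever called `async/1`.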

Member Author


I'm not familiar with Elixir's Task but it sounds like a good idea. Created #1272

@robertoaloi robertoaloi merged commit dc3a5e1 into main Apr 19, 2022
@robertoaloi robertoaloi deleted the single-provider branch April 19, 2022 08:40