Implement Data Provider #154

Open · 4 tasks
MoKob opened this issue May 26, 2017 · 0 comments

MoKob (Contributor) commented May 26, 2017

Depending on the amount of data / processing required, starting up a server can be a long process.
Especially for testing our server infrastructure, we would like to avoid unnecessary overheads.

To do so (and as already outlined in #3), we can think about implementing a data provider that supplies data to our engine without needing to recreate the datasets after a failure or for every new startup of the engine.

As outlined in #3, we should seriously consider a process that does not use shared memory but rather communicates via ZeroMQ. The reasoning is that this approach would allow us to avoid the locking problems that come along with shared memory regions and to distribute workloads onto different machine types.
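
A minimal sketch of what the provider side of such a split could look like with cppzmq (the endpoint, the request format, and the `serialise_structure` helper are purely illustrative, not part of the codebase):

```cpp
// Hypothetical data provider process: answers dataset requests over a ZeroMQ
// REP socket, so no shared-memory segment (and hence no locking) is involved.
// Uses the older cppzmq send/recv overloads for brevity.
#include <zmq.hpp>
#include <cstring>
#include <string>

// stand-in for the real work: load the GTFS feed, build the requested
// structure and return its PBF encoding
std::string serialise_structure(std::string const &which)
{
    return "pbf-bytes-for-" + which;
}

int main()
{
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_REP);
    socket.bind("tcp://*:5555"); // illustrative endpoint

    while (true)
    {
        // the request names the structure the master service wants
        zmq::message_t request;
        socket.recv(&request);
        std::string const which(static_cast<char *>(request.data()), request.size());

        std::string const blob = serialise_structure(which);
        zmq::message_t reply(blob.size());
        std::memcpy(reply.data(), blob.data(), blob.size());
        socket.send(reply);
    }
}
```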

A data provider should take over the role of what is currently handled in master-service::dataset. Instead of owning a dataset and creating the different data structures itself, the master service should hand this responsibility to a data provider that can be located anywhere. The master service handles the communication with the data provider and returns structures as we are used to.

To do so, we need to serialise all structures into PBF and deserialise them from PBF. The provider should offer the functionality to load a raw GTFS feed from disk and put it into PBF. The PBF then has to be transferred via ZeroMQ to the MasterService which, in turn, provides access as usual, hiding all the ZeroMQ shenanigans from the rest of the project.
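
The round trip could look roughly like this (`proto::Timetable` stands in for protobuf messages we would still have to define for each timetable / look-up structure):

```cpp
// Hypothetical PBF round trip: the provider serialises a structure into a
// blob, the master service parses it back after receiving it over ZeroMQ.
#include <string>
#include "timetable.pb.h" // generated from a to-be-written .proto definition

// provider side: structure -> PBF blob
std::string to_pbf(proto::Timetable const &timetable)
{
    std::string blob;
    timetable.SerializeToString(&blob); // standard protobuf serialisation
    return blob;
}

// master-service side: PBF blob -> structure
proto::Timetable from_pbf(std::string const &blob)
{
    proto::Timetable timetable;
    timetable.ParseFromString(blob); // standard protobuf deserialisation
    return timetable;
}
```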

  • add PBF serialisation / deserialisation to all timetable data structures / look-up data structures
  • create a data provider to be started up before any server can be started (think shared-memory loading in OSRM), loading data and creating PBF forms of all structures
    • start off with a create-all approach?
    • or still create on demand (might need additional PBF -> data structure steps)
  • add communication capabilities to the data provider
  • replace master-service data loading and creation with a request plus PBF -> data structure conversion (see the sketch below)
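
One way the last step could look, keeping a dataset-style interface on the master-service side while hiding ZeroMQ behind it (class name, endpoint handling and caching policy are illustrative only):

```cpp
// Hypothetical facade: the master service keeps handing out structures as
// before, but each PBF blob is now fetched from the data provider on first
// use instead of being built locally.
#include <zmq.hpp>
#include <cstring>
#include <string>
#include <unordered_map>

class RemoteDataset
{
  public:
    RemoteDataset(zmq::context_t &context, std::string const &endpoint)
        : socket(context, ZMQ_REQ)
    {
        socket.connect(endpoint);
    }

    // returns the PBF blob for a named structure, fetching it at most once
    std::string const &get(std::string const &which)
    {
        auto itr = cache.find(which);
        if (itr != cache.end())
            return itr->second;

        zmq::message_t request(which.size());
        std::memcpy(request.data(), which.data(), which.size());
        socket.send(request);

        zmq::message_t reply;
        socket.recv(&reply);

        auto emplaced = cache.emplace(
            which, std::string(static_cast<char *>(reply.data()), reply.size()));
        return emplaced.first->second;
    }

  private:
    zmq::socket_t socket;
    std::unordered_map<std::string, std::string> cache;
};
```

The fetched blob would then go through the PBF -> data structure conversion from the sketch above before being handed to the rest of the project.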

/cc @daniel-j-h

@MoKob MoKob added this to the Alpha 0.1.0 milestone May 26, 2017
@MoKob MoKob added this to Applications in ZMQ and App architecture May 31, 2017
@MoKob MoKob modified the milestones: Alpha 0.2.0, Alpha 0.1.0 Jun 9, 2017
@MoKob MoKob modified the milestones: Alpha 0.5.0, Alpha 0.2.0 Jun 15, 2017