This repository contains my solution to the assignment's problem statement.
Please note: I have not worked on a project that used Domain Driven Design before.
I familiarized myself with DDD and its core concepts as part of this
assignment, and I tried to follow DDD principles as best as I understood them.
There are two Go programs in this repository:
- A translation layer from REST to gRPC. This is auxiliary and can be thought of as outside the main scope of the assignment (a sketch of its role follows this list).
- portCaptureServer, the main program, which saves the ports to the database.
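As a rough illustration of what the translation layer does (a sketch only, not the repository's actual code; the translation package name, Port struct, uploadPorts handler, and send callback are made up), a REST handler can decode the ports JSON incrementally and forward each entry to the gRPC client stream:

package translation

import (
	"encoding/json"
	"net/http"
)

// Port is a trimmed-down placeholder for the real message type.
type Port struct {
	Name        string    `json:"name"`
	City        string    `json:"city"`
	Country     string    `json:"country"`
	Coordinates []float64 `json:"coordinates"`
	Unlocs      []string  `json:"unlocs"`
	Code        string    `json:"code"`
}

// uploadPorts decodes the JSON object token by token, so the whole file never
// has to be held in memory, and hands each port to send, which in the real
// service would write to the gRPC client stream.
func uploadPorts(send func(primaryUnloc string, p Port) error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		dec := json.NewDecoder(r.Body)
		if _, err := dec.Token(); err != nil { // consume the opening '{'
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		for dec.More() {
			key, err := dec.Token() // the key the object is referenced by, e.g. "AEAJM"
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			var p Port
			if err := dec.Decode(&p); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			if err := send(key.(string), p); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
		}
		w.WriteHeader(http.StatusOK)
	}
}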
gRPC streams are used for receiving the ports data, for two reasons:
- gRPC is very efficient for communication between microservices and is widely supported
- gRPC streams allow the service to control the flow of data and how much it consumes at any one time, which helps limit resource usage
NOTE: gRPC's default maximum message size is 4MB. I feel this is a good enough size for most scenarios and most hardware portCaptureServer will be run on, so I did not include an option to set it in the config file.
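To make the flow-control point above concrete, here is a minimal sketch of a client-streaming handler; the portStream interface and message types are placeholders standing in for the gRPC-generated code, not the repository's actual types:

package service

import "io"

// Placeholders standing in for the gRPC-generated types.
type Port struct {
	PrimaryUnloc string
	Name         string
}

type SavePortsResponse struct{}

type portStream interface {
	Recv() (*Port, error)
	SendAndClose(*SavePortsResponse) error
}

type server struct {
	ports chan<- *Port // consumed by the worker goroutines described below
}

// SavePorts pulls one message at a time, so the server decides how fast the
// stream is consumed; pushing onto a bounded channel applies back-pressure.
func (s *server) SavePorts(stream portStream) error {
	for {
		port, err := stream.Recv()
		if err == io.EOF {
			// the client has finished sending; reply once and close
			return stream.SendAndClose(&SavePortsResponse{})
		}
		if err != nil {
			return err
		}
		s.ports <- port // blocks while the workers are busy
	}
}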
On start-up a number of goroutines are spawned; these are the workers that write the incoming port data to the database.
The number of workers is defined in the config files located in portCaptureServer/config/
and can be adjusted to suit different hardware capabilities.
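A sketch of what spawning those workers can look like; the worker count would come from the config mentioned above, but the function shape, names, and Port placeholder here are assumptions rather than the repository's actual code:

package service

import (
	"context"
	"sync"
)

// Port is a trimmed-down placeholder for the real domain type.
type Port struct {
	PrimaryUnloc string
}

// startWorkers spawns workerCount goroutines that drain the ports channel and
// persist each port via save; the returned WaitGroup lets the caller wait for
// the workers to finish once the channel is closed.
func startWorkers(ctx context.Context, workerCount int, ports <-chan *Port, save func(context.Context, *Port) error) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < workerCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for port := range ports {
				if err := save(ctx, port); err != nil {
					// the real service would record the error so the whole
					// request can be failed (see the all-or-nothing note below)
					return
				}
			}
		}()
	}
	return &wg
}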
Please note: portCaptureServer/app/service/*
is where the bulk of the functionality is coded.
For the database I decided to go with PostgreSQL.
The schema is located in portCaptureServer/db/schema.sql.
The port data inside ports.json
was a little confusing:
"AEAJM": {
"name": "Ajman",
"city": "Ajman",
"country": "United Arab Emirates",
"alias": [],
"regions": [],
"coordinates": [
55.5136433,
25.4052165
],
"province": "Ajman",
"timezone": "Asia/Dubai",
"unlocs": [
"AEAJM"
],
"code": "52000"
}
As you can see from the above, each object is referenced by its unloc, but each object also has a list of unlocs which contains the same unloc that the object is referenced with.
This implies that a port can have multiple unlocs.
I made that assumption, and also assumed that there is a "main unloc", the one the object is referenced with; I called this the primary_unloc in the schema/code.
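A sketch of how that assumption can map onto the domain model; the struct below is illustrative only, and the actual type in portCaptureServer may be named and shaped differently:

package domain

// Port models a single entry from ports.json under the assumption above: the
// key the object is referenced by becomes PrimaryUnloc, while the full
// "unlocs" list is kept alongside it.
type Port struct {
	PrimaryUnloc string    // key the object is referenced by, e.g. "AEAJM"
	Unlocs       []string  // full "unlocs" list (may contain more than the primary)
	Name         string
	City         string
	Country      string
	Alias        []string
	Regions      []string
	Coordinates  []float64 // [longitude, latitude]
	Province     string
	Timezone     string
	Code         string
}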
For security and audit reasons, no data in the database is ever truly deleted, instead it is just marked as deleted.
With regard to saving the ports data to the database, the service works in an all-or-nothing way: either all the data in a request is written to the database, or none of it is.
This means that if there is a single error in any of the ports, the whole file will have to be sent over again; there is no 'partial success'.
The reason for this is to avoid database bloat from multiple duplicate records (identical apart from the deleted_at field).
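A sketch of how the all-or-nothing write and the soft delete can fit together, assuming database/sql and a ports table with primary_unloc and deleted_at columns; the real implementation and schema (portCaptureServer/db/schema.sql) may differ:

package repository

import (
	"context"
	"database/sql"
)

// Port is a trimmed-down placeholder for the real domain type.
type Port struct {
	PrimaryUnloc, Name, City, Country string
}

// SavePorts writes a whole batch inside one transaction: any error rolls the
// entire batch back, and existing rows are only ever soft-deleted by setting
// deleted_at, never removed.
func SavePorts(ctx context.Context, db *sql.DB, ports []Port) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	for _, p := range ports {
		// soft-delete any previous record for this port
		if _, err := tx.ExecContext(ctx,
			`UPDATE ports SET deleted_at = now() WHERE primary_unloc = $1 AND deleted_at IS NULL`,
			p.PrimaryUnloc); err != nil {
			return err
		}
		// insert the fresh record
		if _, err := tx.ExecContext(ctx,
			`INSERT INTO ports (primary_unloc, name, city, country) VALUES ($1, $2, $3, $4)`,
			p.PrimaryUnloc, p.Name, p.City, p.Country); err != nil {
			return err
		}
	}
	return tx.Commit()
}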
Run the following from the root directory:
- First, apply the database schema (note: you need psql installed for this to run):
sudo ./setupDockerDB.sh
- Build the Docker images:
sudo docker-compose build
- Start all the services plus the database:
sudo docker-compose up
(the database needs to initialize before the other services start, so this might take a minute)
To run the integration tests you will need to have python3 and pytest installed.
From integrationTests, run:
sudo python3 -m pytest -s -v
I made sure to unit test the most critical parts of this service:
- From portCaptureServer/app/service/, run go test .
- From portCaptureServer/app/server/, run go test .