Refactoring summary list for each service #8
DEFRA/water-abstraction-system#8 Creating a new endpoint for use by the System to display health data. Currently this will be version and git commit hash.
DEFRA/water-abstraction-system#8 We create a new endpoint `/health/info` which will be used by our System repo to display info. Currently this will be this repo's version and current git commit hash.
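To give an idea of the shape of the change (this sketch is ours, not the actual code; the hapi route layout, file paths and the use of `git rev-parse HEAD` are all assumptions), the endpoint might look something like:

```js
const util = require('util')
const exec = util.promisify(require('child_process').exec)
const { version } = require('../../package.json')

module.exports = [{
  method: 'GET',
  path: '/health/info',
  handler: async () => {
    // Read the deployed commit straight from the repo the app is running from
    const { stdout } = await exec('git rev-parse HEAD')

    // hapi serialises a returned object as JSON
    return { version, commit: stdout.trim() }
  }
}]
```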
Our first pass at some unit tests for the `ServiceStatusService`. We use [Nock](https://github.com/nock/nock) to mock our HTTP requests to the other services. We've found this gives us a cleaner way to control how this happens than trying to use Sinon to stub [Got](https://github.com/sindresorhus/got).
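As a rough illustration of the approach (the URL, path and payload below are made up, not the real test code), Nock lets a test declare the request it expects and the canned response to return:

```js
const Nock = require('nock')

// Declare the request we expect the service under test to make, and the
// canned response to return. Nothing leaves the test process.
const scope = Nock('http://localhost:8007')
  .get('/health/info')
  .reply(200, { version: '9.9.9', commit: 'abc123' })

// ... exercise the service under test here ...

// Throws if the expected request was never made
scope.done()
```

The nice thing is the test only cares about the HTTP traffic itself, not which client library made the request.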
This was a pig of a problem! We are trying to stub `exec()`, which is exported from the `child_process` module and expects to be passed a callback. It is then promisified by `util.promisify()` before being used in the service. We also want it to return 2 different values to mimic it being used to check ClamAV and Redis. We tried various suggestions for how to do this, but none worked until we brought in [proxyquire](https://github.com/thlorenz/proxyquire)

> Proxies nodejs's require in order to make overriding dependencies during testing easy while staying totally unobtrusive.

With this change, and a minor one in the service code, we are now mocking all calls to external services, whether that be other web services or the shell.
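A rough sketch of what this ends up looking like (the module path and stdout strings are illustrative). The trick is that `util.promisify()` will use a custom implementation if one is attached to the function via the `util.promisify.custom` symbol, which is what makes the stub workable:

```js
const Proxyquire = require('proxyquire')
const Sinon = require('sinon')
const util = require('util')

// The async stub that util.promisify() will hand back to the service. Using
// onFirstCall()/onSecondCall() relies on us knowing the service checks ClamAV
// before Redis
const promisifiedExecStub = Sinon.stub()
promisifiedExecStub.onFirstCall().resolves({ stdout: 'ClamAV 9.9.9', stderr: '' })
promisifiedExecStub.onSecondCall().resolves({ stdout: 'Redis server v=9.9.9', stderr: '' })

// Hang the async stub off the stubbed exec() so util.promisify() picks it up
const execStub = Sinon.stub()
execStub[util.promisify.custom] = promisifiedExecStub

// Load the service with the stubbed child_process swapped in for the real one
const ServiceStatusService = Proxyquire('../../app/services/service_status.service', {
  child_process: { exec: execStub }
})
```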
Just to be sure that we are not calling the real services, we alter the data the mocks return.
This should mean the test starts passing and no longer fails the build.
Was just there for debugging purposes.
The other is a TODO comment, but we should be shot of that soon.
This hopefully covers the check returning an error from the shell and any exceptions thrown when calling `exec()`.
If only to keep SonarCloud happy, this removes the TODO and uses a temporary method to handle adding jobs back into the Import app data. Still needs some cleanup.
We wanted to cover what happens if we get an error when making a call to one of the other services. Prior to this change, if that happened the `ServiceStatusService` would implode and cause a `500` to be returned to the browser. We need the service to handle this so it can continue to report what's going on with the other services. So, we added some tests that mock different scenarios. This meant some changes in the test setups, most notably having to move some of the Nock calls down into other blocks. But to make it work in the code required some further refactoring. Now all the requests are made in one method, mainly to avoid duplicating the error handling all over the place. We've also brought across our experience of using Got in sroc-charging-module-api and use request options to control when exactly it should retry requests. Finally, with things coming together we add some more notes about the test setup.
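Something along these lines (the option values are illustrative, and this assumes the CommonJS Got v11 API):

```js
const got = require('got')

async function _requestAppData (url) {
  const result = { succeeded: true, response: null }

  try {
    result.response = await got.get(url, {
      retry: { limit: 1 },         // retry once rather than Got's default of 2
      timeout: { request: 5000 },  // give up after 5 seconds
      throwHttpErrors: false       // treat 4xx/5xx as data to report, not exceptions
    })
  } catch (error) {
    // DNS failures, timeouts etc end up here; record them rather than let the
    // whole page 500
    result.succeeded = false
    result.response = error
  }

  return result
}
```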
Looking at the tests we see a copious amount of work needed to mock web requests and system calls. We also have a range of scenarios. IMHO they are telling me that our service is doing too much. It is collecting data from different sources in different ways, which means it definitely doesn't just have a single responsibility. It needs breaking up, which in turn means we can break up the tests and hopefully make them a damn sight less scary. But GitHub currently clocks the PR at +905/-125. That's more than 1,000 changes and well beyond our aim of approximately 100. We also need to get onto delivering some features. So, for now it works and is fully tested. Pending any quick wins we spot, we should approve and merge this and return to it at a later date (it would be a great refactoring exercise for those of us who want the practice!)
Co-authored-by: Stuart Adair <[email protected]>
Thanks to a great suggestion from @StuAA78 we can use `withArgs()` to control responding differently to the two calls, rather than relying on our knowledge of the implementation. Always a better approach.
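In other words, instead of relying on call order with `onFirstCall()`/`onSecondCall()`, the stub keys off the arguments it is given (the commands shown are illustrative):

```js
// Respond based on the command passed in, not the order of the calls
promisifiedExecStub.withArgs('clamdscan --version').resolves({ stdout: 'ClamAV 9.9.9', stderr: '' })
promisifiedExecStub.withArgs('redis-server --version').resolves({ stdout: 'Redis server v=9.9.9', stderr: '' })
```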
We had major conflicts due to adding packages in this PR and a bunch of others in one of our other PRs. I did try to manually handle the merge conflicts, but it didn't work. So, we have gone with the 'delete-and-rebuild' approach 😁
9a7c031 to dbe05e2
Happy with this PR as it is; the only potential change I'd carry forward to another PR is exporting config directly.
As an aside, we could probably do with researching the best way to resolve `package-lock.json` conflicts! Deleting and rebuilding it is fine for now while we're still getting it off the ground; my concern in future is that doing this could potentially lose any changes that Dependabot made for us. To be fair, it would probably just create new PRs for us to merge again, but if we can prevent that happening in the first place then all the better! 😁
According to this post, `npm install --package-lock-only` may fix this for us (it's not entirely clear, but it seems to be suggesting that you run this while you're mid-merge?). So possibly worth a try next time we have this issue?
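If it does work the way the post implies, the mid-merge flow would presumably be something like this (untested by us; the commands are an assumed workflow):

```bash
git merge main                    # hits conflicts in package-lock.json
npm install --package-lock-only   # regenerate the lockfile from package.json without installing
git add package-lock.json
git merge --continue
```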
Recently, we [Replaced node-sass with dart-sass package](#14). This meant removing the node-sass dependency. However, on a [subsequent change](#8) (@Cruikshanks fault!) we managed to let it slip back in. This corrects the mistake and removes node-sass from our dependencies.
DEFRA/water-abstraction-team#54
We wanted to make the layout for each repo consistent and relate things like jobs to the service they belong to. We then decided we'd report on 'apps' rather than just 'repos', as that is what is running on a server and which 'the service' is dependent on.
We tidied up the external services section, including adding the Charge Module API which, though not in the existing /service-status page, we decided should be. We also got them to return meaningful information.
All the calls for info are now robust. Even if all the other apps and external services were down the page would still display.
All this work meant some major refactoring of the `ServiceStatusService()`, for example, a single method for making HTTP requests to the other apps and services.

All of this is tested. To do this we needed to bring in Nock to mock our HTTP requests to the other services and proxyquire to allow us to stub our calls to `child_process.exec()`.

Thoughts on the current state
Looking at the tests we see a copious amount of work needed to mock web requests and system calls. We also have a range of scenarios. In our opinion, they are telling us that our service is doing too much. It is collecting data from different sources in different ways, which means it definitely doesn't just have a single responsibility.
It needs breaking up, which in turn means we can break up the tests and hopefully make them a damn sight less scary.
We didn't do those changes in this PR though, because we'd already clocked up more than 1000 changes, well beyond our working target of 100. So, we've agreed to merge and tackle the refactoring as a separate exercise.
Rebuilding the package-lock.json
Because of how long this PR had been around and the fact it adds new dependencies, we ended up having major merge issues in the `package-lock.json`. We went for a 'delete-and-rebuild' approach but we know we shouldn't have 😳. Next time we'll endeavour to use `npm install --package-lock-only`, which we recently learned is a command added to npm to help rectify the package-lock.json in these 'merge-hell' situations.