
keys doctor should restore server public key #165

Open
dmiddlecamp opened this issue Mar 13, 2015 · 9 comments

@dmiddlecamp
Contributor

Maybe either prompt for this, provide it as an option, or just do it by default. I think people are sometimes removing / corrupting their server keys, and this would help them recover when possible? Thoughts?

@kennethlimcp
Contributor

Sounds good, though it causes some breaking changes with the local ☁️.

@dmiddlecamp
Contributor Author

Totally, that's why I suspect a prompt or an extra parameter might help avoid that.

@kennethlimcp
Contributor

👍

Sounds good, but if we perform a DFU `:leave` during the process, it's gonna exit before the next command, though that's simple to fix.

Or `spark keys doctor all core_id.der`

@technobly
Member

I just ran into this issue myself when changing a Core that was set up for the Staging server back to the Production server. I just assumed `spark keys doctor xxxx` fixed everything about the keys. Definitely happy to see this in the issues list already ;-)

If we have a couple of known public keys for these servers, my suggestion would be to prompt within the doctor command whether you'd like to restore 1: production or 2: staging, and if the answer is no to both, explain that `spark keys server` is the command you'll need for a custom key (à la local cloud). Then just make sure the CLI can pull those keys down from Amazon, or build them into the CLI.
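The prompt flow suggested here could look something like the following sketch. The helper name, the key file names, and the menu text are all hypothetical, not the actual CLI code:

```python
# Hypothetical sketch of the proposed prompt: map a menu answer to a known
# server public key, or fall through to "use a custom key" for anything else.

KNOWN_SERVER_KEYS = {
    "1": ("production", "cloud_public.der"),    # placeholder bundled key file
    "2": ("staging", "staging_public.der"),     # placeholder bundled key file
}

def choose_server_key(answer):
    """Return (server name, key file) for a known server, or None for custom."""
    return KNOWN_SERVER_KEYS.get(answer)

key = choose_server_key("1")
print(key)  # → ('production', 'cloud_public.der')
if key is None:
    print("Use `spark keys server your_key.der` to flash a custom key.")
```

On a `None` return the doctor command would simply skip the server key step and point the user at `spark keys server` instead.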

@kennethlimcp
Contributor

@technobly, sounds awesome, but that's too specific to Spark team development. Also, I don't think you guys would want to share the staging public key + IP address/domain and have people hitting it randomly during your testing.

No harm sharing, but for a development environment it would probably be better to leave that variable out.

I'm suggesting having a "cert" folder of some sort that we can check against and flash accordingly.

Maybe even tag each cert to a profile so that you know you are flashing the right cert based on the profile you are currently on ;)
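The "tag each cert to a profile" idea could be sketched like this. The folder name, profile names, and file names are made up for illustration, not the CLI's real layout:

```python
# Hypothetical profile -> server cert mapping, checked before flashing so you
# only ever flash the cert that belongs to the profile you are currently on.
import os

CERT_DIR = "certs"  # placeholder folder holding known server public keys

PROFILE_CERTS = {
    "spark": "cloud_public.der",        # placeholder: public cloud key
    "local": "local_cloud_public.der",  # placeholder: local cloud key
}

def cert_for_profile(profile):
    """Return the cert path tagged to a profile, or None if it has no cert."""
    filename = PROFILE_CERTS.get(profile)
    if filename is None:
        return None
    return os.path.join(CERT_DIR, filename)
```

With a lookup like this, `keys doctor` could refuse (or at least warn) when the cert about to be flashed doesn't match the active profile.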

@dmiddlecamp
Contributor Author

@kennethlimcp 👍

@technobly
Member

If the CLI downloads the staging key on demand, it could be a secured option based on your login permissions. But yeah, it's not really that critical to add staging to the CLI. More importantly, it should just default to putting the public cloud key back on the device unless you specify not to with `spark keys doctor xxxx --no-server`. Then you can follow up with `spark keys server xxxx.der`.

If you are running the doctor command, you might as well fix everything up.
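The restore-by-default behaviour proposed here can be sketched with `argparse`. The option handling is a hypothetical illustration of the flag semantics, not the real CLI's argument parsing:

```python
# Sketch: `keys doctor` restores the server public key by default, and
# `--no-server` opts out. Argument names are hypothetical.
import argparse

def parse_doctor_args(argv):
    parser = argparse.ArgumentParser(prog="spark keys doctor")
    parser.add_argument("core_id", help="ID of the core to repair")
    parser.add_argument("--no-server", dest="restore_server",
                        action="store_false", default=True,
                        help="skip restoring the server public key")
    return parser.parse_args(argv)

args = parse_doctor_args(["0123456789abcdef", "--no-server"])
print(args.restore_server)  # → False
```

Using `store_false` with a `True` default is what makes "fix everything" the no-flags behaviour while still leaving an escape hatch for local-cloud users.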

@dmiddlecamp
Contributor Author

I like the idea of making it available via the API; then the CLI could grab it from whatever API it was pointed at.

@KarbonDallas
Contributor

I like the idea of having the CLI pull the cloud public key on-demand as well.
If we're in agreement on this being the default behaviour (without a flag) then I'll add this as a TODO!
