Cloudstor plugin not enabled in newly provisioned swarm #64
Comments
I am also encountering this on a brand-new, from-template swarm. Attempts to enable the plugin fail with the same error.

Same here.
We have the same issue - we just created a swarm from the template (https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fstable%2FDocker.tmpl), and

Docker logs -

EDIT:

@dnataraj I noticed running

Same problem here!

Hello! No answer on this one? Has anybody been able to fix the issue?
I found a fix here: #55 (comment)

```sh
docker plugin rm cloudstor:azure || true &&
docker ps -a | \
grep init-azure | \
( read ID OTHER; docker restart $ID; docker exec $ID sed -ire 's,from azure.storage.table ,from azure.cosmosdb.table ,' /usr/bin/azureleader.py; docker exec $ID sed -ire 's,from azure.storage.table ,from azure.cosmosdb.table ,' /usr/bin/sakey.py ) &&
docker logs -f $(docker ps -a | grep init-azure | awk '{print $1}')
```
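The `sed` edits in that fix rewrite the Python imports in the init-azure helper scripts from the old `azure.storage.table` module to `azure.cosmosdb.table`, where the table API moved. A standalone demonstration of that substitution on a hypothetical sample file (not the real container scripts):

```shell
# Create a sample file with the old-style import that breaks the init-azure scripts.
echo 'from azure.storage.table import TableService' > /tmp/azureleader_sample.py

# Apply the same substitution the fix runs inside the container; commas are
# used as the sed delimiter because the pattern itself contains no commas.
sed -i -e 's,from azure.storage.table ,from azure.cosmosdb.table ,' /tmp/azureleader_sample.py

cat /tmp/azureleader_sample.py
# → from azure.cosmosdb.table import TableService
```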
Nice @walmon, I tested it in another cluster but not in Docker for Azure Swarm.
Yeah, here is how we worked around this bug. @gmsantos was right: for some reason the template doesn't set AZURE_STORAGE_ACCOUNT_KEY for the plugin's disk. What we did:

It will set up the disk on your manager. Then, on the workers, run the script that @gmsantos posted here (you need to connect to them by forwarding your SSH agent when connecting to the worker; if you don't know how, more details are at the end). First, you will need the access key, so you can get it from the Azure console, or you can copy it from doing a:

After

By applying that to every worker node, you can access the same disk. To be able to connect to the worker nodes:
I have another workaround for Docker4Azure Swarm:

This global service will set up and enable the cloudstor:azure plugin on every new node.
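One way to express that global-service idea is a stack file. This is a hypothetical sketch, not the poster's actual service: the plugin tag and credentials are placeholders, and it assumes a stock `docker` CLI image with access to the host's Docker socket.

```yaml
version: "3.3"
services:
  enable-cloudstor:
    image: docker:stable
    deploy:
      mode: global          # one task per node, including nodes that join later
      restart_policy:
        condition: none     # run once per node, don't loop forever
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: >
      docker plugin install --alias cloudstor:azure --grant-all-permissions
      docker4x/cloudstor:<version>
      CLOUD_PLATFORM=AZURE
      AZURE_STORAGE_ACCOUNT=<account>
      AZURE_STORAGE_ACCOUNT_KEY=<key>
```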
With regards to the
**Expected behavior**

**Actual behavior**

**Information**

**Steps to reproduce the behavior**

`docker plugin ls`
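On an affected node, `docker plugin ls` reports the plugin as present but not enabled; the output looks roughly like this (the ID is a placeholder and will differ per node):

```
ID             NAME              DESCRIPTION                       ENABLED
<plugin-id>    cloudstor:azure   cloud storage plugin for Docker   false
```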