Vault storage migration from MySQL to RAFT results in uninitialized cluster despite successful migration #29368
Comments
I've successfully tested this migration from MySQL to RAFT and there were no issues. @javiermmenendez you do not need to work with an existing or already instantiated RAFT path, but rather an entirely new path without anything in it. What's more, when the migration is complete you must start Vault with exactly the same (HCL) configuration as before, the only notable difference being the storage stanza (nothing else). If you did not have node-id previously set, then do not introduce it as part of the migration or in the final configuration.

In my case, here's the vault_mysql.hcl:

cluster_name = "mysql1"
api_addr     = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"

listener "tcp" {
  address         = "127.0.0.1:8200"
  cluster_address = "127.0.0.1:8201"
  tls_disable     = true
}

storage "mysql" {
  address  = "127.0.0.1:3306"
  username = "root"
  password = "admin"
  database = "vault"
}

disable_mlock        = true
log_level            = "trace"
ui                   = true
raw_storage_endpoint = true

Migrate config, migrate.hcl:

storage_source "mysql" {
  address  = "127.0.0.1:3306"
  username = "root"
  password = "admin"
  database = "vault"
}

storage_destination "raft" {
  path = "/Users/aUser/Downloads/vault_1.18.4_darwin_arm64/raft"
}

cluster_addr = "http://127.0.0.1:8201"
log_level    = "trace"

Final Raft configuration, vault_raft.hcl, on different ports where both instances (MySQL & the new Raft) are accessible fine:

cluster_name = "mysql1"
api_addr     = "http://127.0.0.1:8202"
cluster_addr = "https://127.0.0.1:8203"

listener "tcp" {
  address         = "127.0.0.1:8202"
  cluster_address = "127.0.0.1:8203"
  tls_disable     = true
}

storage "raft" {
  path = "/Users/aUser/Downloads/vault_1.18.4_darwin_arm64/raft"
}

disable_mlock        = true
log_level            = "trace"
ui                   = true
raw_storage_endpoint = true
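A minimal sketch of the sequence described above, assuming the three HCL files shown, a local single-node setup, and a Shamir seal; only the migrate invocation is taken from this thread, the rest are standard Vault CLI steps:

# Run the offline migration while no Vault server is running against
# either backend:
vault operator migrate -config migrate.hcl

# Start Vault against the new Raft backend:
vault server -config vault_raft.hcl

# In another shell, unseal with the *existing* unseal keys; the migration
# copies the seal configuration along with the data, so the old keys
# remain valid and no re-initialization is needed:
export VAULT_ADDR="http://127.0.0.1:8202"
vault operator unseal    # repeat until the key threshold is reached
vault status             # should report Initialized: true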
Describe the bug
I'm trying to migrate from a MySQL storage backend to a RAFT storage backend following the official documentation, but the migration process results in an uninitialized cluster, preventing me from completing the migration.
I followed this document: https://support.hashicorp.com/hc/en-us/articles/17295423360403-How-to-migrate-Vault-s-storage-backend-to-a-new-Vault-cluster-in-Kubernetes
To Reproduce
Steps to reproduce the behavior:
1. Have a source Vault cluster running with the MySQL storage backend (verified running and unsealed).
2. Deploy a new Vault instance configured with RAFT storage (uninitialized, as per the documentation).
3. Create and apply the migration configuration file.
4. Run vault operator migrate -config migrate.hcl.
5. Check the new cluster's status: it shows an uninitialized state instead of initialized as documented.
Expected behavior
According to the documentation, after migration the new RAFT cluster should be initialized and ready to unseal; instead, it remains uninitialized.
Environment:
Source Vault Server Version: 1.13.2
Destination Vault Server Version: 1.18.1
Server Operating System/Architecture: OpenShift 4.14
Both source and destination Vault instances are running in the same OpenShift cluster
Migration configuration file:
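(The file itself was not included in the issue text. As a hypothetical illustration only, based on the storage_source/storage_destination stanza structure used elsewhere in this thread, it would have the following shape; every value here is a placeholder, not the reporter's actual configuration:)

storage_source "mysql" {
  address  = "mysql.vault.svc:3306"  # placeholder
  username = "vault"                 # placeholder
  password = "REDACTED"              # placeholder
  database = "vault"
}

storage_destination "raft" {
  path = "/vault/data"               # placeholder
}

cluster_addr = "http://127.0.0.1:8201"  # placeholder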
Additional context
The migration command completes successfully with the message "Success! All of the keys have been migrated." However, when checking the status of the new cluster, it shows as uninitialized:
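(The actual status output was not captured in the issue text. Illustratively, and reconstructed rather than quoted, vault status against a node in this state reports something like the following; version-specific fields are omitted:)

$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     false
Sealed          true
Storage Type    raft
HA Enabled      true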
The source cluster status was properly initialized and unsealed before the migration attempt.