Zero downtime deploy
Zero downtime deploy is the ability to update nodes without stopping the whole cluster.
Here are the steps required for a zero-downtime deployment:
- Add new methods and mark them with the versioned replicated decorator (`@replicated(ver=newVer)`) instead of the regular one (`@replicated`). `newVer` is the version of your code. By default all `@replicated` methods have version `0`, so if this is your first update, your version should be `1`.
- If your data structures have any changes, write a migration procedure and set it to `config.onCodeVersionChanged`. Your migration procedure shouldn't be very heavy; use it only when the changes are lightweight or there is no other way to perform the migration. Sometimes a better solution is to support both the old and the new structure in your methods instead.
- If you have any changes in existing function arguments or behaviour, leave the old version in place and write the same function with the new version number. You can remove the old version as soon as you have switched the cluster to the new version.
- Deploy the new version of the code to all cluster nodes one by one (upload the code to `node1`, restart `node1`, wait for `node1` to come up, move on to the next node).
- Set the new version with the `syncobj_admin` utility: `syncobj_admin -set_version newVer`.
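For example, assuming the admin utility can reach the cluster on a node's regular port, switching to the first new version might look like `syncobj_admin -conn node1:9321 -set_version 1` (host and port here are placeholders for any reachable cluster node).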
For example, suppose we have a simple key-value storage:
```python
from pysyncobj import SyncObj, replicated

class KVStorage(SyncObj):
    def __init__(self, selfAddress, partnerAddrs):
        super(KVStorage, self).__init__(selfAddress, partnerAddrs)
        self.__data = {}

    @replicated
    def set(self, key, value):
        self.__data[key] = value

    def get(self, key):
        return self.__data.get(key, None)
```
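A minimal usage sketch, assuming a three-node cluster; the addresses are placeholders and error handling is omitted:

```python
# Assumed addresses; run one instance per node with its own selfAddress.
storage = KVStorage('node1:9321', ['node2:9321', 'node3:9321'])

# Replicated calls are asynchronous by default; sync=True blocks
# until the command has been applied by the cluster.
storage.set('foo', 'bar', sync=True)
print(storage.get('foo'))  # 'bar'
```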
In the next version we decide to add the ability to remove items. The data structure stays the same and no existing method's behaviour changes, so we just add another function and mark it with the new version:
```python
from pysyncobj import SyncObj, replicated

class KVStorage(SyncObj):
    def __init__(self, selfAddress, partnerAddrs):
        super(KVStorage, self).__init__(selfAddress, partnerAddrs)
        self.__data = {}

    @replicated
    def set(self, key, value):
        self.__data[key] = value

    # New in code version 1: remove a key.
    @replicated(ver=1)
    def pop(self, key):
        self.__data.pop(key, None)

    def get(self, key):
        return self.__data.get(key, None)
```
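Once this code is running on every node, switch the cluster over, e.g. with `syncobj_admin -conn node1:9321 -set_version 1` (the address is again a placeholder). The versioned `pop` method only comes into use after the cluster has switched to version 1.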
Now suppose we decide to add revisions (versions) to all of our keys. There are two possibilities: with data migration or without it. We will start with the migration variant:
```python
from pysyncobj import SyncObj, SyncObjConf, replicated

class KVStorage(SyncObj):
    def __init__(self, selfAddress, partnerAddrs):
        cfg = SyncObjConf()
        cfg.onCodeVersionChanged = self.runMigration
        super(KVStorage, self).__init__(selfAddress, partnerAddrs, cfg)
        self.__data = {}

    def runMigration(self, verFrom, verTo):
        # Convert every record from `value` to `(value, revision)`.
        assert verFrom == 1 and verTo == 2
        newData = {}
        for key, value in self.__data.items():
            newData[key] = (value, 1)
        self.__data = newData

    # Old behaviour, used while the cluster is on an older version.
    @replicated
    def set(self, key, value):
        self.__data[key] = value

    # New behaviour, used once the cluster switches to version 2.
    @replicated(ver=2)
    def set(self, key, value):
        currVer = self.__data.get(key, (0, 0))[1]
        self.__data[key] = (value, currVer + 1)

    @replicated(ver=1)
    def pop(self, key):
        self.__data.pop(key, None)

    def get(self, key):
        return self.__data.get(key, None)
```
We made the following modifications:
- Added a `runMigration` function that converts all our records to the new format.
- Added a new version of the `set` function with the modified behaviour; it is used as soon as we switch the cluster to the new version. We also kept the old version of the `set` function, which is used until the cluster performs the switch.
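To make the format change concrete, here is a standalone sketch of what `runMigration` does to the stored dict (the sample data is assumed; in practice it operates on the replicated state inside `KVStorage`):

```python
# Old format: key -> value
data = {'a': 'x', 'b': 'y'}

# New format after migration: key -> (value, revision)
data = {key: (value, 1) for key, value in data.items()}

assert data == {'a': ('x', 1), 'b': ('y', 1)}
```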
The second possibility is to skip the migration. In that case we have to support backward compatibility for our data format in the methods themselves:
```python
from pysyncobj import SyncObj, replicated

class KVStorage(SyncObj):
    def __init__(self, selfAddress, partnerAddrs):
        super(KVStorage, self).__init__(selfAddress, partnerAddrs)
        self.__data = {}

    # Old behaviour, used while the cluster is on an older version.
    @replicated
    def set(self, key, value):
        self.__data[key] = value

    # New behaviour: records may be in either the old format
    # (plain value) or the new one (value, revision).
    @replicated(ver=2)
    def set(self, key, value):
        currData = self.__data.get(key)
        if isinstance(currData, tuple):
            # Already in the new format: bump the revision.
            self.__data[key] = (value, currData[1] + 1)
        elif currData is not None:
            # Old-format record: this write is its second revision.
            self.__data[key] = (value, 2)
        else:
            # New key: first revision.
            self.__data[key] = (value, 1)

    @replicated(ver=1)
    def pop(self, key):
        self.__data.pop(key, None)

    def get(self, key):
        currData = self.__data.get(key, None)
        if isinstance(currData, tuple):
            return currData[0]
        return currData
```
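Which variant to choose is a trade-off: the migration keeps the methods simple but must be cheap enough to run in one pass over the whole dataset, while the compatibility approach needs no one-shot conversion but keeps `set` and `get` slightly more complex until a later cleanup.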