diff --git a/docs/docs/.pages b/docs/docs/.pages index 163af2a1..96e7af9b 100644 --- a/docs/docs/.pages +++ b/docs/docs/.pages @@ -2,7 +2,7 @@ arrange: - index.md - Getting_Started.md - Building_Your_First_Plugin + - Versioning_And_Upgrade - References - - Versioning_And_Upgrading - Best_Practices - Release_Notes diff --git a/docs/docs/Best_Practices/.pages b/docs/docs/Best_Practices/.pages index 81974762..a9189905 100644 --- a/docs/docs/Best_Practices/.pages +++ b/docs/docs/Best_Practices/.pages @@ -6,4 +6,3 @@ arrange: - Sensitive_Data.md - Unicode_Data.md - Working_with_Powershell.md - - Replication diff --git a/docs/docs/Best_Practices/Replication/.pages b/docs/docs/Best_Practices/Replication/.pages deleted file mode 100644 index 8811afd0..00000000 --- a/docs/docs/Best_Practices/Replication/.pages +++ /dev/null @@ -1,3 +0,0 @@ -arrange: - - Replication.md - - Managing_Versions_With_Replication.md diff --git a/docs/docs/Best_Practices/Replication/Managing_Versions_With_Replication.md b/docs/docs/Best_Practices/Replication/Managing_Versions_With_Replication.md deleted file mode 100644 index 1005e62d..00000000 --- a/docs/docs/Best_Practices/Replication/Managing_Versions_With_Replication.md +++ /dev/null @@ -1,9 +0,0 @@ -# Managing Versions With Replication - -In order to ensure incompatible plugin versions on the source and target do not cause issues with provisioning and failover, plugin authors can use the below recommendations. - -- Make sure to build your plugin with the newest Virtualization SDK version available. -- Make sure there is only one artifact built for a given official version of the plugin. -- Make sure the official release of a plugin does not use the same build number as a development build. -- Make sure to use a versionining scheme (the external version defined in the plugin configuration) that helps easily identify which plugin is older or newer than the plugin already installed on the Delphix Engine. 
-- Make sure to publish or maintain a plugin version compatibility matrix which lists out the plugin version, Virtualization SDK it was built with and the Delphix Engine version (or versions) it is compatible with. diff --git a/docs/docs/Best_Practices/Replication/Replication.md b/docs/docs/Best_Practices/Replication/Replication.md deleted file mode 100644 index 26752bfd..00000000 --- a/docs/docs/Best_Practices/Replication/Replication.md +++ /dev/null @@ -1,18 +0,0 @@ -# Overview - -When a Delphix Engine is setup for replication to another engine, it can be be configured in a few ways: - -- The entire engine could be replicated, which means that all the data objects will be replicated to the target. -- A subset of the objects could be replicated, which means an object/s and its dependencies will be replicated to the target. - -In both the cases above, if a data object that belongs to a plugin is replicated, the associated plugin (including metadata like schemas, plugin configuration and its source code) is replicated as well. A replicated plugin and associated objects are available on the target engine for couple of operations: - -## Replica Provisioning -Data sources and VDBs that belong to a plugin that are not failed over yet are available to provision a new VDB. When provisioning a VDB from a replicated source object, please note that the plugin that got replicated from the source engine has to be compatible with the plugin that lives on the target plugin. - -## Failover -If the replicated and target plugin versions are compatible, failover operation on the Delphix Engine will automatically merge the plugins and associated objects and there will only be one plugin active. In some cases, an upgrade operation post failover is required to consolidate the plugins and enable the replicated plugin objects. For more info, refer to [this link](/Versioning_And_Upgrading/Special_Concerns/Replication.md) for how replication impacts plugin version and upgrade. - -!!! 
info - A replicated object is said to be in a namespace until a failover operation is performed. The plugin and its objects that are already on the target engine are referred to as the active plugin (and objects) in the below sections. - diff --git a/docs/docs/Building_Your_First_Plugin/Data_Ingestion.md b/docs/docs/Building_Your_First_Plugin/Data_Ingestion.md index 53824d3d..87c04dfa 100644 --- a/docs/docs/Building_Your_First_Plugin/Data_Ingestion.md +++ b/docs/docs/Building_Your_First_Plugin/Data_Ingestion.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Data Ingestion ## How Does Delphix Ingest Data? @@ -293,4 +289,4 @@ screen you see should ask for the properties that you recently added to your `li After you have finished entering this information, the initial sync process will begin. This is what will call your pre-snapshot operation, thus copying data. !!! warning "Gotcha" - Manually creating a dSource sets your plugin’s linked source schema in stone, and you will have to recreate the dSource in order to modify your schema. We will cover how to deal with this correctly later, in the upgrade section. For now, if you need to change your plugin's linked source schema, you will have to first delete any dSources you have manually added. \ No newline at end of file + Manually creating a dSource sets your plugin’s linked source schema in stone, and you will have to recreate the dSource in order to modify your schema. We will cover how to deal with this correctly later, in the [upgrade section](/Versioning_And_Upgrade/Upgrade.md). For now, if you need to change your plugin's linked source schema, you will have to first delete any dSources you have manually added. 
diff --git a/docs/docs/Building_Your_First_Plugin/Discovery.md b/docs/docs/Building_Your_First_Plugin/Discovery.md index 87af4073..67eb9e2a 100644 --- a/docs/docs/Building_Your_First_Plugin/Discovery.md +++ b/docs/docs/Building_Your_First_Plugin/Discovery.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Discovery ## What is Discovery? @@ -250,7 +246,7 @@ Once you have added one or more source configs, you will be able to sync. This i !!! warning - Once you have automatically or manually created source configs, you will not be allowed to modify your plugin's source config schema. We will cover how to deal with this later in the upgrade section. For now, if you need to change your plugin's source config schema: + Once you have automatically or manually created source configs, you will not be allowed to modify your plugin's source config schema. We will cover how to deal with this later in the [upgrade section](/Versioning_And_Upgrade/Upgrade.md). For now, if you need to change your plugin's source config schema: - You will have to delete any source configs you have manually added. - - Delete the plugin and its corresponding objects (dSources, Virtual Sources, etc) if the source configs were manually discovered. \ No newline at end of file + - Delete the plugin and its corresponding objects (dSources, Virtual Sources, etc) if the source configs were manually discovered. diff --git a/docs/docs/Building_Your_First_Plugin/Overview.md b/docs/docs/Building_Your_First_Plugin/Overview.md index 191254b4..29fb0650 100644 --- a/docs/docs/Building_Your_First_Plugin/Overview.md +++ b/docs/docs/Building_Your_First_Plugin/Overview.md @@ -1,8 +1,3 @@ ---- -title: Virtualization SDK ---- - - # Overview In the following few pages, we will walk through an example of making a simple, working plugin. 
@@ -59,6 +54,6 @@ Defining your plugin’s schemas will enable it to give the Delphix Engine the d To complete the tutorial that follows, make sure you check off the things on this list: - Download the SDK and get it working -- A running Delphix Engine, version x.y.z or above. -- Add at least one Unix host—but preferably three—to the Delphix Engine as remote environments +- A running Delphix Engine, version 6.0.2.0 or above. +- Add at least one Unix host—but preferably three—to the Delphix Engine as remote environments. - Have a tool at hand for editing text files—mostly Python and JSON. A simple text editor would work fine, or you can use a full-fledged IDE. diff --git a/docs/docs/Building_Your_First_Plugin/Provisioning.md b/docs/docs/Building_Your_First_Plugin/Provisioning.md index 638be8b0..349c46ba 100644 --- a/docs/docs/Building_Your_First_Plugin/Provisioning.md +++ b/docs/docs/Building_Your_First_Plugin/Provisioning.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Provisioning ## What is Provisioning? diff --git a/docs/docs/Building_Your_First_Plugin/images/PostUpload.png b/docs/docs/Building_Your_First_Plugin/images/PostUpload.png index 7f8ecc98..ff269487 100644 Binary files a/docs/docs/Building_Your_First_Plugin/images/PostUpload.png and b/docs/docs/Building_Your_First_Plugin/images/PostUpload.png differ diff --git a/docs/docs/Getting_Started.md b/docs/docs/Getting_Started.md index ed78c13f..0879a0e8 100644 --- a/docs/docs/Getting_Started.md +++ b/docs/docs/Getting_Started.md @@ -14,7 +14,7 @@ The platform and libs modules expose objects and methods needed to develop a plu - macOS 10.14+, Ubuntu 16.04+, or Windows 10 - Python 2.7 (Python 3 is not supported) - Java 7+ -Delphix Engine 5.3.5.0 or above +Delphix Engine 6.0.2.0 or above ## Installation To install the latest version of the SDK run: @@ -63,4 +63,8 @@ $ dvp upload -e -u -a --root-dir
DIRECTORY|Set the plugin root directory.|N|`os.cwd()`| |-n,
--plugin-name
TEXT|Set the name of the plugin that will be used to identify it.|N|id| |-s,
--ingestion-strategy
[DIRECT\|STAGED]|Set the ingestion strategy of the plugin. A "direct" plugin ingests without a staging server while a "staged" plugin requires a staging server.|N|`DIRECT`| +|-t,
--host-type
[UNIX\|WINDOWS]|Set the host platform supported by the plugin.|N|`UNIX`| #### Examples @@ -115,7 +120,7 @@ $ dvp init -n mongodb -s STAGED -r /our/plugin/directory Create a `WINDOWS` plugin called `mssql` in the current working directory with the `DIRECT` ingestion strategy. ``` -$ dvp init -n mssql -p WINDOWS +$ dvp init -n mssql -t WINDOWS ``` *** @@ -161,13 +166,14 @@ Upload the generated upload artifact (the plugin JSON file) that was built to a |Option                            |Description|Required|Default                       | |-------|-----------|:--------:|:-------:| |-e,
--delphix-engine
TEXT|Upload plugin to the provided engine. This should be either the hostname or IP address.|Y|None| -|-u,
--user
TEXT|Authenticate to the Delphix Engine with the provided user.|Y| None | +|-u,
--user
TEXT|Authenticate to the Delphix Engine with the provided user.|Y|None| |-a,
--upload-artifact FILE|Path to the upload artifact that was generated through build.|N|`artifact.json`| -|--password
TEXT|Authenticate using the provided password. If ommitted, the password will be requested through a secure prompt.|N| None | +|--wait|Block and wait for the upload job to finish on the Delphix Engine.|N|None| +|--password
TEXT|Authenticate using the provided password. If omitted, the password will be requested through a secure prompt.|N|None| #### Examples -Upload artifact `build/artifact.json` to `delphix-engine.domain` using the user `admin`. Since the password option is ommitted, a secure password prompt is used instead. +Upload artifact `build/artifact.json` to `engine.example.com` using the user `admin`. Since the password option is omitted, a secure password prompt is used instead. ``` $ dvp upload -a build/artifact -e engine.example.com -u admin diff --git a/docs/docs/References/Classes.md b/docs/docs/References/Classes.md index 3b0ae2e1..91928087 100644 --- a/docs/docs/References/Classes.md +++ b/docs/docs/References/Classes.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Classes ## DirectSource diff --git a/docs/docs/References/Decorators.md b/docs/docs/References/Decorators.md index cd8bffb9..20158783 100644 --- a/docs/docs/References/Decorators.md +++ b/docs/docs/References/Decorators.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Decorators The Virtualization SDK exposes decorators to be able to annotate functions that correspond to each [Plugin Operation](Plugin_Operations.md). diff --git a/docs/docs/References/Glossary.md b/docs/docs/References/Glossary.md index f3dd0e39..56e6e0d2 100644 --- a/docs/docs/References/Glossary.md +++ b/docs/docs/References/Glossary.md @@ -11,10 +11,10 @@ A single file that is the result of a [build](#building). It is this artifact wh The process of creating an [artifact](#artifact) from the collection of files that make up the plugin's source code. ## Data Migration -A python function which is called as part of the upgrade process. It handles transforming data from an older format to a newer format. More details [here](/Versioning_And_Upgrading/Upgrading.md#data-migrations). +A Python function which is called as part of the upgrade process. It handles transforming data from an older format to a newer format. 
More details [here](/Versioning_And_Upgrade/Upgrade.md#data-migrations). ## Data Migration ID -Each data migration is tagged with a unique ID. This allows the Delphix Engine to know which data migrations need to be run, in which order, when upgrading to a new plugin version. More details [here](/Versioning_And_Upgrading/Upgrading.md#data-migrations). +Each data migration is tagged with a unique ID. This allows the Delphix Engine to know which data migrations need to be run, in which order, when upgrading to a new plugin version. More details [here](/Versioning_And_Upgrade/Upgrade.md#data-migrations). ## Decorator A Python construct which is used by plugins to "tag" certain functions, so that the Delphix Engine knows which function corresponds to which plugin operation. @@ -72,7 +72,7 @@ For example, a MySQL plugin might provide an operation called "stop" which knows The process of making a virtual copy of a dataset and making it available for use on a target environment. ## Replication -Delphix allows engine user to replicate data objects between Delphix Engines by creating a replication spec. Data objects that belong to a plugin can also be part of the replication spec. More details [here](/Best_Practices/Replication/Replication.md). +Delphix allows end users to replicate data objects between Delphix Engines by creating a replication profile. Data objects that belong to a plugin can also be part of the replication profile. Refer to the [Delphix Engine Documentation](https://docs.delphix.com/docs/) for more details. ## Repository Information that represents a set of dependencies that a dataset requires in order to be functional. For example, a particular Postgres database might require an installed Postgres 9.6 DBMS, and so its associated repository would contain all the information required to interact with that DBMS. 
diff --git a/docs/docs/References/Platform_Libraries.md b/docs/docs/References/Platform_Libraries.md index 01a876ff..ddd42d4f 100644 --- a/docs/docs/References/Platform_Libraries.md +++ b/docs/docs/References/Platform_Libraries.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Platform Libraries Set of functions that plugins can use these for executing remote commands, etc. diff --git a/docs/docs/References/Plugin_Config.md b/docs/docs/References/Plugin_Config.md index dd61d93d..7295fc79 100644 --- a/docs/docs/References/Plugin_Config.md +++ b/docs/docs/References/Plugin_Config.md @@ -9,8 +9,8 @@ The name of the file can be specified during the build. By default, the build lo |----------|:------:|:--:|-----------| |id|Y|string|The unique id of the plugin in a valid UUID format.| |name|N|string|The display name of the plugin. This will be used in the UI. If it is not specified name will be equal to id.| -|externalVersion|N|string|The plugin's [external version](/Versioning_And_Upgrading/Versioning.md#external-version). This is a freeform string. If it is not supplied, the build number is used as an external version. -|buildNumber|Y|string|The plugin's [build number](/Versioning_And_Upgrading/Versioning.md#build-number). This string must conform to the format described [here](/Versioning_And_Upgrading/Versioning.md#build-number-format-rules). +|externalVersion|N|string|The plugin's [external version](/Versioning_And_Upgrade/Versioning.md#external-version). This is a freeform string. If it is not supplied, the build number is used as an external version. +|buildNumber|Y|string|The plugin's [build number](/Versioning_And_Upgrade/Versioning.md#build-number). This string must conform to the format described [here](/Versioning_And_Upgrade/Versioning.md#build-number-format-rules). |hostTypes|Y|list|The host type that the plugin supports. 
Either `UNIX` or `WINDOWS`.| |schemaFile|Y|string|The path to the JSON file that contains the [plugin's schema definitions](Schemas.md).

This path can be absolute or relative to the directory containing the plugin config file.| |srcDir|Y|string|The path to the directory that contains the source code for the plugin. During execution of a plugin operation, this directory will be the current working directory of the Python interpreter. Any modules or resources defined outside of this directory will be inaccessible at runtime.

This path can be absolute or relative to the directory containing the plugin config file.| @@ -53,13 +53,13 @@ srcDir: src/ schemaFile: schema.json pluginType: DIRECT language: PYTHON27 +buildNumber: 0.1.0 ``` -This is a valid plugin config for the plugin with manualDiscovery set to false: +This is a valid plugin config for the plugin with `manualDiscovery` set to `false` and an `externalVersion` set: ```yaml id: 7cf830f2-82f3-4d5d-a63c-7bbe50c22b32 name: MongoDB -version: 2.0.0 hostTypes: - UNIX entryPoint: mongo_runner:mongodb @@ -68,4 +68,6 @@ schemaFile: schema.json manualDiscovery: false pluginType: DIRECT language: PYTHON27 +externalVersion: "MongoDB 1.0" +buildNumber: "1" ``` diff --git a/docs/docs/References/Plugin_Operations.md b/docs/docs/References/Plugin_Operations.md index 43baf046..588ad992 100644 --- a/docs/docs/References/Plugin_Operations.md +++ b/docs/docs/References/Plugin_Operations.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Plugin Operations ## Summary @@ -741,7 +737,7 @@ from generated.definitions import SourceConfigDefinition plugin = Plugin() @plugin.virtual.reconfigure() -def configure(virtual_source, repository, source_config, snapshot): +def reconfigure(virtual_source, repository, source_config, snapshot): return SourceConfigDefinition(name="updated_config_name") ``` @@ -1048,17 +1044,17 @@ def virtual_status(virtual_source, repository, source_config): ## Repository Data Migration -A Repository [Data Migration](Glossary.md#data-migration) transforms repository data from an older format to a newer format. +A Repository [Data Migration](Glossary.md#data-migration) migrates repository data from an older [schema](Glossary.md#schema) format to an updated schema format. ### Required / Optional **Optional.**
!!! warning - You must ensure that all repository data will match your repository schema after an upgrade operation. Depending on how your schema has changed, this might imply that you need to write one or more repository data migrations. + You must ensure that all repository data will match your updated repository schema after an upgrade operation. Depending on how your schema has changed, this might imply that you need to write one or more repository data migrations. ### Delphix Engine Operations -* Upgrade +* [Upgrade](Workflows.md#upgrade) ### Signature @@ -1072,20 +1068,20 @@ A Repository [Data Migration](Glossary.md#data-migration) transforms repository Argument | Type | Description -------- | ---- | ----------- -migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. +migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. More details [here](/Versioning_And_Upgrade/Upgrade.md#rules-for-data-migrations). ### Function Arguments Argument | Type | Description -------- | ---- | ----------- -old_repository | Dictionary | The plugin-specific data associated with a repository, in an old format. +old_repository | Dictionary | The plugin-specific data associated with a repository, that conforms to the previous schema. !!! warning - The incoming data is a Python dictionary, where each property name appears exactly as described in the schema. This differs from non-upgrade-related operations, where the incoming data uses autogenerated classes. + The function argument `old_repository` is a Python dictionary, where each property name appears exactly as described in the previous repository schema. This differs from non-upgrade-related operations, where the function arguments are [autogenerated classes](Schemas_and_Autogenerated_Classes.md) based on the schema. 
### Returns Dictionary
-A transformed version of the `old_repository` input. +A migrated version of the `old_repository` input that must conform to the updated repository schema. ### Example ```python @@ -1102,7 +1098,7 @@ def add_new_flag_to_repo(old_repository): ## Source Config Data Migration -A Source Config [Data Migration](Glossary.md#data-migration) transforms source config data from an older format to a newer format. +A Source Config [Data Migration](Glossary.md#data-migration) migrates source config data from an older [schema](Glossary.md#schema) format to an updated schema format. ### Required / Optional **Optional.**
@@ -1112,7 +1108,7 @@ A Source Config [Data Migration](Glossary.md#data-migration) transforms source c ### Delphix Engine Operations -* Upgrade +* [Upgrade](Workflows.md#upgrade) ### Signature @@ -1126,20 +1122,20 @@ A Source Config [Data Migration](Glossary.md#data-migration) transforms source c Argument | Type | Description -------- | ---- | ----------- -migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. +migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. More details [here](/Versioning_And_Upgrade/Upgrade.md#rules-for-data-migrations). ### Function Arguments Argument | Type | Description -------- | ---- | ----------- -old_source_config | Dictionary | The plugin-specific data associated with a source config, in an old format. +old_source_config | Dictionary | The plugin-specific data associated with a source config, that conforms to the previous schema. !!! warning - The incoming data is a Python dictionary, where each property name appears exactly as described in the schema. This differs from non-upgrade-related operations, where the incoming data uses autogenerated classes. + The function argument `old_source_config` is a Python dictionary, where each property name appears exactly as described in the previous source config schema. This differs from non-upgrade-related operations, where the function arguments are [autogenerated classes](Schemas_and_Autogenerated_Classes.md) based on the schema. ### Returns Dictionary
-A transformed version of the `old_source_config` input. +A migrated version of the `old_source_config` input that must conform to the updated source config schema. ### Example ```python @@ -1155,7 +1151,7 @@ def add_new_flag_to_source_config(old_source_config): ``` ## Linked Source Data Migration -A Linked Source [Data Migration](Glossary.md#data-migration) transforms linked source data from an older format to a newer format. +A Linked Source [Data Migration](Glossary.md#data-migration) migrates linked source data from an older [schema](Glossary.md#schema) format to an updated schema format. ### Required / Optional **Optional.**
@@ -1165,7 +1161,7 @@ A Linked Source [Data Migration](Glossary.md#data-migration) transforms linked s ### Delphix Engine Operations -* Upgrade +* [Upgrade](Workflows.md#upgrade) ### Signature @@ -1179,20 +1175,20 @@ A Linked Source [Data Migration](Glossary.md#data-migration) transforms linked s Argument | Type | Description -------- | ---- | ----------- -migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. +migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. More details [here](/Versioning_And_Upgrade/Upgrade.md#rules-for-data-migrations). ### Function Arguments Argument | Type | Description -------- | ---- | ----------- -old_linked_source | Dictionary | The plugin-specific data associated with a linked source, in an old format. +old_linked_source | Dictionary | The plugin-specific data associated with a linked source, that conforms to the previous schema. !!! warning - The incoming data is a Python dictionary, where each property name appears exactly as described in the schema. This differs from non-upgrade-related operations, where the incoming data uses autogenerated classes. + The function argument `old_linked_source` is a Python dictionary, where each property name appears exactly as described in the previous linked source schema. This differs from non-upgrade-related operations, where the function arguments are [autogenerated classes](Schemas_and_Autogenerated_Classes.md) based on the schema. ### Returns Dictionary
-A transformed version of the `old_linked_source` input. +A migrated version of the `old_linked_source` input that must conform to the updated linked source schema. ### Example ```python @@ -1208,7 +1204,7 @@ def add_new_flag_to_dsource(old_linked_source): ``` ## Virtual Source Data Migration -A Virtual Source [Data Migration](Glossary.md#data-migration) transforms virtual source data from an older format to a newer format. +A Virtual Source [Data Migration](Glossary.md#data-migration) migrates virtual source data from an older [schema](Glossary.md#schema) format to an updated schema format. ### Required / Optional **Optional.**
@@ -1218,7 +1214,7 @@ A Virtual Source [Data Migration](Glossary.md#data-migration) transforms virtual ### Delphix Engine Operations -* Upgrade +* [Upgrade](Workflows.md#upgrade) ### Signature @@ -1232,20 +1228,20 @@ A Virtual Source [Data Migration](Glossary.md#data-migration) transforms virtual Argument | Type | Description -------- | ---- | ----------- -migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. +migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. More details [here](/Versioning_And_Upgrade/Upgrade.md#rules-for-data-migrations). ### Function Arguments Argument | Type | Description -------- | ---- | ----------- -old_virtual_source | Dictionary | The plugin-specific data associated with a virtual source, in an old format. +old_virtual_source | Dictionary | The plugin-specific data associated with a virtual source, that conforms to the previous schema. !!! warning - The incoming data is a Python dictionary, where each property name appears exactly as described in the schema. This differs from non-upgrade-related operations, where the incoming data uses autogenerated classes. + The function argument `old_virtual_source` is a Python dictionary, where each property name appears exactly as described in the previous virtual source schema. This differs from non-upgrade-related operations, where the function arguments are [autogenerated classes](Schemas_and_Autogenerated_Classes.md) based on the schema. ### Returns Dictionary
-A transformed version of the `old_virtual_source` input. +A migrated version of the `old_virtual_source` input that must conform to the updated virtual source schema. ### Example ```python @@ -1261,7 +1257,7 @@ def add_new_flag_to_vdb(old_virtual_source): ``` ## Snapshot Data Migration -A Snapshot [Data Migration](Glossary.md#data-migration) transforms snapshot data from an older format to a newer format. +A Snapshot [Data Migration](Glossary.md#data-migration) migrates snapshot data from an older [schema](Glossary.md#schema) format to an updated schema format. ### Required / Optional **Optional.**
@@ -1271,7 +1267,7 @@ A Snapshot [Data Migration](Glossary.md#data-migration) transforms snapshot data ### Delphix Engine Operations -* Upgrade +* [Upgrade](Workflows.md#upgrade) ### Signature @@ -1285,20 +1281,20 @@ A Snapshot [Data Migration](Glossary.md#data-migration) transforms snapshot data Argument | Type | Description -------- | ---- | ----------- -migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. +migration_id | String | The ID of this migration. An ID is a string containing one or more positive integers separated by periods. Each ID must be unique. More details [here](/Versioning_And_Upgrade/Upgrade.md#rules-for-data-migrations). ### Function Arguments Argument | Type | Description -------- | ---- | ----------- -old_snapshot | Dictionary | The plugin-specific data associated with a snapshot, in an old format. +old_snapshot | Dictionary | The plugin-specific data associated with a snapshot, that conforms to the previous schema. !!! warning - The incoming data is a Python dictionary, where each property name appears exactly as described in the schema. This differs from non-upgrade-related operations, where the incoming data uses autogenerated classes. + The function argument `old_snapshot` is a Python dictionary, where each property name appears exactly as described in the previous snapshot schema. This differs from non-upgrade-related operations, where the function arguments are [autogenerated classes](Schemas_and_Autogenerated_Classes.md) based on the schema. ### Returns Dictionary
-A transformed version of the `old_snapshot` input. +A migrated version of the `old_snapshot` input that must conform to the updated snapshot schema. ### Example ```python diff --git a/docs/docs/References/Schemas.md b/docs/docs/References/Schemas.md index f9e74aea..c2a4ad54 100644 --- a/docs/docs/References/Schemas.md +++ b/docs/docs/References/Schemas.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Schemas ## About Schemas diff --git a/docs/docs/References/Workflows.md b/docs/docs/References/Workflows.md index 983b354b..1d8cde14 100644 --- a/docs/docs/References/Workflows.md +++ b/docs/docs/References/Workflows.md @@ -1,7 +1,3 @@ ---- -title: Virtualization SDK ---- - # Workflows ## Legend diff --git a/docs/docs/Versioning_And_Upgrading/.pages b/docs/docs/Versioning_And_Upgrade/.pages similarity index 50% rename from docs/docs/Versioning_And_Upgrading/.pages rename to docs/docs/Versioning_And_Upgrade/.pages index 5ffb5e5e..c8648003 100644 --- a/docs/docs/Versioning_And_Upgrading/.pages +++ b/docs/docs/Versioning_And_Upgrade/.pages @@ -1,6 +1,7 @@ arrange: - Overview.md - Versioning.md - - Upgrading.md + - Upgrade.md - Compatibility.md - - Special_Concerns + - Backports_And_Hotfixes.md + - Replication.md diff --git a/docs/docs/Versioning_And_Upgrade/Backports_And_Hotfixes.md b/docs/docs/Versioning_And_Upgrade/Backports_And_Hotfixes.md new file mode 100644 index 00000000..ff253ea0 --- /dev/null +++ b/docs/docs/Versioning_And_Upgrade/Backports_And_Hotfixes.md @@ -0,0 +1,49 @@ +# Backports and Hotfixes + +If your plugin uses an ["enterprise-style"](/Versioning_And_Upgrade/Versioning.md#enterprise-style-release-strategy) release strategy, then you'll probably want to occasionally provide new "minor" or "patch" versions that build atop older versions. + +Code changes that are applied atop old releases are usually called "backports". Sometimes, they are also called "hotfixes", if the change is specifically created for a single user. 
+ +These releases present a problem: although they are built atop an older code branch, they are still newer than some releases from a newer code branch. Below, we'll walk through how we prevent users from "upgrading" to a new-branch release that would be incompatible with an installed old-branch release. + +### Motivating Example +Let's take a look at an example of a possible timeline of releases. + +> **February**: The initial version of a plugin is released, with build number "1.0". This is a simple plugin that uses a simple strategy for syncing dSources. + +> **April**: A new version is released, with build number "1.1". This adds some bugfixes and some small optimizations to improve the performance of syncing. + +> **August**: A new version is released, with build number "2.0". This uses a completely new syncing strategy that is far more sophisticated and efficient. + +Let's assume that not all users will want to upgrade to the 2.0 release immediately. So, even months later, you expect to have a significant number of users still on version 1.0 or 1.1. + +Later, in October, a bug is found which impacts all releases. This bug is important enough that you want to fix it for **all** of your end users (not just the ones using 2.0). + +Here are the behaviors we need: + +* Our 2.0 end users should be able to get the new bugfix without giving up any of the major new features that were part of 2.0. +* Our 1.0 and 1.1 end users should be able to get the new bugfix without also needing to accept all the major new features that were part of 2.0. +* Once an end user has received the bugfix, it should be impossible to lose the bugfix in an upgrade. + +### Strategy + +You can include a [data migration](/Versioning_And_Upgrade/Upgrade.md#data-migrations) along with your bugfix. If your bugfix involves a schema change, you will have to do this anyway. If not, you can still include a data migration that simply does nothing.
If a user with the bugfix attempts to "upgrade" to 2.0, the Delphix Engine will prevent it, because the 2.0 release does not include this migration. + +You would typically follow these steps: + +* Fix the bug by applying a code change atop the 2.0 code. +* Make a new release of the plugin with build number "2.1". This release includes the bugfix and the new data migration. +* Separately, apply the same bugfix atop the 1.1 code. Note: depending on how the code changed between 1.1 and 2.0, this 1.1-based bugfix might not contain exactly the same code as the 2.0-based one. +* Make another new release of the plugin, this time with build number "1.2". This release includes the 1.1-based bugfix. It should also include the same new data migration. + + +This meets our requirements: + +* Our 2.0 end users can install version 2.1. This gives them the bugfix, and keeps all the features from 2.0. +* Our 1.0 and 1.1 end users can install version 1.2. This gives them the bugfix without any of the 2.0 features. +* It is impossible for a 2.1 end user to lose the bugfix, because the Delphix Engine will not allow the build number to go "backwards". So, a 2.1 end user will not be able to install versions 2.0, 1.1, or 1.0. +* It is also impossible for a 1.2 end user to lose the bugfix. +    * They cannot install 1.0 or 1.1 because the build number is not allowed to decrease. +    * They also cannot install 2.0. The missing data migration on 2.0 will prevent this. + +Note that a 1.2 end user can still upgrade to 2.1 at any time. This will allow them to keep the bugfix, and also take advantage of the new features that were part of 2.0. 
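Taken together, the build-number rule and the missing-migration rule are what make this scheme work. As a rough, self-contained sketch (illustrative only -- this is not the Delphix Engine's actual implementation, and the migration ID `2019.10.15` and the release table are hypothetical), the engine's decision could be modeled like this:

```python
def parse_build(build_number):
    # Parse a build number string like "2.1" into a numerically comparable list.
    return [int(part) for part in build_number.split(".")]

def upgrade_allowed(installed, candidate):
    """Model of the two upgrade rules: the build number may not go
    backwards, and every data migration in the installed plugin must
    also be present in the candidate plugin."""
    if parse_build(candidate["build"]) < parse_build(installed["build"]):
        return False
    return set(installed["migrations"]) <= set(candidate["migrations"])

# Hypothetical releases from the timeline above. The two bugfix releases
# (1.2 and 2.1) both carry the same marker data migration.
releases = {
    "1.1": {"build": "1.1", "migrations": set()},
    "1.2": {"build": "1.2", "migrations": {"2019.10.15"}},
    "2.0": {"build": "2.0", "migrations": set()},
    "2.1": {"build": "2.1", "migrations": {"2019.10.15"}},
}

# A 1.2 user cannot "upgrade" to 2.0 (it lacks the bugfix migration)...
assert not upgrade_allowed(releases["1.2"], releases["2.0"])
# ...but can move to 2.1, which has both the migration and the 2.0 features.
assert upgrade_allowed(releases["1.2"], releases["2.1"])
# And nobody can move to a lower build number.
assert not upgrade_allowed(releases["2.1"], releases["2.0"])
```

The assertions mirror the requirements above: the missing-migration check blocks the one path (1.2 to 2.0) that the build-number check alone would have allowed.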
diff --git a/docs/docs/Versioning_And_Upgrading/Compatibility.md b/docs/docs/Versioning_And_Upgrade/Compatibility.md similarity index 61% rename from docs/docs/Versioning_And_Upgrading/Compatibility.md rename to docs/docs/Versioning_And_Upgrade/Compatibility.md index 33120a87..6b5ab42e 100644 --- a/docs/docs/Versioning_And_Upgrading/Compatibility.md +++ b/docs/docs/Versioning_And_Upgrade/Compatibility.md @@ -13,11 +13,5 @@ These restrictions are enforced by the Delphix Engine, and sometimes, the plugin The Delphix Engine will enforce these rules before a newly-uploaded plugin is allowed to be installed: -* The [build number](/Versioning_And_Upgrading/Versioning/#build-number) may only move forward, not backwards. -* All [data migration IDs](/References/Glossary/#data-migration-id) that are present in the already-installed plugin must also be present on the newly-uploaded plugin. (The newly-uploaded plugin may add more data migrations, of course.) - - +* The [build number](/Versioning_And_Upgrade/Versioning.md#build-number) may only move forward, not backwards. +* All [data migration IDs](/References/Glossary.md#data-migration-id) that are present in the already-installed plugin must also be present on the newly-uploaded plugin. The newly-uploaded plugin may add more data migrations, of course. diff --git a/docs/docs/Versioning_And_Upgrade/Overview.md b/docs/docs/Versioning_And_Upgrade/Overview.md new file mode 100644 index 00000000..2193e02b --- /dev/null +++ b/docs/docs/Versioning_And_Upgrade/Overview.md @@ -0,0 +1,11 @@ +# Overview + +Once you start writing and releasing your plugin, you’ll reach a point when bug fixes or new features may require schema changes. The plugin upgrade process enables objects that have been created with a prior schema to be migrated to the newly defined schema. When this happens, a new version of the plugin must be created. 
The following few pages will walk through how versions need to change between upgrades and what needs to be written in the plugin to make sure upgrade is successful. + +## Plugin Versioning + +Like any other piece of software, plugins change over time. Every so often, there will be a new release. To keep track of the different releases, each plugin release has its own versioning information. Depending on what changes are included in a particular release, there are different rules and recommendations for how the versioning information should be changed. More information on versioning is located [here](Versioning.md). + +## Upgrade + +Upgrade is the process by which an older version of a plugin is replaced by a newer version. Depending on what has changed between the two versions, this process may also include modifying pre-existing plugin-defined objects so they conform to the new schema expected by the new version of the plugin. Information on the upgrade process can be found [here](Upgrade.md). diff --git a/docs/docs/Versioning_And_Upgrade/Replication.md b/docs/docs/Versioning_And_Upgrade/Replication.md new file mode 100644 index 00000000..1814680e --- /dev/null +++ b/docs/docs/Versioning_And_Upgrade/Replication.md @@ -0,0 +1,21 @@ +# Replication +A Delphix Engine (source) can be set up to replicate data objects to another Delphix Engine (target). Plugins built using the Virtualization SDK work seamlessly with Delphix Engine replication with no additional development required from plugin developers. + +Only a single version of a plugin can be active on a Delphix Engine at a time. We discuss some basic scenarios below. For more detailed information, refer to the [Delphix Engine Documentation](https://docs.delphix.com/docs/). + +## Replica Provisioning +Replicated dSource or VDB snapshots can be used to provision new VDBs onto a target Delphix Engine, without failing over any of the objects. 
When provisioning a VDB from a replicated snapshot: + +* A version of the plugin has to be installed on the target Delphix Engine. +* The versions of the plugins installed on the source and target Delphix Engines have to be [compatible](/Versioning_And_Upgrade/Compatibility.md). + +Once provisioned, the VDB on the target Delphix Engine will be associated with the version of the plugin installed on the target Delphix Engine; any required data migrations will be run as part of the provisioning process. For more details, refer to the [Delphix Engine Documentation](https://docs.delphix.com/docs/). + +## Replication Failover +On failover, there are three scenarios for each plugin: + +| Scenario | Outcome | -------- | ------- Source plugin **not installed** on target Delphix Engine | The plugin will be failed over and marked as `active` on the target Delphix Engine. Source plugin version **is equal to** the target plugin version | The plugin from the source will be merged with the plugin on the target Delphix Engine. Source plugin version **is not equal to** the target plugin version | The plugin from the source will be marked `inactive` on the target Delphix Engine. An `inactive` plugin can be subsequently activated, after failover, if it is [compatible](/Versioning_And_Upgrade/Compatibility.md) with the existing `active` plugin. Activating a plugin will perform an upgrade and merge the `inactive` plugin, and all its associated objects, with the `active` plugin. For more details, refer to the [Delphix Engine Documentation](https://docs.delphix.com/docs/). \ No newline at end of file diff --git a/docs/docs/Versioning_And_Upgrade/Upgrade.md b/docs/docs/Versioning_And_Upgrade/Upgrade.md new file mode 100644 index 00000000..c80ab73f --- /dev/null +++ b/docs/docs/Versioning_And_Upgrade/Upgrade.md @@ -0,0 +1,300 @@ +# Upgrade +Upgrade is the process of moving from an older version of a plugin to a newer version.
+Upgrading is not as simple as just replacing the installed plugin with a newer one. The main complication comes when the new plugin version makes changes to its [schemas](/References/Glossary.md#schema). + +Consider the case of a plugin that works with collections of text files -- the user points it to a directory tree containing text files, and the plugin syncs the files from there. + +The first release of such a plugin might have no link-related user options. So the plugin's linked source schema might define no properties at all: + +```json +"linkedSourceDefinition": { + "type": "object", + "additionalProperties" : false, + "properties" : { + } +} +``` + +And, the syncing code is very simple: +```python +@plugin.linked.pre_snapshot() +def linked_pre_snapshot(direct_source, repository, source_config): + libs.run_sync( + remote_connection = direct_source.connection, + source_directory = source_config.path + ) +``` + + +But, later, some users request a new feature -- they want to avoid syncing any backup or hidden files. So, a new plugin version is released. This time, there is a new boolean property in the linked source schema where users can elect to skip these files, if desired. 
+```json +"linkedSourceDefinition": { + "type": "object", + "additionalProperties" : false, + "required": ["skipHiddenAndBackup"], + "properties" : { + "skipHiddenAndBackup": { "type": "boolean" } + } +} +``` + +The plugin code that handles the syncing can now pay attention to this new boolean property: +```python +_HIDDEN_AND_BACKUP_SPECS = [ + "*.bak", + "*~", # Backup files from certain editors + ".*" # Unix-style hidden files +] + +@plugin.linked.pre_snapshot() +def linked_pre_snapshot(direct_source, repository, source_config): + exclude_spec = _HIDDEN_AND_BACKUP_SPECS if direct_source.parameters.skip_hidden_and_backup else [] + + libs.run_sync( + remote_connection = direct_source.connection, + source_directory = source_config.path, + exclude_paths = exclude_spec + ) +``` + +Suppose a user has an engine with linked sources created by the older version of this plugin. That is, the existing linked sources have no `skipHiddenAndBackup` property. + +If the user installs the new version of the plugin, we have a problem! The above `pre_snapshot` code from the new plugin will attempt to access the `skip_hidden_and_backup` property, which we've just seen will not exist! + +The solution to this problem is to use [data migrations](/References/Glossary.md#data-migration), explained below. + + +## Data Migrations + +### What is a Data Migration? + +Whenever a new version of a plugin is installed on a Delphix Engine, the engine needs to migrate pre-existing data from its old format (as specified by the schemas in the old version of the plugin), to its new format (as specified by the schemas in the new version of the plugin). + +A [data migration](/References/Glossary.md#data-migration) is a function that is responsible for doing this conversion. It is provided by the plugin. + +Thus, when the new plugin version is installed, the engine will call all applicable data migrations provided by the new plugin. 
This ensures that all data is always in the format expected by the new plugin. + +### A Simple Example + +Let's go back to the above example of the plugin that adds a new boolean option to allow users to avoid syncing backup and hidden files. Here is a data migration that the new plugin can provide to handle the data format change: + +```python +@plugin.upgrade.linked_source("2019.11.20") +def add_skip_option(old_linked_source): + return { + "skipHiddenAndBackup": False + } +``` + +The exact rules for data migrations are covered in detail [below](Upgrade.md#rules-for-data-migrations). Here, we'll just walk through this code line by line and make some observations. + +```python +@plugin.upgrade.linked_source("2019.11.20") +``` +The above line is a [decorator](/References/Glossary.md#decorator) that identifies the following function as a data migration. This particular migration will handle linked sources. It is given an ID of `2019.11.20` -- this controls when this migration is run in relation to other data migrations. + +```python +def add_skip_option(old_linked_source): +``` + +Note that the data migration takes an argument representing the old-format data. In this simple example, we know that there are no properties in the old-format data, so we can just ignore it. + +```python + return { + "skipHiddenAndBackup": False + } +``` + +Here, we are returning a Python dictionary representing the new format of the data. In this example, the dictionary has only one field: `skipHiddenAndBackup`. Because the old version of the plugin had no ability to skip files, we default this property to `False` to match the new schema. + + +### Rules for Data Migrations + +As shown above, a data migration receives old-format input and produces new-format output. The rules and recommendations for data migrations follow: + +#### Rules + +* Input and output are Python dictionaries, with properties named exactly as specified in the schemas. 
Note that this differs from other plugin operations, where the inputs are defined with autogenerated Python [classes](/References/Schemas_and_Autogenerated_Classes.md), and whose properties use Python-style naming. + +* Each data migration must be tagged with an ID string. This string must consist of one or more positive integers separated by periods. + +* Data migration IDs must be numerically unique. Note that `"1.2"`, `"01.02"`, and `"1.2.0.0.0"` are all considered to be identical. + +* Once released, a data migration must never be deleted. An attempted upgrade will fail if the already-installed plugin version has a data migration that does not appear in the to-be-installed version. + +* At upgrade time, the engine will find the set of new migrations provided by the new version that are not already part of the already-installed version. Each of these migrations will then be run, in the order specified below. + +* After running all applicable migrations, the engine will confirm that the resultant data conforms to the new version's schemas. If not, the upgrade will fail. + +* Note that there is no requirement or guarantee that the input or output of any particular data migration will conform to a schema. We only guarantee that the input to the **first** data migration conforms to the schema of the already-installed plugin version. And, we only require that the output of the **final** data migration conforms to the schema of the new plugin version. + +* Data migrations are run in the order specified by their IDs. The ordering is numerical, not lexicographical. Thus `"1"` would run before `"2"`, which would run before `"10"`. + +* Data migrations have no access to [Platform Libraries](/References/Platform_Libraries.md) or remote hosts. For example, if a data migration attempts to use [run_bash](/References/Platform_Libraries.md#run_bash), the upgrade will fail. 
+ +* Note that the above rules imply that at least one data migration is required any time a schema change is made that would invalidate any data produced using a previous version of the plugin. For example: adding a `"required"` property to the new schema. + + +#### Recommendations +* We recommend using a "Year.Month.Date" format like `"2019.11.04"` for migration IDs. You can use trailing integers as necessary (e.g. use `"2019.11.04.5"` if you need something to be run between `"2019.11.04"` and `"2019.11.05"`). + +* Even though they follow similar naming rules, migration IDs are not the same thing as plugin versions. We do not recommend using your plugin version in your migration IDs. + +* We recommend using small, single-purpose data migrations. That is, if you end up making four schema changes over the course of developing a new plugin version, we recommend writing four different data migrations, one for each change. + +### Data Migration Example + +Here is a very simple data migration. +```python +@plugin.upgrade.repository("2019.12.15") +def add_new_flag_to_repo(old_repository): + new_repository = dict(old_repository) + new_repository["useNewFeature"] = False + return new_repository +``` + +### Debugging Data Migration Problems + +During the process of upgrading to a new version, the Delphix Engine will run all applicable data migrations, and then ensure that the resulting object matches the new schema. But, what if there is a bug, and the resulting object does **not** match the schema? + +#### Security Concerns Prevent Detailed Error Messages +One problem here is that the Delphix Engine is limited in the information that it can provide in the error message. Ideally, the engine would say exactly what was wrong with the object (e.g.: "The field `port` has the value `15`, but the schema says it has to have a value between `256` and `1024`"). + +But, the Delphix Engine cannot do this for security reasons. 
Ordinarily, the Delphix Engine knows which fields contain sensitive information, and can redact such fields from error messages. But, the only reason the Delphix Engine has that knowledge is because the schema provides that information. If an object does +**not** conform to the schema, then the Delphix Engine can't know what is sensitive and what isn't. + +Therefore, the error message here might lack the detail necessary to debug the problem. + +#### One Solution: Temporary Logging + +During development of a new plugin version, you may find yourself trying to find and fix such a bug. +One technique is to use temporary logging. + +For example, while you are trying to locate and fix the bug, you could put a log statement at the very end of each of your data migrations, like so: +``` + logger.debug("Migration 2010.03.01 returning {}".format(new_object)) + return new_object +``` + +See the [Logging](/References/Logging.md) section for more information about how logging works. 

From the logs, you'll be able to see exactly what each migration is returning. From there, hopefully the problem will become apparent. As a supplemental tool, consider pasting these results (along with your schema) into an online JSON validator for more information. + +!!! warning + It is **very important** that you only use logging as a temporary debugging strategy. **Such logging must be removed before you release the plugin to end users**. If this logging ends up in your end product, it could cause a serious security concern. Please see our [sensitive data best practices](/Best_Practices/Sensitive_Data.md) for more information. + +### When Data Migrations Are Insufficient + +New versions of plugins often require some modification of data that was written using an older version of the same plugin. Data migrations handle this modification. Unfortunately, data migrations cannot always fully handle all possible upgrade scenarios by themselves. 
+ +For example, a new plugin version might want to add a new required field to one of its schemas. But, the correct value for this new field might not be knowable while the upgrade is underway -- perhaps it must be entered by the user, or perhaps it would require automatic discovery to be rerun. + +Such a situation will require some user intervention after the upgrade. + +In all cases, of course you will want to **clearly document** to your users that there will be extra work required, so they can make sure they know what they are getting into before they decide to upgrade. + +!!! tip + It should also be said that you should try to avoid cases like this. As much as possible, try to make your post-upgrade plugin function with no user intervention. Treat user intervention as a last resort. + +The recommended strategy here is to arrange for the affected objects to be in an "invalid" state, and for your plugin code to detect this state, and throw errors when the objects are used. + +For such a situation, we recommend the following process: + +* Make your schema changes so that the affected property can be set in such a way that plugin code can identify it as being invalid. Typically this is done by allowing for some "sentinel" value. This may require you to have a less-strict schema definition than you might otherwise want. +* In your data migrations, make sure the affected properties are indeed marked invalid. +* In any plugin code that needs to use these properties, first check them for validity. If they are invalid, then raise an error that explains the situation to the user, and tells them what steps they need to take. + +Following are two examples of schema changes that need extra user intervention after upgrade. One will require a rediscovery, and the other will require the user to enter information. + +#### Autodiscovery Example + +Suppose that a new plugin version adds a new required field to its repository schema. 
This new field specifies a full path to a database installation. The following listing shows what we'd ideally like the new repository schema to look like (`installationPath` is the new required property): + +``` +"repositoryDefinition": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "installationPath": { "type": "string", "format": "unixpath"} + }, + "required": ["name", "installationPath"], + "nameField": "name", + "identityFields": ["name"] +} +``` + +The new plugin's autodiscovery code will know how to find this full path. Therefore, any repositories that are discovered (or rediscovered) after the upgrade will have this path filled in correctly. + +But, there may be repositories that were discovered before the upgrade. The data migrations will have to ensure that *some* value is provided for this new field. However, a data migration will not be able to determine what the correct final value is. + +One way to handle this is to modify the schema to allow a special value to indicate that the object needs to be rediscovered. In this example, we'll change the schema from the ideal version above, removing the `unixpath` constraint on this string: +``` +"installationPath": { "type": "string" } +``` + +Now, our data migration can set this property to some special sentinel value that will never be mistaken for an actual installation path. +``` +_REDISCOVERY_TOKEN = "###_REPOSITORY_NEEDS_REDISCOVERY_###" + +@plugin.upgrade.repository("2020.02.04.01") +def repo_path(old_repository): + # We need to add in a repository path, but there is no way for us to know + # what the correct path is here, so we cannot set this to anything useful. + # Instead, we'll set a special sentinel value that will indicate that the + # repository is unusable until the remote host is rediscovered. 
+ old_repository["installationPath"] = _REDISCOVERY_TOKEN + return old_repository +``` + +Now, wherever the plugin needs to use this path, we'll need to check for this sentinel value, and error out if we find it. For example, we might need a valid path during the `configure` operation: +``` +@plugin.virtual.configure() +def configure(virtual_source, snapshot, repository): + if repository.installation_path == _REDISCOVERY_TOKEN: + # We cannot use this repository as-is -- it must be rediscovered. + msg = 'Unable to use repository "{}" because it has not been updated ' \ + 'since upgrade. Please re-run discovery and try again.' + raise UserError(msg.format(repository.name)) + + # ... actual configure code goes here +``` + +#### Manual Entry + +Above, we looked at an example where the plugin could handle filling in new values for a new field at discovery time, so the user was simply asked to rediscover. + +Sometimes, though, users themselves will have to be the ones to supply new values. + +Suppose that a new plugin version wants to add a required field to the `virtualSource` object. This new property specifies which port the database should be accessible on. Ideally, we might want our new field to look like this: + +``` +"port": {"type": "integer", "minimum": 1024, "maximum": 65535} +``` + +Again, however, the data migration will not know which value is correct here. This is something the user must decide. Still, the data migration must provide *some* value. As before, we'll change the schema a bit from what would be ideal: + +``` +"port": {"type": "integer", "minimum": 0, "maximum": 65535} +``` + +Now, our data migration can use the value `0` as code for "this VDB needs user intervention". + +``` +@plugin.upgrade.virtual_source("2020.02.04.02") +def add_dummy_port(old_virtual_source): + # Set the "port" property to 0 to act as a placeholder. 
+ old_virtual_source["port"] = 0 + return old_virtual_source +``` + +As with the previous example, our plugin code will need to look for this special value, and raise an error so that the user knows what to do. This example shows the [Virtual Source Reconfigure](/References/Plugin_Operations.md#virtual-source-reconfigure) operation, but of course, similar code will be needed anywhere else that the new `port` property is required. + +``` +@plugin.virtual.reconfigure() +def virtual_reconfigure(virtual_source, repository, source_config, snapshot): + if virtual_source.parameters.port == 0: + raise UserError('VDB "{}" cannot function properly. Please choose a ' \ + 'port number for this VDB to use.'.format(virtual_source.parameters.name)) + + # ... actual reconfigure code goes here +``` diff --git a/docs/docs/Versioning_And_Upgrading/Versioning.md b/docs/docs/Versioning_And_Upgrade/Versioning.md similarity index 52% rename from docs/docs/Versioning_And_Upgrading/Versioning.md rename to docs/docs/Versioning_And_Upgrade/Versioning.md index 6155a74d..4f8e75db 100644 --- a/docs/docs/Versioning_And_Upgrading/Versioning.md +++ b/docs/docs/Versioning_And_Upgrade/Versioning.md @@ -14,24 +14,28 @@ This field is intended only for use by the end user. The Delphix Engine does not Examples might be "5.3.0", "2012B", "MyPlugin Millennium Edition, Service Pack 3", "Playful Platypus" or "Salton City". -The external version is specified using the `externalVersion` property in your [plugin config](/References/Plugin_Config/) file. +The external version is specified using the `externalVersion` property in your [plugin config](/References/Plugin_Config.md) file. + +!!! tip + Use an external version that makes it easier for end users to determine newer vs older plugins. ### Build Number Unlike "external version", this field is intended to convey information to the Delphix Engine. This is a string of integers, separated by periods. Examples would be "5.3.0", "7", "5.3.0.0.0.157". 
-The Delphix Engine uses the build number to guard against users trying to "downgrade" their plugin to an older, incompatible version. So, if a user has build number "3.4.1" installed, then they may not install a version with a build number like "2.x.y", "3.3.y" or "3.4.0". +The Delphix Engine uses the build number to guard against end users trying to "downgrade" their plugin to an older, incompatible version. So, if a user has build number "3.4.1" installed, then they may not install a version with a build number like "2.x.y", "3.3.y" or "3.4.0". + +The build number is specified using the `buildNumber` property in your [plugin config](/References/Plugin_Config.md) file. -The build number is specified using the `buildNumber` property in your [plugin config](/References/Plugin_Config/) file. +This field is required to be a string. You might need to enclose your build number in quotes in order to prevent YAML from interpreting the field as a number. Examples: -This field is required to be a string. You might need to enclose your build number in quotes in order to prevent YAML from interpreting the field as a number. For example: -``` - buildNumber: "1" # OK: The quotes mean this is a string. - buildNumber: 1 # BAD: YAML will interpret this as an integer. - buildNumber: "1.2" # OK: The quotes mean this is a string. - buildNumber: 1.2 # BAD: YAML will interpret this as a floating-point number. - buildNumber: 1.2.3 # OK: YAML treats this as a string, since it cannot be a number. -``` +`buildNumber` | Allowed | Details +-------- | ---- | ----------- +1 | No | YAML will interpret this as an integer. +1.2 | No | YAML will interpret this as a floating-point number. +"1" | Yes | The quotes mean this is a string. +"1.2" | Yes | The quotes mean this is a string. +1.2.3 | Yes | YAML treats this as a string, since it cannot be a number. 
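For instance, a hypothetical excerpt from a plugin config file with a correctly quoted build number might look like this (the version values below are invented for illustration; only the two version-related properties discussed on this page are shown):

```yaml
# Hypothetical plugin config excerpt
externalVersion: "MyPlugin 2020 R1"   # free-form; only shown to end users
buildNumber: "3.1"                    # quoted so YAML parses it as a string
```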
#### Build Number Format Rules @@ -42,33 +46,37 @@ Your build number must be a string, conforming to these rules: * Build numbers are sortable numerically, with earlier numbers having more significance than later numbers. So, "2.0" comes after "1.99999", and "1.10" comes after "1.2". * The Delphix Engine will never allow installation of a plugin with a build number that is ordered before the already-installed build number. -Please also see the [App-Style vs. Enterprise-Style section](#app-style-vs-enterprise-style) below. We generally recommend using a single integer build number for app-style development. Build numbers need to have multiple parts if you are doing enterprise-style development. +!!! tip + You can upload a plugin with the same `buildNumber` as the installed plugin. However, this should only be done while a plugin is being developed. Plugin releases for end users should never have the same `buildNumber`. - +Please also see the [App-Style vs. Enterprise-Style section](#app-style-vs-enterprise-style) below. We generally recommend using a single integer build number for app-style development. Build numbers need to have multiple parts if you are doing enterprise-style development. -## Versioning Strategies -### App-Style vs. Enterprise-Style +## Release Strategies There are two main strategies for releasing software: -* You can use an "app-style" release strategy. Here, all users are expected to use the latest available version of the software. Most consumer software works this way today -- websites, phone apps, etc. -* You can use an "enterprise-style" release strategy. Here, you might distinguish "major" releases of your software from "minor" releases. You might expect some customers to continue to use older major releases for a long time, even after a new major release comes out. This strategy is often used for software like operating systems and DBMSs, where upgrading can cause significant disruption. 
- -An app-style strategy is much simpler, but also more limiting: +### "App-style" Release Strategy +Here, all users are expected to use the latest available version of the software. Most consumer software works this way today -- websites, phone apps, etc. An app-style strategy is much simpler, but also more limiting: * At any time, there is only one branch under active development. -* Customers that want bugfixes must update to the latest version. +* Customers that want bugfixes must upgrade to the latest version. * The plugin's build number can be a simple integer that is incremented with each new release. -An enterprise-style strategy is more flexible, but also more cumbersome: +### "Enterprise-style" Release Strategy +Here, you might distinguish "major" releases of your software from "minor" releases. You might expect some customers to continue to use older major releases for a long time, even after a new major release comes out. This strategy is often used for software like operating systems and DBMSs, where upgrading can cause significant disruption. An enterprise-style strategy is more flexible, but also more cumbersome: * There may be multiple branches under active development at any time. Typically, there is one branch for every "major release" that is still being supported. This requires careful coordination to make sure that each new code change ends up on the correct branch (or branches).
-* The plugin may need to implement special logic to prevent certain incompatible upgrades. More details [here](/Versioning_And_Upgrading/Compatibility/#plugin-defined-compatibility)
+
+If you are using this strategy, read more [here](Backports_And_Hotfixes.md) about how to deal with backports and hotfixes.

You may use whichever of these strategies works best for you. The SDK and the Delphix Engine support either strategy. You can even change your mind later and switch to the other strategy.
+
+## Recommendations
+
+* Build your plugin with the newest Virtualization SDK version available.
+* Only publish one artifact built for a given official version of the plugin.
+* The official release of a plugin should not use the same build number as a development build.
+* Use an [external version](#external-version) that makes it easy to tell which of two plugin versions is newer.
+* Publish a plugin version compatibility matrix that lists each plugin version, the Virtualization SDK it was built with, and the Delphix Engine version(s) it supports.

diff --git a/docs/docs/Versioning_And_Upgrading/Overview.md b/docs/docs/Versioning_And_Upgrading/Overview.md
deleted file mode 100644
index 66643ec2..00000000
--- a/docs/docs/Versioning_And_Upgrading/Overview.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Overview
-
-Once you start writing and releasing your plugin, you’ll reach a point when bug fixes or new features will require schema changes. The plugin upgrade process enables objects that were created with a prior schema to be migrated to the newly defined schema. When this happens, a new version of the plugin must be created. The following pages walk through how versions need to change between upgrades and what needs to be written in the plugin to make sure the upgrade is successful.
-
-## Plugin Versioning
-
-Like any other piece of software, plugins change over time. Every so often, there will be a new release.
To keep track of the different releases, each plugin release has its own versioning information. Depending on what changes are included in a particular release, there are different rules and recommendations for how the versioning information should be changed. More information on versioning is located [here](Versioning.md).
-
-## Upgrading
-
-Upgrading is the process by which an older version of a plugin is replaced by a newer version. Depending on what has changed between the two versions, this process may also include modifying pre-existing plugin-defined objects so that they conform to the new schema expected by the new version of the plugin. Information on the upgrade process can be found [here](Upgrading.md).

diff --git a/docs/docs/Versioning_And_Upgrading/Special_Concerns/.pages b/docs/docs/Versioning_And_Upgrading/Special_Concerns/.pages
deleted file mode 100644
index c7cd2fbb..00000000
--- a/docs/docs/Versioning_And_Upgrading/Special_Concerns/.pages
+++ /dev/null
@@ -1,3 +0,0 @@
-arrange:
-  - Backports_And_Hotfixes.md
-  - Replication.md

diff --git a/docs/docs/Versioning_And_Upgrading/Special_Concerns/Backports_And_Hotfixes.md b/docs/docs/Versioning_And_Upgrading/Special_Concerns/Backports_And_Hotfixes.md
deleted file mode 100644
index 6abdf3bc..00000000
--- a/docs/docs/Versioning_And_Upgrading/Special_Concerns/Backports_And_Hotfixes.md
+++ /dev/null
@@ -1,224 +0,0 @@
-# Backports and Hotfixes
-
-If your plugin uses an ["enterprise-style"](/Versioning_And_Upgrading/Versioning/#app-style-vs-enterprise-style) release strategy, then you'll probably want to occasionally provide new "minor" versions that build atop older versions.
-
-Code changes that are applied atop old releases are usually called "backports". Sometimes, they are also called "hotfixes", if the change is specifically created for a single user.
-
-These releases present a problem: although they are built atop an older code branch, they are still newer than some releases from a newer code branch.
Below, we'll walk through how we prevent users from "upgrading" to a new-branch release that would be incompatible with an installed old-branch release. - -### Motivating Example -Let's take a look at an example of a possible timeline of releases. - -> **February**: The initial version of a plugin is released, with build number "1.0". This is a simple plugin that uses a simple strategy for syncing dSources. - -> **April**: A new version is released, with build number "1.1". This adds some bugfixes and adds some small optimizations to improve the performance of syncing. - -> **August**: A new version is released, with build number "2.0". This uses a completely new syncing strategy that is far more sophisticated and efficient. - -Let's assume that not all users will want to upgrade to the 2.0 release immediately. So, even months later, you expect to have a significant number of users still on version 1.0 or 1.1. - -Later, in October, a bug is found which impacts all releases. This bug is important enough that you want to fix it for **all** of your customers (not just the ones using 2.0). - -Here are the behaviors we need: - -* Our 2.0 customers should be able to get the new bugfix without giving up any of the major new features that were part of 2.0. -* Our 1.0 and 1.1 customers should be able to get the new bugfix without also needing to accept all the major new features that were part of 2.0. -* Once a customer has received the bugfix, it should be impossible to lose the bugfix in an upgrade. - -### Strategy - -There are two general strategies you can use here: - -* You can write [compatibility-checking logic](/Versioning_And_Upgrading/Compatibility/#plugin-defined-compatibility) that explicitly prevents any attempted 2.0 upgrade that would mean losing the bugfix. -* You can include a [data migration](/Versioning_And_Upgrading/Upgrading/#data-migrations) along with your bugfix. If your bugfix involves a schema change, you will have to do this anyways. 
If not, you can still include a data migration that simply does nothing. If a user with the bugfix attempts to "upgrade" to 2.0, the Delphix Engine will prevent it, because the 2.0 release does not include this migration.
-
-You would typically follow these steps:
-
-* Fix the bug by applying a code change atop the 2.0 code.
-* Make a new release of the plugin that includes that bugfix, with the build number "2.1". If you are using the data migration strategy, then include the new data migration in your 2.1 release.
-* Separately, apply the same bugfix atop the 1.1 code. Note: depending on how code changed between 1.1 and 2.0, this 1.1-based bugfix might not contain the exact same code as we used with 2.0.
-* If you're using the custom compatibility strategy, then write the compatibility logic alongside that 1.1-based bugfix.
-* Make another new release of the plugin, this time with build number "1.2". This release includes the 1.1-based bugfix. It should also include either the new data migration or the new compatibility logic.
-
-
-This meets our requirements:
-
-* Our 2.0 customers can install version 2.1. This gives them the bugfix, and keeps all the features from 2.0.
-* Our 1.0 and 1.1 customers can install version 1.2. This gives them the bugfix without any of the 2.0 features.
-* It is impossible for a 2.1 customer to lose the bugfix, because the Delphix Engine will not allow the build number to go "backwards". So, a 2.1 customer will not be able to install versions 2.0, 1.1, or 1.0.
-* It is also impossible for a 1.2 customer to lose the bugfix.
-    * They cannot install 1.0 or 1.1 because the build number is not allowed to decrease.
-    * They also cannot install 2.0. Either the compatibility logic, or the data migration, will prevent this.
-
-Note that a 1.2 customer can still upgrade to 2.1 at any time. This will allow them to keep the bugfix, and also take advantage of the new features that were part of 2.0.
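The requirements above can be condensed into a small pure-Python sketch of the rule the engine enforces under the data-migration strategy: an upgrade candidate must contain every data migration that the installed version already has. The release names and migration ID below are hypothetical, and build-number ordering (which the engine also enforces) is omitted for brevity.

```python
# Hypothetical sketch: which upgrades succeed under the data-migration
# strategy. The engine refuses a candidate that lacks a migration the
# installed version already ran.

RELEASES = {
    "1.1": set(),
    "1.2": {"2019.10.30"},  # 1.2 ships the bugfix's (possibly no-op) migration
    "2.0": set(),           # 2.0 predates the bugfix
    "2.1": {"2019.10.30"},  # 2.1 ships the bugfix too
}

def upgrade_allowed(installed, candidate):
    # Every migration in the installed version must exist in the candidate.
    return RELEASES[installed] <= RELEASES[candidate]

assert upgrade_allowed("1.1", "2.0")      # pre-bugfix users may take 2.0
assert not upgrade_allowed("1.2", "2.0")  # would silently lose the bugfix
assert upgrade_allowed("1.2", "2.1")      # keeps the bugfix, gains 2.0 features
```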
-
-
-## When Data Migrations Are Insufficient
-
-New versions of plugins often require some modification of data that was written using an older version of the same plugin. Data migrations handle this modification. Unfortunately, data migrations cannot always fully handle all possible upgrade scenarios by themselves.
-
-For example, a new plugin version might want to add a new required field to one of its schemas. But, the correct value for this new field might not be knowable while the upgrade is underway -- perhaps it must be entered by the user, or perhaps it would require automatic discovery to be rerun.
-
-Such a situation will require some user intervention after the upgrade.
-
-In all cases, of course you will want to **clearly document** to your users that there will be extra work required, so they can make sure they know what they are getting into before they decide to upgrade.
-
-You should try to avoid cases like this. As much as possible, make your post-upgrade plugin function with no user intervention, and treat user intervention as a last resort.
-
-### Forcing User Intervention After Plugin Upgrade
-
-The recommended strategy here is to arrange for the affected objects to be in an "invalid" state, and for your plugin code to detect this state and raise errors when the objects are used.
-
-
-For such a situation, we recommend the following process:
-
-* Make your schema changes so that the affected property can be set in such a way that plugin code can identify it as being invalid. Typically this is done by allowing for some "sentinel" value. This may require you to have a less-strict schema definition than you might otherwise want.
-* In your data migrations, make sure the affected properties are indeed marked invalid.
-* In any plugin code that needs to use these properties, first check them for validity.
If they are invalid, then raise an error that explains the situation to the user, and tells them what steps they need to take. - -Following are two examples of schema changes that need extra user intervention after upgrade. One will require a rediscovery, and the other will require the user to enter information. - -#### Autodiscovery Example - -Suppose that a new plugin version adds a new required field to its repository schema. This new field specifies a full path to a database installation. The following listing shows what we'd ideally like the new repository schema to look like (`installationPath` is the new required property) - -``` -"repositoryDefinition": { - "type": "object", - "properties": { - "name": { "type": "string" }, - "installationPath": { "type": "string", "format": "unixpath"} - }, - "required": ["name", "installationPath"], - "nameField": "name", - "identityFields": ["name"] -} -``` - -The new plugin's autodiscovery code will know how to find this full path. Therefore, any repositories that are discovered (or rediscovered) after the upgrade will have this path filled in correctly. - -But, there may be repositories that were discovered before the upgrade. The data migrations will have to ensure that *some* value is provided for this new field. However, a data migration will not be able to determine what the correct final value is. - -One way to handle this is to modify the schema to allow a special value to indicate that the object needs to be rediscovered. In this example, we'll change the schema from the ideal version above, removing the `unixpath` constraint on this string: -``` -"installationPath": { "type": "string" } -``` - -Now, our data migration can set this property to some special sentinel value that will never be mistaken for an actual installation path. 
-```
-_REDISCOVERY_TOKEN = "###_REPOSITORY_NEEDS_REDISCOVERY_###"
-
-@plugin.upgrade.repository("2020.02.04.01")
-def repo_path(old_repository):
-    # We need to add in a repository path, but there is no way for us to know
-    # what the correct path is here, so we cannot set this to anything useful.
-    # Instead, we'll set a special sentinel value that will indicate that the
-    # repository is unusable until the remote host is rediscovered.
-    old_repository["installationPath"] = _REDISCOVERY_TOKEN
-    return old_repository
-```
-
-Now, wherever the plugin needs to use this path, we'll need to check for this sentinel value, and error out if we find it. For example, we might need a valid path during the `configure` operation:
-```
-@plugin.virtual.configure()
-def configure(virtual_source, snapshot, repository):
-    if repository.installation_path == _REDISCOVERY_TOKEN:
-        # We cannot use this repository as-is -- it must be rediscovered.
-        msg = 'Unable to use repository "{}" because it has not been updated ' \
-              'since upgrade. Please re-run discovery and try again.'
-        raise UserError(msg.format(repository.name))
-
-    # ... actual configure code goes here
-```
-
-#### Manual Entry
-
-Above, we looked at an example where the plugin could handle filling in new values for a new field at discovery time, so the user was simply asked to rediscover.
-
-Sometimes, though, users themselves will have to be the ones to supply new values.
-
-Suppose that a new plugin version wants to add a required field to the `virtualSource` object. This new property will specify which port the database should be accessible on. Ideally, we might want our new field to look like this:
-
-```
-"port": {"type": "integer", "minimum": 1024, "maximum": 65535}
-```
-
-Again, however, the data migration will not know which value is correct here. This is something the user must decide. Still, the data migration must provide *some* value.
As before, we'll change the schema a bit from what would be ideal: - -``` -"port": {"type": "integer", "minimum": 0, "maximum": 65535} -``` - -Now, our data migration can use the value `0` as code for "this VDB needs user intervention". - -``` -@plugin.upgrade.virtual_source("2020.02.04.02") -def add_dummy_port(old_virtual_source): - # Set the "port" property to 0 to act as a placeholder. - old_virtual_source["port"] = 0 - return old_virtual_source -``` - -As with the previous example, our plugin code will need to look for this special value, and raise an error so that the user knows what to do. This example shows the status operation, but of course, similar code will be needed anywhere else that the new `port` property is required. - -``` -@plugin.virtual.status() -def virtual_status(virtual_source, repository, source_config): - if virtual_source.parameters.port == 0: - raise UserError('VDB "{}" cannot function properly. Please choose a ' \ - 'port number for this VDB to use.'.format(virtual_source.parameters.name)) - - # ... actual status checking code goes here -``` - -## Debugging Data Migration Problems - -During the process of upgrading to a new version, the Delphix Engine will run -all applicable data migrations, and then ensure that the resulting object -matches the new schema. But, what if there is a bug, and the resulting object -does **not** match the schema? - -### Security Concerns Prevent Detailed Error Messages -One problem here is that the Delphix Engine is limited in the information that -it can provide in the error message. Ideally, the engine would say exactly what -was wrong with the object (e.g.: "The field `port` has the value `15`, but the -schema says it has to have a value between `256` and `1024`"). - -But, the Delphix Engine cannot do this for security reasons. Ordinarily, the -Delphix Engine knows which fields contain sensitive information, and can redact -such fields from error messages. 
But, the only reason the engine has that knowledge is because the schema provides that information. If an object does **not** conform to the schema, then the Delphix Engine can't know what is sensitive and what isn't.
-
-Therefore, the error message here might lack the detail necessary to debug the problem.
-
-### One Solution: Temporary Logging
-
-During development of a new plugin version, you may find yourself trying to find and fix such a bug.
-
-One technique is to use temporary logging.
-
-For example, while you are trying to locate and fix the bug, you could put a log statement at the very end of each of your data migrations, like so:
-```
-    logger.debug("Migration 2010.03.01 returning {}".format(new_object))
-    return new_object
-```
-
-See the [Logging](/References/Logging.md) section for more information about how logging works.
-
-From the logs, you'll be able to see exactly what each migration is returning. From there, hopefully the problem will become apparent. As a supplemental tool, consider pasting these results (along with your schema) into an online JSON validator for more information.
-
-Note: It is **very important** that you only use logging as a temporary debugging strategy. **Such logging must be removed before you release the plugin to end users**. If this logging ends up in your end product, it could cause a serious security concern. Please see our [sensitive data best practices](/Best_Practices/Sensitive_Data.md) for more information.

diff --git a/docs/docs/Versioning_And_Upgrading/Special_Concerns/Replication.md b/docs/docs/Versioning_And_Upgrading/Special_Concerns/Replication.md
deleted file mode 100644
index c96081c6..00000000
--- a/docs/docs/Versioning_And_Upgrading/Special_Concerns/Replication.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Replication
-
-If the Delphix Engine is set up to replicate data objects to a target engine, there are a few things to consider when uploading a new plugin or upgrading an existing plugin.
These considerations are in addition to what a normal upload needs to take into account.
-
-## Delphix Engine Rules
-
-In some cases, a plugin may need an upgrade after a failover operation is done on the target engine.
-
-- After failover, the replicated plugin (now marked as 'inactive') needs an upgrade, since its version is lower than the version of the plugin installed on the target Delphix Engine.
-- After failover, the live plugin (a.k.a. the 'active' plugin) needs an upgrade, since its version is either lower than or not compatible with the replicated plugin.
-
-!!! info
-    If the replicated and target plugin versions are compatible, failover will automatically merge the plugins and associated objects and there will only be one plugin active.
-
-!!! info
-    In some rare cases, a replicated plugin is incompatible with the target plugin even though the target plugin is of a higher version - e.g. the target plugin has a data migration that is not present in the replicated plugin, or the target plugin is built with a lower version of the Virtualization SDK. In such cases, a multi-step upgrade might help.
-
-## Inactive plugin needs upgrade
-In most cases, the replicated plugin upgrade is a user-initiated operation on the Delphix Engine. However, this operation may fail if the plugins are not compatible. In such a case, a fault (and exception) on the Delphix Engine would indicate that the replicated plugin needs an upgrade and the replicated plugin is marked 'inactive'. As a plugin author, it is a good idea to document the compatibility between the different official releases of a plugin so that a compatible plugin is uploaded.
-
-## Active plugin needs upgrade
-In some cases, an active plugin on the target Delphix Engine would need an upgrade to make it compatible with the replicated plugin.
Again, as in the case of upgrading a replicated plugin, following [these](/Best_Practices/Replication/Managing_Versions_With_Replication.md) recommendations will help the end user choose the right plugin version to upload. diff --git a/docs/docs/Versioning_And_Upgrading/Upgrading.md b/docs/docs/Versioning_And_Upgrading/Upgrading.md deleted file mode 100644 index 3dbd8a76..00000000 --- a/docs/docs/Versioning_And_Upgrading/Upgrading.md +++ /dev/null @@ -1,162 +0,0 @@ -# Upgrading - -Upgrading is the process of moving from an older version of a plugin to a newer version. - -# Motivating Example - -Upgrading is not as simple as just replacing the installed plugin with a newer one. The main complication comes when the new plugin version makes changes to its [schemas](/References/Glossary.md#schema). - -Consider the case of a plugin that works with collections of text files -- the user points it to a directory tree containing text files, and the plugin syncs the files from there. - -The first release of such a plugin might have no link-related user options. So the plugin's linked source schema might define no properties at all: - -```json -"linkedSourceDefinition": { - "type": "object", - "additionalProperties" : false, - "properties" : { - } -} -``` - -And, the syncing code is very simple: -```python -@plugin.linked.pre_snapshot() -def linked_pre_snapshot(direct_source, repository, source_config): - libs.run_sync( - remote_connection = direct_source.connection, - source_directory = source_config.path - ) -``` - - -But, later, some users request a new feature -- they want to avoid syncing any backup or hidden files. So, a new plugin version is released. This time, there is a new boolean property in the linked source schema where users can elect to skip these files, if desired. 
-```json -"linkedSourceDefinition": { - "type": "object", - "additionalProperties" : false, - "required": ["skipHiddenAndBackup"], - "properties" : { - "skipHiddenAndBackup": { "type": "boolean" } - } -} -``` - -The plugin code that handles the syncing can now pay attention to this new boolean property: -```python -_HIDDEN_AND_BACKUP_SPECS = [ - "*.bak", - "*~", # Backup files from certain editors - ".*" # Unix-style hidden files -] - -@plugin.linked.pre_snapshot() -def linked_pre_snapshot(direct_source, repository, source_config): - exclude_spec = _HIDDEN_AND_BACKUP_SPECS if direct_source.parameters.skip_hidden_and_backup else [] - - libs.run_sync( - remote_connection = direct_source.connection, - source_directory = source_config.path, - exclude_paths = exclude_spec - ) -``` - -Suppose a user has an engine with linked sources created by the older version of this plugin. That is, the existing linked sources have no `skipHiddenAndBackup` property. - -If the user installs the new version of the plugin, we have a problem! The above `pre_snapshot` code from the new plugin will attempt to access the `skip_hidden_and_backup` property, which we've just seen will not exist! - -The solution to this problem is to use [data migrations](/References/Glossary.md#data-migration), explained below. - - -# Data Migrations - -## What is a Data Migration? - -Whenever a new version of a plugin is installed to a Delphix Engine, the engine needs to convert pre-existing data from its old format (as specified by the schemas in the old version of the plugin), to its new format (as specified by the schemas in the new version of the plugin). - -A [data migration](/References/Glossary.md#data-migration) is a function that is responsible for doing this conversion. It is provided by the plugin. - -Thus, when the new plugin version is installed, the engine will call all applicable data migrations provided by the new plugin. 
This ensures that all data is always in the format expected by the new plugin.
-
-## A Simple Example
-
-Let's go back to the above example of the plugin that adds a new boolean option to allow users to avoid syncing backup and hidden files. Here is a data migration that the new plugin can provide to handle the data format change:
-
-```python
-@plugin.upgrade.linked_source("2019.11.20")
-def add_skip_option(old_linked_source):
-    return {
-        "skipHiddenAndBackup": False
-    }
-```
-
-The exact rules for data migrations are covered in detail below. Here, we'll just walk through this code line by line and make some observations.
-
-```python
-@plugin.upgrade.linked_source("2019.11.20")
-```
-The above line is a [decorator](/References/Glossary.md#decorator) that identifies the following function as a data migration. This particular migration will handle linked sources. It is given an ID of `2019.11.20` -- this controls when this migration is run in relation to other data migrations.
-
-```python
-def add_skip_option(old_linked_source):
-```
-
-Note that the data migration takes an argument representing the old-format data. In this simple example, we know that there are no properties in the old-format data, so we can just ignore it.
-
-```python
-    return {
-        "skipHiddenAndBackup": False
-    }
-```
-
-Here, we are returning a Python dictionary representing the new format of the data. In this example, the dictionary has only one field: `skipHiddenAndBackup`. Because the old version of the plugin had no ability to skip files, we set this property to `False` to match.
-
-
-## Rules for Data Migrations
-
-As shown above, a data migration receives old-format input and produces new-format output. The rules and recommendations for data migrations follow:
-
-### RULES
-
-* Input and output are Python dictionaries, with properties named exactly as specified in the schemas.
Note that this differs from other plugin operations, where the inputs are defined with Python classes, and whose properties use Python-style naming.
-
-* Each data migration must be tagged with an ID string. This string must consist of one or more positive integers separated by periods.
-
-* Data migration IDs must be numerically unique. Note that `"1.2"`, `"01.02"`, and `"1.2.0.0.0"` are all considered to be identical.
-
-* Once released, a data migration must never be deleted. An attempted upgrade will fail if the already-installed version has a data migration that does not appear in the to-be-installed version.
-
-* At upgrade time, the engine will find the set of new migrations provided by the new version that are not already part of the already-installed version. Each of these migrations will then be run, in the order specified below.
-
-* After running all applicable migrations, the engine will confirm that the resultant data conforms to the new version's schemas. If not, the upgrade will fail.
-
-* Note that there is no requirement or guarantee that the input or output of any particular data migration will conform to a schema. We only guarantee that the input to the **first** data migration conforms to the schema of the already-installed plugin version. And, we only require that the output of the **final** data migration conforms to the schema of the new plugin version.
-
-* Data migrations are run in the order specified by their IDs. The ordering is numerical, not lexicographical. Thus `"1"` would run before `"2"`, which would run before `"10"`.
-
-* Data migrations have no access to remote hosts. If a data migration attempts to use `run_bash` or similar, the upgrade will fail.
-
-* Note that the above rules imply that at least one data migration is required any time a schema change is made that would invalidate any data produced using a previous version of the plugin.
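The ID-ordering and uniqueness rules above can be illustrated with a short Python sketch. The `migration_key` helper is hypothetical (the Delphix Engine implements this logic internally); it shows why `"10"` runs after `"2"`, and why `"1.2"`, `"01.02"`, and `"1.2.0.0.0"` collide.

```python
# Illustrative sketch of numeric migration-ID ordering: parse each
# dot-separated part as an integer and drop trailing zeros, so that
# "1.2", "01.02", and "1.2.0.0.0" are numerically identical.

def migration_key(migration_id):
    parts = [int(part) for part in migration_id.split(".")]
    while parts and parts[-1] == 0:
        parts.pop()
    return tuple(parts)

ids = ["10", "2", "2019.11.04.5", "1", "2019.11.04"]
assert sorted(ids, key=migration_key) == \
    ["1", "2", "10", "2019.11.04", "2019.11.04.5"]
assert migration_key("1.2") == migration_key("01.02") == migration_key("1.2.0.0.0")
```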
-
-
-### Recommendations
-* We recommend using a "Year.Month.Date" format like `"2019.11.04"` for migration IDs. You can use trailing integers as necessary (e.g. use `"2019.11.04.5"` if you need something to be run between `"2019.11.04"` and `"2019.11.05"`).
-
-* Even though they follow similar naming rules, migration IDs are not the same thing as plugin versions. We do not recommend using your plugin version in your migration IDs.
-
-* We recommend using small, single-purpose data migrations. That is, if you end up making four schema changes over the course of developing a new plugin version, we recommend writing four different data migrations, one for each change.
-
-## Data Migration Example
-
-Here is a very simple data migration:
-```python
-@plugin.upgrade.repository("2019.12.15")
-def add_new_flag_to_repo(old_repository):
-    new_repository = dict(old_repository)
-    new_repository["useNewFeature"] = False
-    return new_repository
-```
-
-

diff --git a/docs/docs/index.md b/docs/docs/index.md
index 813f220c..88f71516 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -37,3 +37,7 @@ Read through the first few sections of this documentation, and we will walk you

[Building Your First Plugin](/Building_Your_First_Plugin/Overview.md) will walk you step-by-step through the process of developing a very simple plugin. With it, you will learn the concepts and techniques that you will need to develop fully-fledged plugins. That does not mean this first plugin is useless—you will be able to virtualize simple datasets with it.

Once you complete these sections, use the rest of the documentation whenever you would like.
+
+## Questions?
+
+If you have questions, bugs, or feature requests, reach out to us via the [Virtualization SDK GitHub repository](https://github.com/delphix/virtualization-sdk/).
\ No newline at end of file
diff --git a/docs/readme.md b/docs/readme.md
index 37450c10..be1068cb 100644
--- a/docs/readme.md
+++ b/docs/readme.md
@@ -2,6 +2,30 @@ This is the Markdown-based documentation for the Virtualization SDK.

+## Local Testing
+Create a `virtualenv` using Python 3, then run `pipenv run mkdocs serve`:
+
+```
+$ virtualenv -p /usr/local/bin/python3 .
+Running virtualenv with interpreter /usr/local/bin/python3
+Using base prefix '/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7'
+New python executable in /Users/asarin/Documents/repos/virt-sdk-docs/env/bin/python3.7
+Also creating executable in /Users/asarin/Documents/repos/virt-sdk-docs/env/bin/python
+Installing setuptools, pip, wheel...
+done.
+
+$ source bin/activate
+
+$ pipenv run mkdocs serve
+INFO - Building documentation...
+INFO - Cleaning site directory
+[I 200424 15:54:06 server:292] Serving on http://127.0.0.1:8000
+[I 200424 15:54:06 handlers:59] Start watching changes
+[I 200424 15:54:06 handlers:61] Start detecting changes
+```
+
+The docs will be served at [http://127.0.0.1:8000](http://127.0.0.1:8000).

## Live Testing and Reviews

The command `git docsdev-review` will handle publishing reviews, and putting your changes on a live docs server. For example, you can clone the `docsdev-server` image on DCOA, and then run `git docsdev-review -m `. This will:
@@ -10,4 +10,4 @@ The command `git docsdev-review` will handle publishing reviews, and putting you
- Publish a review

## Workflow diagrams
-We create workflow diagrams using a tool called `draw.io` which allows us to import/export diagrams in html format. If you want to add a diagram or edit an existing one, simply create or import the html file in `docs/References/html` into `draw.io` and make your desired changes. When you are done, select your diagram and export it as a png file. You can think of the html files as source code, and the png files as build artifacts. After this step, you will be prompted to crop what was selected. You'll want this box checked to trim the whitespace around the diagram. After the diagrams are exported, check in the updated html file to `docs/References/html` and png file to `docs/References/images`.
\ No newline at end of file
+We create workflow diagrams using a tool called `draw.io` which allows us to import/export diagrams in html format. If you want to add a diagram or edit an existing one, simply create or import the html file in `docs/References/html` into `draw.io` and make your desired changes. When you are done, select your diagram and export it as a png file. You can think of the html files as source code, and the png files as build artifacts. After this step, you will be prompted to crop what was selected. You'll want this box checked to trim the whitespace around the diagram. After the diagrams are exported, check in the updated html file to `docs/References/html` and png file to `docs/References/images`.