Merged
Commits
30 commits
84b5ea3
Testing LRO truncation operation.
tjprescott Apr 25, 2016
244f020
Successful tests prior to time compressing.
tjprescott Apr 26, 2016
2a7db5c
Testing LRO truncation.
tjprescott Apr 26, 2016
6b5ec97
Compatibility fixed for Python 2 and 3. Ensure lease breaking operati…
tjprescott Apr 26, 2016
8b413c3
Merge branch 'UploadBlobCommand' of https://github.com/tjprescott/azu…
tjprescott Apr 26, 2016
a53bc00
Merge branch 'UploadBlobCommand' of https://github.com/tjprescott/azu…
tjprescott Apr 26, 2016
352c03c
Remove unused import.
tjprescott Apr 26, 2016
6ebf3c1
Merge branch 'UploadBlobCommand' of https://github.com/tjprescott/azu…
tjprescott Apr 27, 2016
bffe309
Code review fixes.
tjprescott Apr 27, 2016
65ee2f7
Merge branch 'UploadBlobCommand' of https://github.com/tjprescott/azu…
tjprescott Apr 27, 2016
a4ee80d
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott Apr 27, 2016
0300edc
Merge branch 'master' of https://github.com/Azure/azure-cli into VCRT…
tjprescott Apr 27, 2016
75d6ac9
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott Apr 28, 2016
97dc6b7
Code review comments.
tjprescott Apr 28, 2016
5f6c708
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott Apr 28, 2016
036aa64
Fix issue with token expiration date.
tjprescott Apr 28, 2016
71d7303
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott Apr 28, 2016
2b8f2ca
Re-record a test to verify merge didn't break anything.
tjprescott Apr 28, 2016
131204f
Update test authoring help doc.
tjprescott Apr 28, 2016
14a5924
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott Apr 29, 2016
1f73d5d
Added info on print_ method.
tjprescott Apr 29, 2016
1cabd93
Add bash equivalent of testall and lintall scripts.
tjprescott Apr 29, 2016
e4d1b26
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott Apr 29, 2016
3912850
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott May 3, 2016
78185fd
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott May 4, 2016
d1a1e23
Move commented out tests out of the Test_argparse class because for s…
tjprescott May 5, 2016
c3843b1
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott May 5, 2016
851b819
Comment out failing test.
tjprescott May 5, 2016
ee98a5d
Merge branch 'master' of https://github.com/tjprescott/azure-cli into…
tjprescott May 5, 2016
cb4a6a1
Post-merge fixes. THANKS GIT.
tjprescott May 5, 2016
63 changes: 54 additions & 9 deletions doc/recording_vcr_tests.md
@@ -17,29 +17,74 @@ TEST_DEF = [
},
{
'test_name': 'name2',
'command': 'command2'
'script': script_class()
}
]
```

Simply add your new entries and run all tests. The test driver will automatically detect the new tests and run the command, show you the output, and query if it is correct. If so, it will save the HTTP recording into a YAML file, as well as the raw output into a dictionary called `expected_results.res`. When the test is replayed, as long as the test has an entry in `expected_results.res` and a corresponding .yaml file, the test will proceed automatically. If either the entry or the .yaml file are missing, the test will be re-recorded.
Simply add your new entries and run all tests. The test driver will automatically detect the new tests, run the commands, show you the output, and ask whether it is correct. If so, it will save the HTTP recording into a YAML file and the raw output into a dictionary called `expected_results.res`. When the test is replayed, as long as the test has an entry in `expected_results.res` and a corresponding .yaml file, it will proceed automatically. If either the entry or the .yaml file is missing, the test will be re-recorded.

If the tests are run on TravisCI, any tests which cannot be replayed will automatically fail.

##Recording Tests
##Types of Tests

Many tests, for example those which simply retrieve information, can simply be played back, verified and recorded.
The system currently accepts individual commands and script test objects. Individual commands always display their output and ask the user whether the results are correct. These are the "old" style tests.
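For reference, a minimal sketch of an "old" style entry is shown below (the command string is illustrative, not an actual test in this suite):

```Python
{
    # individual command: the output is displayed and must be confirmed
    # with a Y/N prompt before being saved to expected_results.res
    'test_name': 'resource_group_list',
    'command': 'resource group list'
},
```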

Other tests, such as create and delete scenarios, may require additional commands to set up for recording, or may require additional commands to verify the proper operation of a command. For example, several create commands output nothing on success. Thus, you'll find yourself staring at nothing with a prompt asking if that is the expected response.
To allow for more complex testing scenarios involving creating and deleting resources, long-running operations, or automated verification, use the script object method. To do so, simply create a class in the `command_specs.py` file with the following structure:

For these scenarios, I recommend having a second shell open, from which you can run any setup commands and then run any commands you need to verify the proper operation of the command in order to answer the prompt.
```Python
class MyScenarioTest(CommandTestScript):
def __init__(self):
super(MyScenarioTest, self).__init__(self.set_up, self.test_body, self.tear_down)

def set_up(self):
# Setup logic here
pass

def test_body(self):
# Main test logic
pass

def tear_down(self):
# clean up logic here
pass
```
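The script class is then registered in `TEST_DEF` with a `script` entry instead of a `command` entry (the test name below is illustrative):

```Python
TEST_DEF = [
    {
        'test_name': 'my_scenario',
        'script': MyScenarioTest()
    }
]
```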

The `set_up` and `tear_down` methods are optional and can be omitted. A number of helper methods are available for structuring your script tests.

####run(command_string)

This method executes a given command and returns the output. The results are not sent to the display or to expected results. Use this for:

- running commands that produce no output (the next statement will usually be a test)
- running commands that are needed for conditional logic or in setup/cleanup logic

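A short sketch of typical `run` usage, loosely based on the storage scenario tests in this change (resource and account names are illustrative):

```Python
def set_up(self):
    # nothing is displayed or recorded for this clean-up command
    self.run('storage account delete -g myresourcegroup -n myaccount')
    # capture output for conditional logic (assumes `import json` at module level)
    result = json.loads(self.run('storage account check-name --name myaccount -o json'))
    if not result['nameAvailable']:
        raise RuntimeError('Storage account still exists. Unable to continue test.')
```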
####rec(command_string)

This method runs a given command and sends its output to the display for manual verification. The user is forced to verify the output via a Y/N prompt; if the user accepts the output, it is saved to `expected_results.res`.

####test(command_string, checks)

This method runs a given command and automatically validates the output. The results are saved to `expected_results.res` if valid, but nothing is displayed on the screen. Valid checks include: `bool`, `str` and `dict`. A check with a `dict` can be used to check for multiple matching properties (AND logic). Child `dict` objects can be used as values to verify properties within nested objects.
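A sketch of the different check types (command strings and values are illustrative):

```Python
def test_body(self):
    # bool check: the command's textual output is compared to the expected boolean
    self.test('storage container exists --container-name mycontainer', True)
    # dict check: every listed property must match (AND logic); a nested dict
    # could be used to validate properties of a nested object
    self.test('storage account check-name --name myaccount',
              {'nameAvailable': False, 'reason': 'AlreadyExists'})
```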

####set_env(variable_name, value)

This method is a wrapper around `os.environ` and simply sets an environment variable to the specified value.

####pop_env(variable_name)

Another wrapper around `os.environ`, this pops the value of the indicated environment variable.

####print_(string)

This method allows you to write to the display output, but does not add to the `expected_results.res` file. One application of this would be to print information ahead of a `rec` statement so the person validating the output knows what to look for.
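A sketch combining these helpers inside a `CommandTestScript` subclass, loosely based on the storage scenario tests in this change (account names and command strings are illustrative):

```Python
def set_up(self):
    # stash a SAS token in the environment for the commands under test
    sas_token = self.run('storage account generate-sas --services b '
                         '--resource-types sco --permission rwdl --expiry 2100-01-01t00:00z')
    self.set_env('AZURE_SAS_TOKEN', sas_token)
    self.set_env('AZURE_STORAGE_ACCOUNT', 'mystorageaccount')
    self.pop_env('AZURE_STORAGE_CONNECTION_STRING')

def test_body(self):
    # tell the person validating the recording what to look for, then
    # show the output for manual Y/N confirmation
    self.print_('The following should list the containers created above:')
    self.rec('storage container list')
```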

##Long Running Operations (LRO)

I don't recommend trying to structure your tests so that one test sets up for another, because in general you cannot guarantee the order in which the tests will run. Also, I don't recommend attempting to record large batches of tests at once. I generally add one to three tests at a time and leave the remaining new tests commented out. Running `testall.bat` will let me record these. Then I uncomment a few more and so on, until they are all recorded.
The system now allows the testing of long-running operations. Regardless of the time required to record the test, playback truncates the long-running operation so it finishes very quickly. However, because re-recording these actions can take a very long time, it is recommended that each LRO scenario be tested individually (possibly in tandem with a delete operation) rather than as part of a larger scenario.
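For example, a minimal sketch of an LRO scenario paired with its delete, modeled on the storage account test in this change (names are illustrative):

```Python
class StorageAccountLROTest(CommandTestScript):
    def __init__(self):
        super(StorageAccountLROTest, self).__init__(None, self.test_body, None)

    def test_body(self):
        # the long-running create is recorded in full, but playback truncates
        # the polling so the replay finishes quickly
        self.run('storage account create --type Standard_LRS -l westus '
                 '-n myaccount -g myresourcegroup')
        # pairing the LRO with its delete keeps re-recording self-contained
        self.run('storage account delete -g myresourcegroup -n myaccount')
```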

##Limitations

The current system saves time, but has some limitations.

+ Certain commands require manual steps to set up or verify
+ You can't test for things like 'this input results in an exception'. It simply tests that the response equals an expected response.
+ This system does not work with long running operations. While it technically does, the resulting recording takes as long as the original call, which negates some of the key benefits of automated testing.
5 changes: 5 additions & 0 deletions lintall
@@ -0,0 +1,5 @@
#!/bin/bash

export PYTHONPATH=$PYTHONPATH:./src
pylint -r n src/azure
python scripts/command_modules/pylint.py
2 changes: 1 addition & 1 deletion src/azure/cli/commands/__init__.py
@@ -58,7 +58,7 @@ def __call__(self, poller):
if self.progress_file:
print('.', end='', file=self.progress_file)
self.progress_file.flush()
time.sleep(self.poll_interval_ms / 1000.0)
time.sleep(self.poll_interval_ms / 1000.0)
result = poller.result()
succeeded = True
return result
8 changes: 2 additions & 6 deletions src/azure/cli/utils/command_test_script.py
@@ -83,10 +83,7 @@ def _check_json(source, checks):
if isinstance(checks, bool):
result_val = str(result).lower().replace('"', '')
bool_val = result_val in ('yes', 'true', 't', '1')
try:
assert bool_val == checks
except AssertionError as ex:
raise ex
assert bool_val == checks
elif isinstance(checks, str):
assert result.replace('"', '') == checks
elif isinstance(checks, dict):
@@ -95,8 +92,7 @@ def _check_json(source, checks):
elif checks is None:
assert result is None or result == ''
else:
raise IncorrectUsageError('test only accepts a dictionary of json properties or ' + \
'a boolean value.')
raise IncorrectUsageError('unsupported type \'{}\' in test'.format(type(checks)))
def set_env(self, key, val): #pylint: disable=no-self-use
os.environ[key] = val

@@ -279,7 +279,7 @@ def exists_container(args):
'storage blob service-properties', None, _blob_data_service_factory,
[
AutoCommandDefinition(BlockBlobService.get_blob_service_properties,
'[ServiceProperties]', 'show'),
'ServiceProperties', 'show'),
AutoCommandDefinition(BlockBlobService.set_blob_service_properties,
'ServiceProperties', 'set')
], command_table, PARAMETER_ALIASES, STORAGE_DATA_CLIENT_ARGS)
@@ -4,7 +4,6 @@
import json
import os
import sys
from time import sleep

from six import StringIO

@@ -27,6 +26,31 @@ def _get_connection_string(runner):
connection_string = out.replace('Connection String : ', '')
runner.set_env('AZURE_STORAGE_CONNECTION_STRING', connection_string)

class StorageAccountCreateAndDeleteTest(CommandTestScript):
def set_up(self):
self.account = 'testcreatedelete'
self.run('storage account delete -g {} -n {}'.format(RESOURCE_GROUP_NAME, self.account))
result = json.loads(self.run('storage account check-name --name {} -o json'.format(self.account)))
if not result['nameAvailable']:
raise RuntimeError('Failed to delete pre-existing storage account {}. Unable to continue test.'.format(self.account))

def test_body(self):
account = self.account
rg = RESOURCE_GROUP_NAME
s = self
s.run('storage account create --type Standard_LRS -l westus -n {} -g {}'.format(account, rg))
s.test('storage account check-name --name {}'.format(account),
{'nameAvailable': False, 'reason': 'AlreadyExists'})
s.run('storage account delete -g {} -n {}'.format(RESOURCE_GROUP_NAME, account))
s.test('storage account check-name --name {}'.format(account), {'nameAvailable': True})

def tear_down(self):
self.run('storage account delete -g {} -n {}'.format(RESOURCE_GROUP_NAME, self.account))

def __init__(self):
super(StorageAccountCreateAndDeleteTest, self).__init__(
self.set_up, self.test_body, self.tear_down)

class StorageAccountScenarioTest(CommandTestScript):

def test_body(self):
@@ -54,7 +78,8 @@ def test_body(self):
s.run('storage account set -g {} -n {} --type Standard_LRS'.format(rg, account))

def __init__(self):
super(StorageAccountScenarioTest, self).__init__(None, self.test_body, None)
super(StorageAccountScenarioTest, self).__init__(
None, self.test_body, None)

class StorageBlobScenarioTest(CommandTestScript):

@@ -63,9 +88,9 @@ def set_up(self):
self.rg = RESOURCE_GROUP_NAME
self.proposed_lease_id = 'abcdabcd-abcd-abcd-abcd-abcdabcdabcd'
self.new_lease_id = 'dcbadcba-dcba-dcba-dcba-dcbadcbadcba'
self.date = '2016-04-08T12:00Z'
self.date = '2016-04-01t12:00z'
_get_connection_string(self)
sas_token = self.run('storage account generate-sas --services b --resource-types sco --permission rwdl --expiry 2017-01-01t00:00z')
sas_token = self.run('storage account generate-sas --services b --resource-types sco --permission rwdl --expiry 2100-01-01t00:00z')
self.set_env('AZURE_SAS_TOKEN', sas_token)
self.set_env('AZURE_STORAGE_ACCOUNT', STORAGE_ACCOUNT_NAME)
self.pop_env('AZURE_STORAGE_CONNECTION_STRING')
@@ -187,7 +212,7 @@ def set_up(self):
self.share1 = 'testshare01'
self.share2 = 'testshare02'
_get_connection_string(self)
sas_token = self.run('storage account generate-sas --services f --resource-types sco --permission rwdl --expiry 2017-01-01t00:00z')
sas_token = self.run('storage account generate-sas --services f --resource-types sco --permission rwdl --expiry 2100-01-01t00:00z')
self.set_env('AZURE_SAS_TOKEN', sas_token)
self.set_env('AZURE_STORAGE_ACCOUNT', STORAGE_ACCOUNT_NAME)
self.pop_env('AZURE_STORAGE_CONNECTION_STRING')
@@ -301,28 +326,20 @@ def __init__(self):
super(StorageFileScenarioTest, self).__init__(self.set_up, self.test_body, self.tear_down)

TEST_DEF = [
# STORAGE ACCOUNT TESTS
{
'test_name': 'storage_account',
'script': StorageAccountScenarioTest()
},
# TODO: Enable when item #117262541 is complete
#{
# 'test_name': 'storage_account_create',
# 'command': 'storage account create --type Standard_LRS -l westus -g travistestresourcegroup --account-name teststorageaccount04'
#},
{
'test_name': 'storage_account_delete',
'command': 'storage account delete -g travistestresourcegroup --account-name teststorageaccount04'
'test_name': 'storage_account_create_and_delete',
'script': StorageAccountCreateAndDeleteTest()
},
# STORAGE CONTAINER TESTS
{
'test_name': 'storage_blob',
'script': StorageBlobScenarioTest()
},
# STORAGE SHARE TESTS
{
'test_name': 'storage_file',
'script': StorageFileScenarioTest()
},
}
]

Large diffs are not rendered by default.
