Parameterized test cases #179
Pester test scripts are really just PowerShell script blocks, so you can technically do this sort of thing already without a new command:

Describe "A test" {
    function Test-Xor {
        param ($a, $b, $ExpectedResult)

        It ("Applies XOR to inputs {0} and {1}" -f $a, $b) {
            $a -xor $b | Should be $ExpectedResult
        }
    }

    $testCases = @(
        @{ a = 0; b = 1; ExpectedResult = 1 }
        @{ a = 1; b = 0; ExpectedResult = 1 }
        @{ a = 1; b = 1; ExpectedResult = 0 }
        @{ a = 0; b = 0; ExpectedResult = 0 }
    )

    foreach ($testCase in $testCases) {
        Test-Xor @testCase
    }
}

This could have used arrays like in your example; I just thought the hashtables made it a little easier to read and understand. I'm not sure if there should be a separate command to support these parameterized tests, or if we would just leave it up to the test authors. @nohwnd, thoughts on this? |
One reason to implement this as a Pester feature is to have the parameterized tests reported differently in the NUnit XML output. For example, I need parameterized tests to be reported inside a test-suite tag with a type of "ParameterizedTest" in order to match tests up to BDD requirements in a feature file when the requirement uses an example table. |
Fair enough, we can certainly look into that. I think I would still lean toward having splatted hashtables represent the test case parameters, rather than arrays. It's more clear, and easier to work with in the code. |
Yes that does make sense. My vote was really only about the feature being built in or not. Eric
|
I'm going to look into implementing this feature next. Without worrying too much about the details of how it affects the console / passthru / NUnit output at this point, I'm just thinking of how the user experience should be. I don't think we need to tie this new feature to the concept of a Context, as that has other implications (effects on TestDrive contents and scopes for Mocking). I was thinking of just adding an optional parameter to the It command. However, we would need some form of string replacement within the test names. For example:

$testCases = @(
    @{ a = 1; b = 2; expectedResult = 3 }
    @{ a = 5; b = 8; expectedResult = 13 }
)

# Using PowerShell variable expansion. Note that this string would have to be single-quoted,
# and the It function would be responsible for expanding the variables later. If you double-quoted
# the string, PowerShell would try to expand the variables right away before passing the string to
# It, which is not useful:
It 'Adds $a and $b' -TestCases $testCases {
    param ($a, $b, $expectedResult)
    $a + $b | Should Be $expectedResult
}

# Using some other replacement method to avoid confusion / conflict with PowerShell syntax and
# single / double quoted strings. For example, something like an environment variable in cmd.exe:
It 'Adds %a% and %b%' -TestCases $testCases {
    param ($a, $b, $expectedResult)
    $a + $b | Should Be $expectedResult
}

Alternatively, we could separate this functionality from the It command, making a new keyword. This allows the user to deal with variables and strings passed to the It command however they like:

ParameterizedTest -TestCases $testCases {
    param ($a, $b, $expectedResult)

    It "Adds $a and $b" {
        $a + $b | Should Be $expectedResult
    }
}

I don't know if ParameterizedTest is the right name for it, though. |
Does the Gherkin already implement examples/scenarios? There you use <Name> as a placeholder in the scenario and then you provide a table of values with the header Name. Maybe it would be worth it to use the same notation. In any case I like the second option more. |
Just to add my two cents: I like your last option, Dave, using ParameterizedTest (which happens to coincide with my original sample at the top of this thread :-). As to the nomenclature, by itself I think "ParameterizedTest" is reasonable, but in the context of other Pester things (Describe, Context, It), I am not sure it is a good match. Would ParameterizedContext be a possibility...? Perhaps it is not a variation of a Context--I do not know your code well enough--but if not, then how would this fit in the hierarchy? Would it be (Describe, Context, ParameterizedTest, It) or (Describe, ParameterizedTest, Context, It) or something else...? |
I think that the last option of adding a keyword to Pester is the better approach. Since the word "test" does not really fit into the nomenclature of Pester, though, a different name might be a better fit. Eric
|
I'm not sure, but it's worth looking into. I did check RSpec, which Pester's language is originally based on, but they don't have built-in support for parameterized tests. (For basically the same reason I originally gave in this thread; RSpec tests are just executable code anyway, so it's easy to write loops wherever needed.) Anyhow, one of the big changes we're looking at for v4 is to abstract away the test language and allow Pester to be used with plugins (with its current RSpec-like language and Gherkin included by default.) Internally, it's fine for us to refer to suites, test cases and parameterized tests, and we can make a final decision on what the language should look like separately. |
@dlwyatt not sure if it was clear but by gherkin I meant the gherkin branch of Pester. The Gherkin language itself implements these features. |
Hi guys, is there any update on this feature? Even knowing that code has been checked in would be a plus. Thanks for taking this up. Eric |
I haven't spent much time on Pester recently; been very busy. However, this is a pretty low-hanging fruit kind of request, and it shouldn't take very long to implement, once a design is settled on. After re-reading the earlier posts in this thread, I'm leaning towards one of the first two options (just adding a new parameter to the It command). |
Looks like the Gherkin language uses this syntax: 'It adds <a> and <b>'. May as well run with that. In the short term, I'll probably wind up duplicating some code that @Jaykul already did in Invoke-GherkinScenario, and that can get cleaned up along with everything else later when we abstract out the language from the test runner. |
I think you should look for an implementation based on RSpec ... because if you're going to use RSpec, you should do it the RSpec way. Having said that, you should note that Gherkin test cases match code-behind steps which are frequently parametrized even when they don't look it. So in this test, for example
The first step matches an implementation that looks like this in PowerShell, with regex:
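Something along these lines, as a rough sketch (assuming a Pester Gherkin step file where Given takes a regex and a script block, and named capture groups are bound to the step's parameters by name -- the step and regex here are made up for illustration):

# Hypothetical step definition; the second capture group is optional,
# so the same step matches one value or two.
Given 'I have entered (?<first>\d+)(?: and (?<second>\d+))? into the calculator' {
    param ($first, $second)

    $script:entered = @($first)
    if ($second) { $script:entered += $second }
}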
Notice that I wrote a regex that can match one or more parameters, and I wrote a function with two optional parameters? ;-) That same step implementation will also match a Scenario Outline (note: different keyword) like this, which uses <name> placeholders filled in from an Examples table.
And it's also possible to let the step implementation deal with the looping by having a table that's passed entire (as a hashtable) to the step, like this one:
|
As far as I know, RSpec doesn't have a concept of parameterized tests either. It works much like Pester, with lots of executable code around a minimal DSL. If you want to run a test multiple times, you just use a loop, as I suggested in my first reply to this thread. The parameters to these tests are neatly handled by hashtables and splatting already; all that was left to figure out was how to parameterize the test names as well. I think the <name> placeholder syntax covers that. |
Makes sense. |
I started working on this feature this morning. Shouldn't take too long. |
Just opened a pull request for this. @ericrlarson (and anyone else who's interested), please give it a try and let me know if it's working as you expect. There's a new example in the comment-based help for the It command. |
So, the base feature works great. I am able to pass in the test cases and the tests all get run perfectly. My only struggle now is the NUnit output format: I need the Pester output to be pulled in as input to another program which generates some of our QMS documentation for the release. The expected input is an NUnit result file, but the tool has been coded against real NUnit result files, not one that approximates the format. NUnit does parameterized tests differently. It does not require the variable names to appear in the test name in order to run the test. Each resulting test name has all of the data added to it in ()'s, like this: Add Numbers("2","3","5",null).
Notice the parameter data is in quotes; this seems to be true regardless of the datatype, and the last element is always ",null". As a result, what is expected by the next tool in my process is not what Pester is producing. I realize that some of this may not really be Pester's problem, but I figure that if I do not ask then I will not get...so here is my ask. Below you will see a Pester result file that I have modified somewhat.

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<test-results xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="nunit_schema_2.5.xsd" name="Pester" total="11" errors="0" failures="7" not-run="0" inconclusive="0" ignored="0" skipped="0" invalid="0" date="2014-10-03" time="15:20:57">
  <environment nunit-version="2.5.8.0" clr-version="2.0.50727.5485" user-domain="DOMAIN" user="USERNAME" cwd="PATH" os-version="6.1.7601" platform="Microsoft Windows 7 Professional |C:\Windows|\Device\Harddisk0\Partition1" machine-name="HOSTNAME" />
  <culture-info current-culture="en-US" current-uiculture="en-US" />
  <test-suite type="Powershell" name="PATH" executed="True" result="Failure" success="False" time="3.0003" asserts="0">
    <results>
      <test-suite type="TestFixture" name="Add Things Together" description="Add Things Together" executed="True" result="Success" success="True" time="0.4039" asserts="0">
        <results>
          <test-suite type="ParameterizedTest" name="Add Things Together.Add Numbers" description="Add Numbers" executed="True" result="Success" success="True" time="0.3481" asserts="0">
            <results>
              <test-case name="Add Things Together.Add Numbers("2","3","5",null)" description="Add Numbers("2","3","5",null)" executed="True" time="0.2231" asserts="0" success="True" result="Success" />
              <test-case name="Add Things Together.Add Numbers("-2","-2","-4",null)" description="Add Numbers("-2","-2","-4",null)" executed="True" time="0.066" asserts="0" success="True" result="Success" />
              <test-case name="Add Things Together.Add Numbers("2","-2","0",null)" description="Add Numbers("2","-2","0",null)" executed="True" time="0.0591" asserts="0" success="True" result="Success" />
            </results>
          </test-suite>
          <test-case name="Add Things Together.Add strings" description="Add strings" executed="True" time="0.0558" asserts="0" success="True" result="Success" />
        </results>
      </test-suite>
    </results>
  </test-suite>
</test-results>

Modifications:

I know that I can change the 'Type' of the test-suite tags myself. Here are my asks:
What do you think? If the answer is no for any or all of these, ideas on accomplishing the goals are greatly appreciated. Eric |
I'll look into it, but I'm not sure whether we can make changes to the Describe test-suite elements or the test-case element names (other than for ParameterizedTest cases) without having a breaking change for anyone else who's using this export functionality from Pester. As for adding the parameters to the test-case output for new parameterized tests, that's definitely doable, but you might have a bit of a snag there... most people are going to be using Hashtable objects, and the order of the keys in that table (and in the XML file) might not be quite what you expected. That trailing null is also a bit odd; I'd like to know what it actually represents before I just stick it into the file. |
Hey, I completely understand wrt not introducing breaking changes. Though do you really think that providing data variables that can be used when I customize the nUnit template files for my own use would involve a breaking change? I confess that the null at the end of the parameter list confuses me too, but whenever I do a parameterized test in nUnit for a C# project that null is there at the end of the data list for each row in the example table. I think that it is just an explicit end of data marker but I cannot say that definitively. In terms of parameter order, I expect to need to arrange things "just-so" in my .feature files and in the testcases parameter in order to get this integration to work so if you add the parameters at the end as I suggested then I will take care of making sure that the order is what I expect. Also, I am envisioning the parameters at the end in ()'s to only appear if I use NONE of the parameters in my test name. In other words I do not expect you to put just the missing ones, or any complicated stuff like that. They either appear embedded in the name (in which case it is up to me to include them all) or they are added to the end and Pester adds them all. I wish that the team that makes the tool I am required to use here would code a specific parser for the Pester output file. That would be the best thing, but alas it is not an option at this time. Thanks again, |
Hmmm...I was just looking through the code-base and discovered that 3.0 no longer uses the templates...that definitely makes implementing some of my suggestions more complicated. Need to think about that for a bit... Eric |
I like the idea of a template file, but the old -OutputXml implementation had some bugs, and it was rewritten in V3. By using the XmlTextWriter .NET class, v3 ensures that the resulting file will be well-formed. We can probably meet in the middle and still make use of a template and also leverage XmlTextWriter; it's just a question of how much time that takes and how soon it gets done.
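For instance (illustrative only, not Pester's actual implementation), writing elements with XmlTextWriter looks roughly like this; the writer escapes attribute values and closes elements for you, so the output can't end up malformed:

$writer = New-Object System.Xml.XmlTextWriter('C:\temp\report.xml', [System.Text.Encoding]::UTF8)
$writer.Formatting = [System.Xml.Formatting]::Indented

$writer.WriteStartElement('test-results')
$writer.WriteStartElement('test-case')
$writer.WriteAttributeString('name', 'Add Things Together.Add Numbers(2,3,5)')
$writer.WriteAttributeString('result', 'Success')
$writer.WriteEndElement()  # test-case
$writer.WriteEndElement()  # test-results

$writer.Flush()
$writer.Close() |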
Incidentally, I'm still thinking about writing a JSON one, and really think -OutputXml should be deprecated and replaced by two parameters:

Invoke-Pester -OutputFormat NUnitReport -OutputFile path\report.xml

And we'd just pipe:

$pester | &(Get-Command "Export-$OutputFormat") $OutputFile |
I dunno. I hate making changes that are not backward-compatible without a good reason, and I'm not so sure that this is really a bug. Our current XML export files are valid according to the NUnit 2.5 schema XSD; that validation is part of Pester's unit tests. If we implement your idea of adding a new set of export parameters, I suppose we can just have two different NUnit formats; one legacy (possibly deprecated) which matches what we have now, and a newer one that more closely matches the files that NUnit actually produces. -OutputXml would choose the legacy format, of course. |
I've updated #213 with the new parameter set. No new functionality yet, but now it's easy to add that. I'm going to merge this into a development branch instead of master, so we can tinker with it. |
Starting to implement the requests from pester#179 for output that is closer to what NUnit produces.
I've uploaded some changes to the ParameterizedTests branch which should implement most of what you've mentioned here. To use them, call Invoke-Pester with the new -OutputFile and -OutputFormat parameters. The one thing that I haven't been able to account for yet is that trailing null you describe in the parameter list. In http://nunit.org/files/testresult_25.txt , I'm not seeing that at all. The ParameterizedTest elements look like this:

<test-suite type="ParameterizedTest" name="GenericMethod" executed="True" result="Success" success="True" time="0.013" asserts="0">
<results>
<test-case name="NUnit.Tests.FixtureWithTestCases.GenericMethod<Double>(9.2d,11.7d)" executed="True" result="Success" success="True" time="0.007" asserts="1" />
<test-case name="NUnit.Tests.FixtureWithTestCases.GenericMethod<Int32>(2,4)" executed="True" result="Success" success="True" time="0.001" asserts="1" />
</results>
</test-suite>
<test-suite type="ParameterizedTest" name="MethodWithParameters" executed="True" result="Success" success="True" time="0.011" asserts="0">
<results>
<test-case name="NUnit.Tests.FixtureWithTestCases.MethodWithParameters(9,11)" executed="True" result="Success" success="True" time="0.003" asserts="1" />
<test-case name="NUnit.Tests.FixtureWithTestCases.MethodWithParameters(2,2)" executed="True" result="Success" success="True" time="0.000" asserts="1" />
</results>
</test-suite> |
Let me know when you have a chance to look into this and see if it's meeting your needs. I'm not sure about conditionally appending the (params) string to the test-case names. Maybe we should just do that every time, regardless of whether the script happens to use the expansion functionality. |
Hi Dave, I am trying to get to this ASAP. I think that I will finally get the time soon. We are at the end of a development sprint and I am trying to help get a release out the door. Eric
|
Sounds good. No huge rush; just wanted to make sure my previous post hadn't been overlooked. |
Hey Dave, I finally had the bandwidth to test these changes. This is much closer to what I need. There were two other reasons that the output Pester created did not work for me; I highlighted two things in the output above. I understand what you are saying about the trailing null in the parameter list. We may be getting too far into my specific needs for all this to be part of Pester itself.
Let me know what you think. Eric
|
Dave, I goofed up in my response. I see how you are adding the parameter values to the test names. Unfortunately (at least right now) this is a road block for me. Anyway, let me know what you think.
|
Can you provide an example of some NUnit tests that are producing output with this null in the parameter list? I've tried reproducing this with some very basic NUnit test code, and am not seeing that null in the output XML file:

using NUnit.Framework;

namespace NUnitTest
{
    public class Tester
    {
        [TestCase(2, 2, 4)]
        [TestCase(0, 5, 5)]
        [TestCase(31, 11, 42)]
        public void AddNumbers(int a, int b, int sum)
        {
            Assert.That(a + b, Is.EqualTo(sum));
        }
    }
}

Produces this output:

<test-suite type="ParameterizedTest" name="AddNumbers" executed="True" result="Success" success="True" time="0.040" asserts="0">
<results>
<test-case name="NUnitTest.Tester.AddNumbers(2,2,4)" executed="True" result="Success" success="True" time="0.029" asserts="1" />
<test-case name="NUnitTest.Tester.AddNumbers(0,5,5)" executed="True" result="Success" success="True" time="0.000" asserts="1" />
<test-case name="NUnitTest.Tester.AddNumbers(31,11,42)" executed="True" result="Success" success="True" time="0.000" asserts="1" />
</results>
</test-suite>

One thing to keep in mind is that even if I can't figure out what this null represents, or why your code expects it, you can still work around this by modifying your test code (instead of Pester itself). For example:

Describe 'Testing NUnit Output' {
    $testCases = @(
        [ordered]@{ a = 1; b = 2; expectedResult = 3; bogus = $null }
        [ordered]@{ a = 5; b = 5; expectedResult = 10; bogus = $null }
        [ordered]@{ a = -4; b = -6; expectedResult = -10; bogus = $null }
    )

    It 'Adds numbers' -TestCases $testCases {
        param ($a, $b, $expectedResult, $bogus)
        $a + $b | Should Be $expectedResult
    }
}

Produces this output:

<test-suite type="ParameterizedTest" name="Testing NUnit Output.Adds numbers" executed="True" result="Success" success="True" time="0.2346" asserts="0" description="Adds numbers">
<results>
<test-case name="Testing NUnit Output.Adds numbers(1,2,3,null)" executed="True" time="0.2076" asserts="0" success="True" result="Success" />
<test-case name="Testing NUnit Output.Adds numbers(5,5,10,null)" executed="True" time="0.0192" asserts="0" success="True" result="Success" />
<test-case name="Testing NUnit Output.Adds numbers(-4,-6,-10,null)" executed="True" time="0.0078" asserts="0" success="True" result="Success" />
</results>
</test-suite>

I'll look into the missing description attribute that you mentioned. |
Added the description attribute to test-case tags. Let me know if this, along with the workaround I mentioned for injecting nulls into your results, will work for you. (If you have samples of NUnit code that produces the nulls in the output, I can take a look and try to find out what that is.) |
Hey Dave, I will test this out today. The workaround you mention is a great idea. I will keep you posted. Eric
|
Hey Dave, it looks like this update works fine. My only issue at this point is that I still have to use [ordered] hashtables to keep the parameters in the right order in the output. Eric
|
Stay tuned on that; I'm looking into possible ways of making this code work regardless of how the test cases are passed (so the order in the XML file is the same order as the param block of the test).
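One possible approach (just a sketch, not necessarily what ends up in Pester) is to read the declared parameter names from the test script block's AST and emit the values in that order, no matter how the hashtable happens to be ordered:

# Sketch: recover parameter order from the param() block of a test script block.
$test = {
    param ($a, $b, $expectedResult)
    $a + $b | Should Be $expectedResult
}

$orderedNames = @()
if ($test.Ast.ParamBlock) {
    $orderedNames = $test.Ast.ParamBlock.Parameters | ForEach-Object { $_.Name.VariablePath.UserPath }
}

# An unordered hashtable still comes out in param-block order:
$case = @{ expectedResult = 3; b = 2; a = 1 }
$values = foreach ($name in $orderedNames) { $case[$name] }
'({0})' -f ($values -join ',')   # -> (1,2,3) |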
OK, check out the latest code from this branch and see how it works. You no longer need to specify the bogus key in your test case hashtables; just tack an extra parameter on to the end of your param block and it'll wind up showing up as null in the XML output. The parameters in the XML output will be in the same order as they are listed in the test's param() block, even if you just pass in normal hashtables instead of [ordered].

Describe 'Testing NUnit Output' {
    $testCases = @(
        @{ a = 1; b = 2; expectedResult = 3 }
        @{ a = 5; b = 5; expectedResult = 10 }
        @{ a = -4; b = -6; expectedResult = -10 }
    )

    It 'Adds numbers' -TestCases $testCases {
        param ($a, $b, $expectedResult, $bogus)
        $a + $b | Should Be $expectedResult
    }
}
|
Dave, this is looking great. Now that the format is generally right, I am doing some more extensive tests, especially regarding boundary conditions. Also (and this is just an FYI) the tool that is taking the Pester results as input expects all parameters as quoted strings, so I am having to cast the strings into typed values. Why can't people just follow the standards? Grrrr. Thank you for all your work. Do you have any thoughts yet on release? I am guessing that this will be part of 3.0.3, right? Eric
|
This will probably wind up being called 3.1, but we can release it pretty quickly either way since there are no breaking changes involved. |
Sounds great. My timeline would just need it before the end of the year so that we can run it through the testing process prior to using it to validate our application. Eric
|
Two and a half months for a minor release? Yeah, I think we can manage that. :) |
Starting to implement the requests from pester#179 for output that is closer to what NUnit produces.
Released with version 3.1. |
This should be in the documentation! I didn't even realize that this feature was implemented. |
If I remember correctly, I added it to the comment-based help, but hadn't gotten around to the wiki yet. Bear with me for a few more days; big changes in my daily schedule starting in December should give me more time for Pester and other such projects. :) |
Given that "It" is not run outside of a test script, I would not think to run |
Should the It "testname" string be modified if I'm using the -TestCases parameter in the console? Right now, I see that the names of the tests are the exact same. |
You can embed parameters from your test cases in the test name using the <parameterName> syntax:

function Add-Numbers($a, $b) {
    return $a + $b
}

Describe "Add-Numbers" {
    $testCases = @(
        @{ a = 2; b = 3; expectedResult = 5 }
        @{ a = -2; b = -2; expectedResult = -4 }
        @{ a = -2; b = 2; expectedResult = 0 }
        @{ a = 'two'; b = 'three'; expectedResult = 'twothree' }
    )

    It 'Correctly adds <a> and <b> to get <expectedResult>' -TestCases $testCases {
        param ($a, $b, $expectedResult)
        $sum = Add-Numbers $a $b
        $sum | Should Be $expectedResult
    }
} |
Oh, thanks.
|
Actually, there's no reason we couldn't do the same test name modification that was requested for the NUnit output in the console output as well. Currently, the behavior there is that if you do any string interpolation with the <> syntax, the NUnit output will append (param1, ... paramN) to the test name. Would that be valuable to see in the console as well? (Personally, I prefer to tailor the test names myself, but since the code is already there, it's very easy to make this work both ways.) |
Custom tailoring is good |
Understanding that I can create my own loop, I like this feature and only wish that -TestCases were not cast to System.Collections.IDictionary, so that it might work with other objects, particularly PSCustomObjects. I'll admit the problem I am currently trying to solve is a bit out of the ordinary; I'm testing someone else's mile-long set of multi-condition, nested If-Else statements. However, I can see value in -TestCases being more inclusive, for needs beyond my own. Thanks,

EXAMPLE
OUTPUT
|
That feature uses splatting to send the parameters to the test case; splatting requires dictionaries. You could write a quick function to take the objects from Import-Csv and convert them to hashtables, though:

function ConvertTo-Hashtable
{
    param (
        [Parameter(ValueFromPipeline = $true)]
        [Object[]] $InputObject
    )

    process
    {
        foreach ($object in $InputObject)
        {
            $hash = @{}

            foreach ($property in $object.PSObject.Properties)
            {
                $hash[$property.Name] = $property.Value
            }

            $hash
        }
    }
}

$testCases = ConvertFrom-Csv $yourCsvStuff | ConvertTo-Hashtable
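For illustration, the converted cases can then be fed straight to -TestCases; anything read from a CSV comes back as a string, so the test body casts the values (the column names a, b and expectedResult here are hypothetical):

Describe 'CSV-driven addition' {
    It 'Adds <a> and <b>' -TestCases $testCases {
        param ($a, $b, $expectedResult)

        # CSV values arrive as strings, so cast before doing arithmetic.
        [int]$a + [int]$b | Should Be ([int]$expectedResult)
    }
} |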
This is not a defect report; rather, it is a feature request--with code provided to implement the feature.
I did not really see any other place to send this along to you...
In NUnit I am spoiled by being able to define one test case and provide multiple sets of inputs to feed to that function (using NUnit's TestCase attribute). It is such a handy thing to have that one of my colleagues implemented support for our JavaScript test cases with Jasmine and, based on that, I implemented the same thing for PowerShell test cases with Pester.
Here is an example usage, where the single test case (It) is executed 4 times because I have (a) provided 4 sets of inputs and (b) instrumented the script block to receive the parameters in each input set.
Here is the Pester output:
And here is the definition of ContextUsing, which is just Context with a couple lines tweaked:
Note that ContextUsing will not work with a single set of inputs, but it does not need to: a single set of inputs can use the regular Context function.
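A rough, hypothetical sketch of the idea (not the original implementation, which tweaked Context's internals; this standalone version just wraps Context and splats each input set into the fixture):

function ContextUsing {
    param (
        [string] $Name,
        [object[][]] $InputSets,
        [scriptblock] $Fixture
    )

    foreach ($inputSet in $InputSets) {
        # GetNewClosure() captures $Fixture and $inputSet so they are still
        # available when the inner script block is invoked by Pester.
        $caseName = '{0} ({1})' -f $Name, ($inputSet -join ', ')
        Context $caseName ({ & $Fixture @inputSet }.GetNewClosure())
    }
}

Describe 'Addition' {
    ContextUsing 'adds two numbers' -InputSets ((2, 3, 5), (-2, -2, -4)) -Fixture {
        param ($a, $b, $expected)

        It "returns $expected for $a + $b" {
            $a + $b | Should Be $expected
        }
    }
}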