
Setup/teardown for a suite #165

Open

ozars opened this issue Jul 16, 2018 · 6 comments · May be fixed by #166

Comments
@ozars
Contributor

ozars commented Jul 16, 2018

Is there a way that I can create setup/teardown for a suite? I found some information about fixtures for test cases in the documentation, but not for suites.

Thanks.

@brarcher
Contributor

The tcase_add_checked_fixture() call will add setup/teardown calls before/after each unit test in a test case, and the calls take place after the fork(), so they run in the process space of the unit test.

The tcase_add_unchecked_fixture() call will add setup/teardown calls at the start and end of the test case, and not before/after individual unit tests. These occur before the fork(), and any state change that is not cleaned up will be present for all subsequent tests.

There is presently no setup/teardown that occurs at the suite level, only at the test case and unit test levels.
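
For illustration, here is a minimal sketch registering both kinds of fixture on a single test case (the fixture bodies and the test are placeholders):

#include <check.h>
#include <stdlib.h>

/* Checked: runs in the forked child, before/after each unit test in the case. */
static void checked_setup(void) { }
static void checked_teardown(void) { }

/* Unchecked: runs once in the parent, at the start/end of the whole test case. */
static void unchecked_setup(void) { }
static void unchecked_teardown(void) { }

START_TEST(test_example)
{
    ck_assert_int_eq(1 + 1, 2);
}
END_TEST

int main(void)
{
    Suite *s = suite_create("demo");
    TCase *tc = tcase_create("core");

    tcase_add_checked_fixture(tc, checked_setup, checked_teardown);
    tcase_add_unchecked_fixture(tc, unchecked_setup, unchecked_teardown);
    tcase_add_test(tc, test_example);
    suite_add_tcase(s, tc);

    SRunner *sr = srunner_create(s);
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed ? EXIT_FAILURE : EXIT_SUCCESS;
}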

@ozars
Contributor Author

ozars commented Jul 17, 2018

I can implement suite_add_unchecked_fixture. Not sure about the naming though, since it doesn't make sense to have a checked version at the suite level (does it?).

My use case is this: I'm generating a large compressed file for multiple test cases, but currently each test case has to generate the same file over and over again, which slows down testing substantially. I could generate it in main, but then I have to take care of unlinking the temporary file in case of failures, and I couldn't find a nice, compact way to do that. A suite fixture would make this really simple.
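
To make the idea concrete, a sketch of how that use case could look; note that suite_add_unchecked_fixture() does not exist in Check at this point (it is what this issue proposes), so its signature here is hypothetical, mirroring tcase_add_unchecked_fixture():

#include <stdlib.h>
#include <unistd.h>

static char tmpfile_path[] = "/tmp/testdata-XXXXXX";

static void suite_setup(void)
{
    /* Generate the large compressed file once for the whole suite. */
    int fd = mkstemp(tmpfile_path);
    /* ... write the compressed test data to fd ... */
    close(fd);
}

static void suite_teardown(void)
{
    /* Always runs after the suite, so the temp file never leaks. */
    unlink(tmpfile_path);
}

/* Hypothetical call, analogous to tcase_add_unchecked_fixture(): */
suite_add_unchecked_fixture(s, suite_setup, suite_teardown);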

@brarcher
Contributor

If I recall correctly, 'checked' means the call runs in the forked process of the unit test, and 'unchecked' means it runs before the fork() of the unit test. As suites and test cases are not forked into their own processes, a suite fixture would need to be labeled 'unchecked'.

Feel free to add support for suite-level fixtures. If you include the needed tests and update the docs, I can take a look at the pull request and work with you to get it in. If you have any questions, just let me know.

ozars linked a pull request Jul 25, 2018 that will close this issue
@ladar

ladar commented Aug 27, 2018

Forgive the ramble, but I've been up all night dancing with my computer and talking to a temperamental compiler, and I'm not sure the part of my brain that speaks English is still functioning... but I need to ask, when you say:

before/after each unit test in a test case

Do you mean a test case, aka START_TEST/END_TEST, or do you mean a test function? I've searched and searched, and can't find any documentation on how to create a "test case" with multiple "test functions", and I can only assume that when you say "unit test" you mean a plethora of test functions inside a single, larger test case.

I'm asking because I've been in need of an elegant solution for a while, and I don't know if this new suite fixture will fix it, or if I can use the existing functions.

Specifically, what I need is the ability to set up/initialize variables at the beginning of a group of unit tests, regardless of whether I run the set in order or cherry-pick a single test case. And I want my setup function to then pass a variable with information to each test case (or test function) inside the set.

In practical terms, I'm working on a network server daemon, and some of my test cases require access to valid user credentials. I'd like to create a random user inside the setup function, then pass the ephemeral account/password to the test cases/functions that need it. Right now I'm either going through the creation process during each test case, or relying on the database being pre-configured with test accounts. Both methods have issues. I've also tried a "global" setup function, but in that scenario I'm creating user accounts even if the run fails before I reach the affected tests, or if I'm simply running a single test case.

I have another, related issue. For my tests which test/fuzz the network protocol directly, I have no way to set up a network connection and then pass that connection between various test cases reliably. Right now my "camelface" test is actually 20 different tests, making up about 100 HTTP requests, simply because the programmer who wrote it wanted to use a single session and pass state information between individual tests. (Create a random folder, then use that folder id later for the rename/delete folder test cases.)

So, what I'm asking, is whether it will be possible for a suite fixture function to setup some of these parameters before handing them to test cases, in a way that works reliably whether I'm running with CK_FORK, or CK_NOFORK?

Regardless, I'm a plus one for suite_add_unchecked_fixture(), as it's a setup in the right direction.

@ozars
Contributor Author

ozars commented Aug 27, 2018

Do you mean a test case, aka START_TEST/END_TEST, or do you mean a test function?

Currently, checked fixtures wrap the test functions (aka START_TEST/END_TEST) included in test cases. Test cases are TCase structs to which tests are added via tcase_add_test(). So there are three primitives: suites, test cases, and tests. The name similarity between test cases and tests confused me quite a bit at first while going through the docs, so I just wanted to clarify this.

I've also tried a "global" setup function, but in that scenario, I'm creating user accounts even if the run fails before I reach the affected tests, or I'm simply running a single test case.

If I understood correctly, you have some test cases which require common initialization (creating an ephemeral user/pass on a DB). Currently there isn't an elegant way for two test cases to share the same fixture, which was my point with suite fixtures. (If so-called suite fixtures are implemented, multiple test cases running under the same suite will have a convenient way to share a common setup/teardown which runs just once.)

One workaround to avoid unnecessary initialization may be a lazily-initialized singleton pattern in the unchecked setup of the test cases that require ephemeral keys. Pseudo example:

Credentials *creds = NULL;

void require_creds(void)
{
  if (!creds) {
    // DB query to create ephemeral credentials, assigning them to creds
    creds = ...;
  }
}

void cleanup_creds(void)
{
  if (creds) {
    // DB query to remove the ephemeral credentials, if initialized
    creds = NULL;
  }
}

/* This setup is for test cases which require credentials. */
void setup_requiring_creds(void)
{
  require_creds();
  // creds can be used here
  // ...
}

/* This setup is for test cases which don't require credentials. */
void setup_not_requiring_creds(void)
{
  // ...
}

// Test functions (i.e. START_TEST/END_TEST stuff)
// ...

int main(void) {
  // Code to configure and run suites, test cases and their fixtures
  // ...

  // Clean up creds
  cleanup_creds();

  // ...
  return 0;
}

I couldn't think of a simple way to tear down the credentials earlier than the end of the test program, though, since it isn't trivial for a teardown function to figure out whether credentials will be needed by subsequent test cases.

Also, those setups should be added as unchecked fixtures, since checked fixtures are run in child processes in CK_FORK mode and are therefore unable to modify the value of the global creds in the parent process (modified in require_creds()). This would result in reinitializing the credentials for each test run in a child process, unless the creds variable is stored in shared memory, which brings us to the related issue...

I have no way to set up a network connection and then pass that connection between various test cases reliably.

Assuming tests make changes to the object keeping the state of the network connection, one way to pass this state between tests running in different child processes in CK_FORK mode may be using some IPC mechanism (preferably shared memory). I don't know any other reliable (or easier) way to do this, though. Also, this may require preserving the order of tests, but I don't think that's an issue for now, as libcheck runs tests sequentially AFAIK.
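
As a sketch of the shared-memory idea (all names here are illustrative, and it assumes a POSIX platform where MAP_ANONYMOUS is available): a region mapped with MAP_SHARED in the parent before the fork() remains shared with every forked test process, so state written by one checked test is visible to later ones:

#include <stdlib.h>
#include <sys/mman.h>

typedef struct {
    int  folder_id;            /* e.g. an id one test creates and a later test reuses */
    char session_token[64];
} SharedState;

static SharedState *shared_state;

/* Call from main() or an unchecked fixture, i.e. in the parent before any fork(). */
static void shared_state_setup(void)
{
    shared_state = mmap(NULL, sizeof(*shared_state), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared_state == MAP_FAILED)
        abort();  /* real code should report the error properly */
}

static void shared_state_teardown(void)
{
    munmap(shared_state, sizeof(*shared_state));
}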

I hope these answer your questions. Forgive me for injecting answers to questions you probably addressed to @brarcher; I wanted to ensure I have a sound understanding of the library, as I'm gonna make some changes to it soon. @brarcher, please correct me if there are any mistakes above.

@brarcher
Contributor

Forgive me for injecting answers to questions

On my end, thanks!

please correct me if there are any mistakes above

Looks pretty good. (:

For the network connection, as long as the socket is created in an unchecked fixture, or perhaps in the unit test program during setup, the socket should be available to unit tests.
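
For example (a sketch with a placeholder address/port and error handling omitted): because open file descriptors are inherited across fork(), a socket connected in an unchecked fixture can be used directly by the checked tests:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static int conn_fd = -1;

/* Registered via tcase_add_unchecked_fixture(), so it runs in the parent before fork(). */
static void connection_setup(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                      /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder address */

    conn_fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(conn_fd, (struct sockaddr *)&addr, sizeof(addr));
}

static void connection_teardown(void)
{
    if (conn_fd != -1)
        close(conn_fd);
}

Since the forked tests share the underlying open file description, bytes consumed by one test are gone for the next, which is the single-session behavior described above.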

Also, this may require preserving the order of tests, but I don't think this is an issue for now as libcheck runs tests sequentially AFAIK.

Correct, the order of tests is deterministic. Further, there is no support for running tests in parallel.
