
Regular prints are indistinguishable from the listing (--info) printout #668

Open
cnx-tcsikos opened this issue Jul 2, 2021 · 7 comments

@cnx-tcsikos

Hi!
I'm working on integrating Testplan support into the "Python Test Explorer for VSCode" extension.
I have encountered a scenario where a global print("Hello world") in the main test_plan.py file shows up in the listing (--info) printout.
This corrupts the whole printout and makes parsing unnecessarily hard.

Example:

import sys

from testplan import test_plan
from testplan.testing.multitest import MultiTest

print("Hello")

@test_plan(name="Test App")
def main(plan):
    plan.add(
        MultiTest(
            name="Test",
            suites=[
                ...
            ]
        )
    )

print("World")

if __name__ == '__main__':
    sys.exit(not main())

python test_plan.py --info pattern-full

Hello
World
<tests>

Proposed solution

  • Print special marker lines before and after the test listing (a parsing sketch follows below)
    • E.g. '==DISCOVERED TESTS BEGIN=='
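For illustration, a minimal sketch of how an extension could slice the listing out of stdout if such markers existed. Both marker strings here are hypothetical, not an implemented Testplan feature:

# Hypothetical marker strings -- not an implemented Testplan feature.
BEGIN_MARKER = "==DISCOVERED TESTS BEGIN=="
END_MARKER = "==DISCOVERED TESTS END=="

def extract_listing(stdout):
    """Return only the lines printed between the begin/end markers."""
    lines = stdout.splitlines()
    start = lines.index(BEGIN_MARKER) + 1  # raises ValueError if the marker is absent
    stop = lines.index(END_MARKER, start)
    return lines[start:stop]
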
@cnx-tcsikos
Author

I am aware of custom listers. However, since this runs as an extension, I cannot force the user to use a special lister for test discovery to work.

@kn-ms
Contributor

kn-ms commented Jul 2, 2021

Hi,

Thank you for raising this. This is a perfectly valid requirement: from an external tool's point of view, there should be a stable way of listing testcases. The currently implemented listers are aimed mostly at human users. You are right that a custom lister will not be generic enough, since users cannot be forced to use one, but a new lister can easily be added to Testplan.

A new lister for external tools could be more structured, dumping JSON or XML, which would be easier to consume than the current '::'-separated names; it could even deliver extra information useful to those tools. If you have the capacity to work on this kind of lister, we are open to pull requests. If not, I will put it up for prioritization within our team, but please provide some feedback on the ideal format/information the lister should return.
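
Purely as an illustration of the kind of structure such a lister might emit, sketched here as a Python dict (all field names are hypothetical; this is not an existing Testplan format):

# Hypothetical listing structure -- field names are illustrative only.
listing = {
    "name": "Test App",
    "tests": [
        {
            "name": "Test",  # a MultiTest
            "suites": [
                {"name": "MySuite", "testcases": ["test_case"]},
            ],
        },
    ],
}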

@cnx-tcsikos
Author

Yes! A JSON format would be great!

From my point of view, this is the interface I have to comply with:
https://github.com/hbenl/vscode-test-adapter-api/blob/e7ac0ca4fa483b9354b2679900937df66cdb3a3f/src/index.ts#L185-L267
I believe it is a well-thought-through format, as it has become the VS Code standard test API.
Note that for Testplan, both App and Suite are of the TestSuiteInfo type.

Besides the mandatory id and label, the next most important fields are file and line.
For me:

  • id is the same as a line from --info pattern-full, but with '::' changed to ':' so it can be used with --pattern
  • label is the simple name (a conversion sketch follows this list)

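A minimal sketch of that id/label mapping, assuming each listing line is a '::'-separated full test path (the example string below is illustrative, not taken from a real run):

def to_test_info(listing_line):
    """Map one `--info pattern-full` line to the adapter's id/label fields."""
    parts = listing_line.split("::")
    return {
        "id": ":".join(parts),  # ':'-separated so it works with --pattern
        "label": parts[-1],     # the simple (leaf) name
    }

# e.g. to_test_info("Test App::Test::MySuite::test_case")
# -> {"id": "Test App:Test:MySuite:test_case", "label": "test_case"}
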
While implementing it would be fun, the rigorous DCO and the potential documentation avalanche it could cause probably won't fit well into my working hours :)
For the first iteration of the VS Code extension, I can work with the current printout.

PS: I did not expect such positive feedback :) Thanks!

@Pyifan
Contributor

Pyifan commented Jul 6, 2021

Hi,

We have recorded an internal ticket for adding a reasonable interface for programmatic listing. It is not prioritized at the moment; we will keep you posted.

I'm also wondering whether we could re-use the exporters to generate the data you need: dry-run the testplan and write out a JSON/XML report skeleton.

By the way, have you tried Testplan's interactive mode? That might be another thing we could use for integrating with VS Code.

@cnx-tcsikos
Author

Thanks!

By the way, have you tried Testplan's interactive mode? That might be another thing we could use for integrating with VS Code.

I haven't tried interactive mode, mainly because the Test Adapter extension doesn't support starting and stopping test environments.

@cnx-tcsikos
Author

Just a quick question: does Testplan provide an API to programmatically get the list of tests (from a separate Python script that just imports the testplan script file), with information similar to what we have been discussing?

Quote from the vscode extension author for clarification:

The idea is to do something similar to how test discovery is done for unittest (see https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/src/unittest/unittestScripts.ts). Unittest outputs discovered tests to stdout by default. However, there is an option to call unittest discovery programmatically, which returns a set of objects that can be processed and output in a format the extension can consume (defaultTestLoader.discover(start_directory, pattern=pattern) in the script). So my question is whether something similar is possible with testplan?
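
For reference, the programmatic unittest discovery described in the quote needs only the standard library; roughly:

import unittest

def iter_cases(suite):
    # Recursively flatten nested TestSuite objects into TestCase instances.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_cases(item)
        else:
            yield item

# The start directory and pattern are illustrative.
suite = unittest.defaultTestLoader.discover(".", pattern="test_*.py")
for case in iter_cases(suite):
    print(case.id())  # e.g. "module.ClassName.test_method"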

@Pyifan
Contributor

Pyifan commented Jul 8, 2021

Just a quick question: does Testplan provide an API to programmatically get the list of tests (from a separate Python script that just imports the testplan script file), with information similar to what we have been discussing?

No, we don't have that. Testplan's test discovery can be more complicated than unittest/pytest; they can probably get away with scanning directories and importing modules. If we did the same, we would probably only learn which testsuites/testcases exist, but we could not construct the complete plan/multitest/testsuite/testcase structure.

That said, it might be a nice feature to discover tests from the file structure and allow the user to run testsuites without having to add them to a multitest (when no environment is required). We will give this some thought.
