
Integration Tests for Borg #1716

Merged: 75 commits into borgbase:master on Aug 5, 2023

Conversation

@jetchirag (Contributor) commented May 20, 2023

Related Issue

Fixes #1711

Motivation and Context

https://github.com/borgbase/vorta/wiki/Google-Summer-of-Code-2023-Ideas#test-on-live-borg-binary

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • I have read the CONTRIBUTING guide.
  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

I provide my contribution under the terms of the license of this repository and I affirm the Developer Certificate of Origin.

Signed-off-by: Chirag Aggarwal <[email protected]>
@jetchirag (Contributor, Author) commented May 23, 2023

I'm facing a weird issue with testing.

This is the test with the mock command removed:

def test_repo_list(qapp, qtbot):
    main = qapp.main_window
    tab = main.archiveTab

    main.tabWidget.setCurrentIndex(3)
    tab.refresh_archive_list()
    qtbot.waitUntil(lambda: not tab.bCheck.isEnabled(), **pytest._wait_defaults)

    assert not tab.bCheck.isEnabled()

    qtbot.waitUntil(lambda: 'Refreshing archives done.' in main.progressText.text(), **pytest._wait_defaults)
    assert ArchiveModel.select().count() == 3
    assert 'Refreshing archives done.' in main.progressText.text()
    assert tab.bCheck.isEnabled()

Running the test on its own produces this error:

Traceback (most recent call last):
  File "/vorta/vorta/src/vorta/views/schedule_tab.py", line 106, in <lambda>
    self.app.scheduler.schedule_changed.connect(lambda pid: self.draw_next_scheduled_backup())
                                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vorta/vorta/src/vorta/views/schedule_tab.py", line 175, in draw_next_scheduled_backup
    status = self.app.scheduler.next_job_for_profile(self.profile().id)
                                                     ^^^^^^^^^^^^^^
  File "/vorta/vorta/src/vorta/store/models.py", line 201, in profile
    return BackupProfileModel.get(id=self.window().current_profile.id)
                                     ^^^^^^^^^^^^^
RuntimeError: wrapped C/C++ object of type ScheduleTab has been deleted
________________________________________________________________________________
FAILED

But it works fine if I put any test above it:

def test_assad():
    return

Any clue what could be the cause?

I did see that the offending signal is disconnected on teardown in conftest.py:
https://github.com/borgbase/vorta/blame/master/tests/conftest.py#L104

@jetchirag (Contributor, Author)

@real-yfprojects Can I move existing tests to unit/ and these tests to integration/?

@real-yfprojects (Collaborator)

> Any clue what could be the cause?
>
> I did see that the offending signal is disconnected on teardown in conftest.py

My supposition would be that this error is somehow triggered by emitting schedule_changed while the ScheduleTab of the deleted main window is still connected. Try disconnecting the signal before the yield statement in conftest.py.
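To illustrate that teardown-ordering idea, here is a minimal sketch using a plain-Python stand-in for a Qt signal. The Signal and Scheduler-style names below are hypothetical stand-ins, not Vorta's actual code; the real fix would call connect/disconnect on the PyQt signal itself in conftest.py.

```python
import pytest


class Signal:
    """Hypothetical stand-in for a Qt signal: connect/disconnect/emit."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def disconnect(self, slot):
        self._slots.remove(slot)

    def emit(self, *args):
        for slot in list(self._slots):
            slot(*args)


@pytest.fixture
def schedule_changed_signal():
    sig = Signal()
    received = []
    slot = received.append
    sig.connect(slot)
    yield sig
    # Disconnect before the rest of teardown runs, so a late emit cannot
    # reach a slot whose underlying Qt objects may already be deleted.
    sig.disconnect(slot)
```

In Vorta's conftest.py the equivalent move would be disconnecting schedule_changed before the code after the yield destroys the main window.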

@jetchirag (Contributor, Author)

Pushed new integration tests. Tests will fail from this commit onwards.

I've cleaned up the print statements for this commit, but there are several wait and debug statements like main.show() which aren't required for testing. I'll clean them up once this test file is complete.

@real-yfprojects (Collaborator) left a comment

When you have finished a test file, please add short docstrings to the functions and one docstring at the top of the file explaining what is being tested.

Review threads: tests/integration/test_archives.py (outdated, resolved); noxfile.py (resolved)
@real-yfprojects (Collaborator)

I was wondering: what happens when a borg command fails (return code > 0)? Will the tests fail too?

Signed-off-by: Chirag Aggarwal <[email protected]>
@jetchirag (Contributor, Author) commented Jun 9, 2023

No, they won't directly fail. All tests currently rely on Vorta showing a success message or some behavior, and assert on or wait for that.

I tested this earlier with borg check: although the borg command failed, the test passed since Vorta still displayed the completed message.

Only test_borg.py, which calls Borg jobs directly, tests for the return code. Perhaps I can go through the messages I'm testing and see whether they internally check the return code?

@real-yfprojects (Collaborator)

> No, they won't directly fail. All tests currently rely on Vorta showing a success message or some behavior, and assert on or wait for that.
>
> I tested this earlier with borg check: although the borg command failed, the test passed since Vorta still displayed the completed message.
>
> Only test_borg.py, which calls Borg jobs directly, tests for the return code. Perhaps I can go through the messages I'm testing and see whether they internally check the return code?

As long as there is a test catching the error, it's fine, I think. However, if you find a simple solution for catching the error in the other tests, that wouldn't be bad either.
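One simple way to surface a non-zero return code directly would be a helper that asserts on it. This is a sketch: the run_checked helper is hypothetical, not part of Vorta, and the real integration tests drive borg through Vorta's job machinery instead of calling the binary themselves.

```python
import subprocess
import sys


def run_checked(cmd):
    """Run a command and fail the test on a non-zero return code,
    rather than relying only on the UI's completion message."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode == 0, (
        f"{cmd[0]} exited with {result.returncode}: {result.stderr.strip()}"
    )
    return result.stdout


# The Python interpreter stands in for the borg binary here:
print(run_checked([sys.executable, "-c", "print('ok')"]).strip())  # → ok
```

A test could then wrap its borg invocation (say, a borg check) in such a helper so that a non-zero exit fails the test immediately, independent of what message Vorta displays.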

Review threads: Makefile (outdated, resolved); tests/integration/conftest.py (resolved)
Signed-off-by: Chirag Aggarwal <[email protected]>
@bigtedde (Collaborator) commented Jul 24, 2023

I checked out this branch and ran the tests on an M1 Mac. I slightly modified the command to work on the arm64 architecture:

BORG_OPENSSL_PREFIX=/usr/local/Cellar/openssl@3/3.1.0 arch -arm64 make test
...
nox > Ran multiple sessions:
nox > * run_tests(borgbackup='1.1.18'): failed
nox > * run_tests(borgbackup='1.2.2'): success
nox > * run_tests(borgbackup='1.2.4'): failed
nox > * run_tests(borgbackup='2.0.0b5'): failed

v1.1.18 failed with:
ImportError: dlopen(/vorta/.nox/run_tests-borgbackup-1-1-18/lib/python3.11/site-packages/borg/compress.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_LZ4_compressBound'

v1.2.2: SUCCESS

v1.2.4 and 2.0.0b5 failed with:
ImportError: dlopen(/vorta/.nox/run_tests-borgbackup-2-0-0b5/lib/python3.11/site-packages/borg/crypto/low_level.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_EVP_aes_256_ctr'

I think the failures are due to M1 Mac issues and not the tests themselves. IIRC I've hit the "symbol not found in flat namespace" issue before and believe I was able to get past it, but I can't recall how.

@m3nu (Contributor) commented Jul 25, 2023

> I think the failures are due to M1 Mac issues and not the tests themselves. IIRC I've hit the "symbol not found in flat namespace" issue before and believe I was able to get past it, but I can't recall how.

Yes, likely more related to Homebrew libs than to our code here.

What happens if you install and run those Borg versions via pip? Or if you point it to OpenSSL 1.1 (it should be in the Homebrew folder too)?

@bigtedde (Collaborator)

> What happens if you install and run those Borg versions via pip? Or if you point it to OpenSSL 1.1 (it should be in the Homebrew folder too)?

Pointed it to OpenSSL 1.1, and now all versions of Borg pass except v1.1.18, with the same error as before.

@jetchirag (Contributor, Author)

Are you able to use v1.1.18 normally (via pip installation)?

@bigtedde (Collaborator)

> Are you able to use v1.1.18 normally (via pip installation)?

Unfortunately no, I believe I always get the same error. But that is (maybe?) a good thing, because it means the issue lies with my system and not your code :)

@real-yfprojects (Collaborator)

What's missing for this PR to be merged?

@m3nu (Contributor) commented Jul 29, 2023

Minor conflict, but otherwise we should merge it.


@m3nu (Contributor) commented Aug 4, 2023

Is this ready to merge? You can add tests for new features when adding those features. One CI test timed out (looks like a GitHub issue).

@jetchirag (Contributor, Author)

Yes, ready to merge from my side. Which test timed out? The last commit seems to show all passed.

@m3nu (Contributor) commented Aug 4, 2023

One had a timeout and I restarted it this morning. It didn't look related to the code here.

real-yfprojects merged commit b015368 into borgbase:master on Aug 5, 2023. 39 checks passed.
@real-yfprojects (Collaborator)

(I've disabled the old status checks and will enable the new ones after this is merged.)

👍

@real-yfprojects (Collaborator)

Congratulations on this very big PR, @jetchirag. 🎉 You put lots of work into it, and it is well worth it: for the first time, we can now be sure that PRs run against actual borg versions.

@jetchirag (Contributor, Author)

@real-yfprojects Thank you! This was a big learning experience for me.

jetchirag deleted the borg-live-tests branch on August 6, 2023 at 11:36.
@m3nu (Contributor) commented Aug 6, 2023

Congrats on merging this beast! 👏

@m3nu (Contributor) commented Aug 13, 2023

Tests take a good amount of time now. Is it possible to run some of them on demand only, rather than for each commit, and to run only a sensible selection per commit? @jetchirag

I still want to run all tests before doing a final merge, just not for each commit. For the macOS binary, we have an on-demand workflow that can be triggered by the user.

@m3nu (Contributor) commented Aug 13, 2023

On the plus side, the tests seem to be fairly stable. We used to have some issues with race conditions, but they seem less frequent now.

@jetchirag (Contributor, Author) commented Aug 16, 2023

@m3nu I assume it would be through workflow_dispatch? I'll check how we can select specific tests to run, but do you know which tests would fall under a sensible selection?
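If the trigger does go through workflow_dispatch, one way to express the selection on the pytest side would be markers. A sketch, assuming a hypothetical "integration" marker (which would still need to be registered in the project's pytest configuration):

```python
import pytest


# Hypothetical marker: tag the slow live-borg tests so CI can include them
# on demand (pytest -m integration) and skip them per commit
# (pytest -m "not integration").
@pytest.mark.integration
def test_repo_list_live_borg():
    ...
```

A per-commit job would then run pytest -m "not integration", while the on-demand workflow runs the full suite.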

@m3nu (Contributor) commented Aug 17, 2023

Let's discuss this further in #1778.

BTW: after merging some PRs this morning, I'm still waiting for tests after 30 minutes or so.

Successfully merging this pull request may close these issues: Running tests on live borg binary.

5 participants