e2e: Move cleanup and timesync to Before/After hooks
To make sure that timesync is re-enabled even if some Steps fail, move
this action to the After hook of the given Scenario. Do the same with
`crc cleanup`, so the cluster is destroyed after the selected
Scenarios.
jsliacan authored and openshift-merge-robot committed Jan 19, 2023
1 parent a8efd07 commit 27a19c8
Showing 5 changed files with 57 additions and 20 deletions.
2 changes: 1 addition & 1 deletion test/e2e/features/basic.feature
@@ -28,7 +28,7 @@ Feature: Basic test
When executing crc status command fails
Then stderr should contain "crc does not seem to be setup correctly, have you run 'crc setup'?"

-@darwin @linux @windows
+@darwin @linux @windows @cleanup
Scenario: CRC start usecase
Given executing "crc setup --check-only" fails
# Request start with monitoring stack
18 changes: 4 additions & 14 deletions test/e2e/features/cert_rotation.feature
@@ -1,28 +1,18 @@
@cert_rotation @linux
Feature: Certificate rotation test

-User starts CRC more than one month after the release. They expect
+User starts CRC more than 13 months after the release. They expect
certificate rotation to happen successfully and to be able to deploy
an app and check its accessibility.

-Background: Setup CRC and rewind the clock forward
-When executing single crc setup command succeeds
-And executing "sudo timedatectl set-ntp off" succeeds
-Then executing "sudo date -s '13 month'" succeeds
-And with up to "10" retries with wait period of "1s" command "virsh --readonly -c qemu:///system capabilities" output matches "^<capabilities>"

+@timesync @cleanup
Scenario: Start CRC "in the future" and clean up
+Given executing single crc setup command succeeds
When starting CRC with default bundle along with stopped network time synchronization succeeds
Then stdout should contain "Started the OpenShift cluster"
And executing "eval $(crc oc-env)" succeeds
When checking that CRC is running
Then login to the oc cluster succeeds
Then executing "oc whoami" succeeds
And stdout should contain "kubeadmin"
-# Set clock back to the original time
-When executing "sudo date -s '-13 month'" succeeds
-And executing "sudo timedatectl set-ntp on" succeeds
-# CRC delete and cleanup
-When executing "crc delete -f" succeeds
-Then stdout should contain "Deleted the instance"
-When executing crc cleanup command succeeds
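The `with up to "10" retries with wait period of "1s" … output matches` step is backed in the suite by a helper along the lines of `util.ExecuteCommandWithRetry`, whose actual implementation may differ. A hedged stdlib sketch of such a retry loop (`retryUntilContains` is an illustrative name, not the suite's API):

```go
// Sketch of a bounded retry loop: run a command up to `attempts` times,
// waiting `wait` between tries, until its output contains `want`.
package main

import (
	"fmt"
	"strings"
	"time"
)

func retryUntilContains(attempts int, wait time.Duration, run func() (string, error), want string) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := run()
		if err == nil && strings.Contains(out, want) {
			return nil // success: expected output observed
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("output %q does not contain %q", out, want)
		}
		time.Sleep(wait)
	}
	return lastErr
}

func main() {
	calls := 0
	// Simulated "virsh capabilities" call that only succeeds on the third try.
	run := func() (string, error) {
		calls++
		if calls < 3 {
			return "<still booting>", nil
		}
		return "<capabilities>", nil
	}
	err := retryUntilContains(10, time.Millisecond, run, "<capabilities>")
	fmt.Println("err:", err, "calls:", calls)
	// expected: err: <nil> calls: 3
}
```

Bounding the retries keeps a hook failure from hanging the suite: after the last attempt the final error is surfaced instead of looping forever.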

1 change: 1 addition & 0 deletions test/e2e/features/proxy.feature
@@ -7,6 +7,7 @@ Feature: Behind proxy test
Background: Setup the proxy container using podman
* executing "podman run --name squid -d -p 3128:3128 quay.io/crcont/squid" succeeds

+@cleanup
Scenario: Start CRC behind proxy
Given executing single crc setup command succeeds
And executing "crc config set http-proxy http://192.168.130.1:3128" succeeds
5 changes: 1 addition & 4 deletions test/e2e/features/story_openshift.feature
@@ -75,7 +75,7 @@ Feature: 4 Openshift stories

# Operator from marketplace

-@darwin @linux @windows @testdata @story_marketplace
+@darwin @linux @windows @testdata @story_marketplace @cleanup
Scenario: Install new operator
When executing "oc apply -f redis-sub.yaml" succeeds
Then with up to "20" retries with wait period of "30s" command "oc get csv" output matches ".*redis-operator\.(.*)Succeeded$"
@@ -86,8 +86,5 @@
Then stdout should match "^pod(.*)deleted$"
# after a while 1 pods should be up & running again
And with up to "10" retries with wait period of "30s" command "oc get pods" output matches "redis-standalone-[a-z0-9]* .*Running.*"
-# cleanup
-When executing "crc delete -f" succeeds
-And executing crc cleanup command succeeds


51 changes: 50 additions & 1 deletion test/e2e/testsuite/testsuite.go
@@ -179,15 +179,64 @@ func InitializeScenario(s *godog.ScenarioContext) {
 			fmt.Println("error logging:", err)
 		}
 
-		// copy data/config files to test dir
 		for _, tag := range sc.Tags {
+			// copy data/config files to test dir
 			if tag.Name == "@testdata" {
 				err := util.CopyFilesToTestDir()
 				if err != nil {
 					os.Exit(1)
 				}
 			}
+
+			// move host's date 13 months forward and turn timesync off
+			if tag.Name == "@timesync" {
+				err := util.ExecuteCommand("sudo timedatectl set-ntp off")
+				if err != nil {
+					fmt.Println(err)
+					os.Exit(1)
+				}
+				err = util.ExecuteCommand("sudo date -s '13 month'")
+				if err != nil {
+					fmt.Println(err)
+					os.Exit(1)
+				}
+				err = util.ExecuteCommandWithRetry(10, "1s", "virsh --readonly -c qemu:///system capabilities", "contains", "<capabilities>")
+				if err != nil {
+					fmt.Println(err)
+					os.Exit(1)
+				}
+			}
 		}
 
 		return ctx, nil
 	})
+
+	s.After(func(ctx context.Context, sc *godog.Scenario, err error) (context.Context, error) {
+
+		for _, tag := range sc.Tags {
+
+			// move host's date 13 months back and turn timesync on
+			if tag.Name == "@timesync" {
+				err := util.ExecuteCommand("sudo date -s '-13 month'")
+				if err != nil {
+					fmt.Println(err)
+					os.Exit(1)
+				}
+				err = util.ExecuteCommand("sudo timedatectl set-ntp on")
+				if err != nil {
+					fmt.Println(err)
+					os.Exit(1)
+				}
+			}
+
+			if tag.Name == "@cleanup" {
+				err := util.ExecuteCommand("crc cleanup")
+				if err != nil {
+					fmt.Println(err)
+					os.Exit(1)
+				}
+			}
+
+		}
+
+		return ctx, nil
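Both hooks shell out through `util.ExecuteCommand`. A minimal sketch of what such a wrapper might look like, assuming a POSIX shell; the real helper in `test/e2e/util` may differ, for example to support Windows hosts:

```go
// Sketch of a shell-out helper (illustrative, not the suite's actual code):
// run a command line via "sh -c" and fold its combined output into the error
// on failure, so hook failures are easy to diagnose.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func executeCommand(cmdline string) (string, error) {
	out, err := exec.Command("sh", "-c", cmdline).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("%q failed: %w: %s", cmdline, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	out, err := executeCommand("echo hello")
	fmt.Println(out, err) // hello <nil>
}
```

Returning the combined stdout/stderr in the error is a deliberate choice here: when an After hook's `crc cleanup` fails, the CI log then shows what the command actually printed.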
