
feature(testing): Introduce a mock implementation for KnClient #306

Merged (4 commits into knative:master) Jul 31, 2019

Conversation

rhuss (Contributor) commented Jul 26, 2019

Commands must only use the `KnClient` API, and their unit tests should likewise
use only a mock implementation of this interface.

This commit introduces such a mock implementation. It works as in the example
below, which creates a simple service synchronously:

```go
// New mock client
client := knclient.NewMockKnClient(t)

// Recording:
r := client.Recorder()
// Check for an existing service --> no
r.GetService("foo", nil, errors.NewNotFound(v1alpha1.Resource("service"), "foo"))
// Create service (don't validate the given service --> "Any()" arg is allowed)
r.CreateService(knclient.Any(), nil)
// Wait for the service to become ready
r.WaitForService("foo", knclient.Any(), nil)
// Get the service to show its URL
r.GetService("foo", getServiceWithUrl("foo", "http://foo.example.com"), nil)

// Testing:
output, err := executeCommand(client, "create", "foo", "--image", "gcr.io/foo/bar:baz")
assert.NilError(t, err)
assert.Assert(t, util.ContainsAll(output, "created", "foo", "http://foo.example.com", "Waiting"))

// Validate that all recorded API methods have been called
r.Validate()
```

Such tests have three phases:

  • A recording phase where the mock client is prepared. In this phase the
    recorder is called with the expected arguments and the return values
    it should produce. The arguments can also be functions with the
    signature `func(t *testing.T, actual interface{}, expected interface{})`,
    which are called to verify a given argument. Such a function should
    call `t.Fail()` itself if the validation fails.
    A convenient `Any()` argument is provided to skip validation of an
    argument (see the example above).
    Methods can be recorded multiple times, but the order must reflect
    the actual calling order.

  • A playback phase where the test is executed, which in turn calls out
    to the mocks.

  • A validation phase to check the expected output. The recorder can
    also be validated to verify that all recorded mock calls have been
    used during the test.

    See service_create_mock_test.go for a full example.

@googlebot googlebot added the cla: yes Indicates the PR's author has signed the CLA. label Jul 26, 2019
@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 26, 2019
@knative-prow-robot knative-prow-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jul 26, 2019
rhuss (Contributor, Author) commented Jul 26, 2019

@maximilien you might be interested in this PR as it would simplify command testing considerably.

@rhuss rhuss changed the title feature(testing): Introduce a Mock implementation for KnClient feature(testing): Introduce a mock implementation for KnClient Jul 26, 2019
```diff
@@ -57,7 +58,10 @@ func (params *KnParams) GetNamespace(cmd *cobra.Command) (string, error) {
 	var err error
 	namespace, err = params.CurrentNamespace()
 	if err != nil {
-		return "", err
+		if !clientcmd.IsEmptyConfig(err) {
```
rhuss (Contributor, Author) commented:
This is PR #305 included here.

maximilien (Contributor) left a comment:
This is all good. My only concern is that it's a custom mock. Nothing wrong with that for some specific cases.

But in general I try to avoid custom mocks, using tools to generate them instead (when the generated mock is good enough).

The reason is that, like any custom supporting code, it needs to be maintained and adds to the ramp-up effort for new devs. Also, auto-generated mocks and doubles force well-defined interfaces and types.

One mock/doubles generating tool I have used with success is counterfeiter, also used in CF and other large OSS projects.
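For reference, counterfeiter fakes are typically wired up with a `go:generate` directive next to the interface. A sketch, assuming counterfeiter v6 and an exported `KnClient` interface (the output path and names here are illustrative):

```go
// Illustrative only: generates a fake for an exported KnClient interface.
//go:generate go run github.com/maxbrunsfeld/counterfeiter/v6 -o fake/fake_kn_client.go . KnClient
```

Running `go generate ./...` would then produce a `FakeKnClient` with per-method stubs and call-count helpers.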

maximilien (Contributor) commented:
/approve
/ok-to-test

@knative-prow-robot knative-prow-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Jul 26, 2019
knative-prow-robot (Contributor) commented:
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: maximilien, rhuss

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

knative-metrics-robot commented:
The following is the coverage report on pkg/.
Say /test pull-knative-client-go-coverage to re-run this coverage report

File                                  Old Coverage    New Coverage   Delta
pkg/serving/v1alpha1/client_mock.go   Did not exist   96.0%          n/a

rhuss (Contributor, Author) commented Jul 27, 2019

The reasons why I decided against a mock framework (GoMock, counterfeiter, pegomock, ...) are:

  • I wanted to avoid code generation
  • I wanted the simplest implementation possible that suits our needs

With respect to maintenance: yes, it's a bit more than calling a code generator, but not by much. This is all you have to add for a new `KnClient` method:

```go
// Get a revision by name
func (r *Recorder) GetRevision(name interface{}, revision *v1alpha1.Revision, err error) {
	r.add("GetRevision", apiMethodCall{[]interface{}{name}, []interface{}{revision, err}})
}

func (c *MockKnClient) GetRevision(name string) (*v1alpha1.Revision, error) {
	call := c.getCall("GetRevision")
	c.verifyArgs(call, name)
	return call.result[0].(*v1alpha1.Revision), errorOrNil(call.result[1])
}
```

As mentioned, simplicity was the main driving factor (which also makes for a simplified call to the recording method, combining expected arguments and return values in a single argument list).

Maybe that's too simplistic. If so, we can of course switch over to a full mock framework.

rhuss (Contributor, Author) commented Jul 30, 2019

/retest

maximilien (Contributor) commented Jul 30, 2019

I am not going to oppose this, but my vote would be for using a mock framework, specifically because it's less custom code to maintain.

However, since we have not used any mock framework in this project so far, the decision to adopt a framework is heavier than a custom mock... hence I'd compromise with your solution. I want to be easy 🏖 and trust your judgement here.

👍 this message and I'll lgtm. Cheers 🍻

rhuss (Contributor, Author) commented Jul 30, 2019

Some other benefits of using this kind of mock for testing commands:

  • If we want to offer a `KnClient` API, as stated as a goal, offering a typed test API is a plus for users of that API.
  • By using the mock interface exclusively in the kn commands' unit tests, we get a clear separation and also an indication that no command uses the plain Knative APIs directly, but only via the `KnClient` abstraction.

The mock API is deliberately kept very simplistic to make it easy (even trivial) to extend or adapt.
However, I totally agree with @sixolet that excellent test coverage of the `KnClient` implementation is essential if we don't want to rely only on the integration tests.

maximilien (Contributor) commented:
/lgtm

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 31, 2019
@knative-prow-robot knative-prow-robot merged commit 71c39e8 into knative:master Jul 31, 2019
maximilien pushed a commit to maximilien/client that referenced this pull request Jul 31, 2019
…ve#306)

* feature(testing): Introduce a Mock implementation for KnClient
* chore: Test the mock client
* chore: Minor fixes
* chore: Cosmetic fixes
coryrc pushed a commit to coryrc/client that referenced this pull request May 14, 2020
Labels: approved, cla: yes, lgtm, ok-to-test, size/L

5 participants