
Update vendoring of docker/docker #155

Merged 1 commit into master from fix-and-update-vendor on Jun 7, 2017

Conversation

@vdemeester (Collaborator):

Updating the vendoring and moving some functions over here (they are no longer used in moby/moby).

Signed-off-by: Vincent Demeester [email protected]

vendor.conf (outdated):

     github.com/docker/go-connections e15c02316c12de00874640cd76311849de2aeed5
     github.com/docker/go-events 18b43f1bc85d9cdd42c05a6cd2d444c7a200a894
     github.com/docker/go-units 9e638d38cf6977a37a8ea0078f3ee75a7cdb2dd1
    -github.com/docker/libnetwork b13e0604016a4944025aaff521d9c125850b0d04
    +github.com/docker/libnetwork 83e1e49475b88a9f1f8ba89a690a7d5de42e24b9
Contributor:

Seems like libnetwork is only used for a single IsLocalhost function in github.com/docker/libnetwork/resolvconf/dns.

Could we make the vendoring more specific, and reference that package directly? Or possibly copy that function into the cli to avoid the dependency.
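For reference, a minimal sketch of what a copied helper could look like, assuming the regex-based implementation libnetwork used at the time (the package name and placement are illustrative, not what the PR necessarily did):

    package opts // hypothetical location within the cli

    import "regexp"

    // localhostIPRegexp matches IPv4 loopback addresses (127.0.0.0/8)
    // and the IPv6 loopback address (::1).
    var localhostIPRegexp = regexp.MustCompile(
        `((127\.([0-9]{1,3}\.){2}[0-9]{1,3})|(::1)$)`)

    // isLocalhost reports whether ip is a loopback address.
    func isLocalhost(ip string) bool {
        return localhostIPRegexp.MatchString(ip)
    }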

@vdemeester (Collaborator, Author):

I would vote for copying the thing 👼

@vdemeester (Collaborator, Author):

and updated 😉

@codecov-io commented Jun 5, 2017:

Codecov Report

Merging #155 into master will decrease coverage by 0.11%.
The diff coverage is 17.1%.

    @@            Coverage Diff             @@
    ##           master     #155      +/-   ##
    ==========================================
    - Coverage   44.96%   44.84%   -0.12%
    ==========================================
      Files         169      171       +2
      Lines       11381    11464      +83
    ==========================================
    + Hits         5117     5141      +24
    - Misses       5971     6028      +57
    - Partials      293      295       +2

    -Name:   opts.name,
    -Labels: runconfigopts.ConvertKVStringsToMap(opts.labels.GetAll()),
    +Name:   options.name,
    +Labels: runconfigopts.ConvertKVStringsToMap(options.labels.GetAll()),

Contributor:

s/runconfigopts/opts/ ? (one more below this as well)
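That is, assuming the moved ConvertKVStringsToMap now lives in the cli's own opts package (import path illustrative), the new lines would become:

    import "github.com/docker/cli/opts"

    Name:   options.name,
    Labels: opts.ConvertKVStringsToMap(options.labels.GetAll()),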

@vdemeester (Collaborator, Author):

oh right :)

Signed-off-by: Vincent Demeester <[email protected]>
@aaronlehmann (Contributor):

LGTM

When did we start enforcing code coverage targets for PRs? I thought we previously agreed not to do this: #109 (comment)

@dnephin (Contributor) commented Jun 6, 2017:

I think it was enabled by that PR; I didn't realize that it would be, but I don't think it's a problem.

Enforcement is up to reviewers. We don't have to wait for it to be green if we decide that the PR doesn't need test coverage, or that the PR is important enough to merge without coverage. A vendoring PR is a good example of when we might ignore it.

As far as I can tell, enabling statuses is the only way to get the "patch coverage" percentage, which I think is the most valuable part of coverage. I'm glad we have it.

@dnephin (Contributor) left a comment:

LGTM

@aaronlehmann (Contributor):

> I think it was enabled by that PR; I didn't realize that it would be, but I don't think it's a problem.
>
> Enforcement is up to reviewers. We don't have to wait for it to be green if we decide that the PR doesn't need test coverage, or that the PR is important enough to merge without coverage. A vendoring PR is a good example of when we might ignore it.
>
> As far as I can tell, enabling statuses is the only way to get the "patch coverage" percentage, which I think is the most valuable part of coverage. I'm glad we have it.

I don't take any pleasure in arguing about this stuff, but I really don't like the idea of the "passing/failing checks" status becoming essentially random. IMHO, only reliable checks that we actually care about should count towards the status of the PR. If a PR is marked as failing its checks, there should be a good reason for it, not "oh, disregard that check because it's finicky and doesn't matter much". It would just be unfortunate and confusing if the status of the PR wasn't actually meaningful because of stuff like this. It means a lot of clicking through from the PR list to see if something's actually failing CI or not, and it's confusing to new contributors.

I'm also disappointed to see this marking PRs as failing their checks when I previously asked that we not do this, and was told there were no plans to.

@vdemeester (Collaborator, Author):

@aaronlehmann @dnephin let's remove the "failure" checks for now and discuss this in a proposal, sounds good?

@aaronlehmann (Contributor):

Wondering if it's possible to decrease the project and patch coverage targets to the point where the checks will always be marked successful, but then maintain informal coverage targets that we can apply more loosely.

This is the codecov.yml we use for swarmkit:

    comment:
      layout: header, changes, diff, sunburst
    coverage:
      status:
        patch: false
        project:
          default:
            enabled: yes
            target: 0
        changes: false
    ignore:
      - "**/testutils"

Perhaps something similar (with patch having `target: 0` instead of being disabled) would work?
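A sketch of that variant, assuming the same codecov.yml syntax (untested):

    comment:
      layout: header, changes, diff, sunburst
    coverage:
      status:
        patch:
          default:
            target: 0    # report patch coverage, but never fail the check
        project:
          default:
            enabled: yes
            target: 0    # likewise for project coverage
        changes: false
    ignore:
      - "**/testutils"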

@dnephin (Contributor) commented Jun 6, 2017:

> let's remove the "failure" checks for now

Is there any other way of seeing patch coverage?

@vdemeester (Collaborator, Author):

@aaronlehmann works for me. Another way is to increase the threshold (`threshold: "10%"`, for example). That way it will only fail if the PR decreases coverage by 10% (which would be quite bad).
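A sketch of what that could look like in codecov.yml (values illustrative):

    coverage:
      status:
        project:
          default:
            target: auto
            threshold: "10%"   # allow coverage to drop by up to 10% before failing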

@aaronlehmann (Contributor):

LGTM

@aaronlehmann merged commit 583ed2e into docker:master on Jun 7, 2017
@GordonTheTurtle added this to the 17.06.0 milestone on Jun 7, 2017
@vdemeester deleted the fix-and-update-vendor branch on June 7, 2017 at 11:27
@thaJeztah modified the milestones: 17.07.0, 17.06.0 on Jun 8, 2017
trapier pushed a commit to trapier/cli that referenced this pull request on Sep 30, 2019: [19.03] sync to upstream 19.03 a63faeb (…-19.03-a63faebcf13973933a3ae646e5a9e5befaad42fd)
thaJeztah added a commit to thaJeztah/cli that referenced this pull request on Jul 3, 2024:

This makes a quick pass through our tests:

Discard output/err
----------------------------------------------

Many tests were testing for error-conditions, but didn't discard output.
This produced a lot of noise when running the tests, and made it hard
to discover if there were actual failures, or if the output was expected.
For example:

    === RUN   TestConfigCreateErrors
    Error: "create" requires exactly 2 arguments.
    See 'create --help'.

    Usage:  create [OPTIONS] CONFIG file|- [flags]

    Create a config from a file or STDIN
    Error: "create" requires exactly 2 arguments.
    See 'create --help'.

    Usage:  create [OPTIONS] CONFIG file|- [flags]

    Create a config from a file or STDIN
    Error: error creating config
    --- PASS: TestConfigCreateErrors (0.00s)

And after discarding output:

    === RUN   TestConfigCreateErrors
    --- PASS: TestConfigCreateErrors (0.00s)
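A minimal sketch of the discard pattern, assuming a Cobra command under test (the constructor is hypothetical):

    cmd := newConfigCreateCommand(cli) // hypothetical constructor
    cmd.SetOut(io.Discard)             // silence regular output
    cmd.SetErr(io.Discard)             // silence error and usage output
    err := cmd.Execute()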

Use sub-tests where possible
----------------------------------------------

Some tests were already set up to use test-tables, and even had a usable
name (or in some cases "error" to check for). Change them to actual sub-
tests. Same test as above, but now with sub-tests and output discarded:

    === RUN   TestConfigCreateErrors
    === RUN   TestConfigCreateErrors/requires_exactly_2_arguments
    === RUN   TestConfigCreateErrors/requires_exactly_2_arguments#01
    === RUN   TestConfigCreateErrors/error_creating_config
    --- PASS: TestConfigCreateErrors (0.00s)
        --- PASS: TestConfigCreateErrors/requires_exactly_2_arguments (0.00s)
        --- PASS: TestConfigCreateErrors/requires_exactly_2_arguments#01 (0.00s)
        --- PASS: TestConfigCreateErrors/error_creating_config (0.00s)
    PASS

It's not perfect in all cases (in the above, there are duplicate "expected"
errors, but Go conveniently adds "#01" for the duplicate). There are probably
also various tests I missed that could still use the same changes;
we can improve these in follow-ups.
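The shape of such a table with sub-tests is roughly the following (names and helpers are illustrative, not the exact tests touched here):

    testCases := []struct {
        name          string
        args          []string
        expectedError string
    }{
        {name: "requires exactly 2 arguments", args: []string{"too-few"}, expectedError: "requires exactly 2 arguments"},
        {name: "error creating config", args: []string{"name", "file"}, expectedError: "error creating config"},
    }
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            cmd := newConfigCreateCommand(cli) // hypothetical constructor
            cmd.SetArgs(tc.args)
            cmd.SetOut(io.Discard)
            cmd.SetErr(io.Discard)
            assert.ErrorContains(t, cmd.Execute(), tc.expectedError)
        })
    }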

Set cmd.Args to prevent test-failures
----------------------------------------------

When running tests from my IDE, it compiles the tests before running,
then executes the compiled binary to run the tests. Cobra doesn't like
that, because in that situation `os.Args` is taken as arguments for the
command that's executed. The command that's tested now sees the test-
flags as arguments (`-test.v -test.run ..`), which causes various tests
to fail ("Command XYZ does not accept arguments").

    # compile the tests:
    go test -c -o foo.test

    # execute the test:
    ./foo.test -test.v -test.run TestFoo
    === RUN   TestFoo
    Error: "foo" accepts no arguments.

The Cobra maintainers ran into the same situation, and for their own
use added a special case to ignore `os.Args` in these cases:
https://github.com/spf13/cobra/blob/v1.8.1/command.go#L1078-L1083

    args := c.args

    // Workaround FAIL with "go test -v" or "cobra.test -test.v", see docker#155
    if c.args == nil && filepath.Base(os.Args[0]) != "cobra.test" {
        args = os.Args[1:]
    }

Unfortunately, that exception is too specific (it only checks for `cobra.test`),
so it doesn't automatically fix the issue for other test binaries. They do,
however, provide a `cmd.SetArgs()` utility for this purpose:
https://github.com/spf13/cobra/blob/v1.8.1/command.go#L276-L280

    // SetArgs sets arguments for the command. It is set to os.Args[1:] by default, if desired, can be overridden
    // particularly useful when testing.
    func (c *Command) SetArgs(a []string) {
        c.args = a
    }

And the fix is to explicitly set the command's args to an empty slice to
prevent Cobra from falling back to using `os.Args[1:]` as arguments.

    cmd := newSomeThingCommand()
    cmd.SetArgs([]string{})

Some tests already take this issue into account, and I updated some tests
for this, but there's likely many other ones that can use the same treatment.

Perhaps the Cobra maintainers would accept a contribution to make their
condition less specific and look for binaries ending with a `.test`
suffix (which is what compiled test binaries are usually named).
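Such a change might look like the following (a sketch against the snippet quoted above, not actual upstream code):

    // Also skip os.Args for any compiled test binary ("*.test"), not just "cobra.test".
    if c.args == nil && !strings.HasSuffix(filepath.Base(os.Args[0]), ".test") {
        args = os.Args[1:]
    }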

Signed-off-by: Sebastiaan van Stijn <[email protected]>
thaJeztah added a commit to thaJeztah/cli that referenced this pull request on Jul 19, 2024, with the same commit message as above:
(cherry picked from commit ab23024)
Signed-off-by: Sebastiaan van Stijn <[email protected]>