From 9495b703cab57ddac4498ddb1c3d7e07bda99bc7 Mon Sep 17 00:00:00 2001 From: Matt Moore Date: Mon, 4 May 2020 10:30:44 -0700 Subject: [PATCH 01/12] [master] Format markdown (#993) Produced via: `prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github | grep -v docs/cmd/)` /assign grantr nachocano /cc grantr nachocano --- docs/examples/channel/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/examples/channel/README.md b/docs/examples/channel/README.md index 4ace601df4..0e70d6193d 100644 --- a/docs/examples/channel/README.md +++ b/docs/examples/channel/README.md @@ -96,8 +96,8 @@ Data, } ``` -These events are generated from the `hello-world` `PingSource`, sent through -the `demo` `Channel` and delivered to the `event-display` via the `demo` +These events are generated from the `hello-world` `PingSource`, sent through the +`demo` `Channel` and delivered to the `event-display` via the `demo` `Subscription`. ## What's Next From 443a9e865c8e0e1169aaa5da766a0898c555dfa2 Mon Sep 17 00:00:00 2001 From: Grace Gao <52978759+grac3gao@users.noreply.github.com> Date: Mon, 4 May 2020 11:18:45 -0700 Subject: [PATCH 02/12] Update installation instructions for v0.14.0 (#979) * update configure * Revert "update configure" This reverts commit fa4c9bb3 * docs * update doc --- docs/install/install-knative-gcp.md | 8 ++++++-- docs/install/pubsub-service-account.md | 6 ++++-- 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/docs/install/install-knative-gcp.md b/docs/install/install-knative-gcp.md index 10ab178417..359a5f7a90 100644 --- a/docs/install/install-knative-gcp.md +++ b/docs/install/install-knative-gcp.md @@ -56,7 +56,7 @@ ko apply -f ./config kubectl apply --filename https://github.com/google/knative-gcp/releases/download/${KGCP_VERSION}/cloud-run-events.yaml ``` -## Configure the Authentication Mechanism for GCP +## Configure the Authentication Mechanism for GCP (the Control Plane) Currently, we support two methods: Workload Identity and Kubernetes Secret. Workload Identity is the recommended way to access Google Cloud services from @@ -82,7 +82,11 @@ error message. wish to configure the auth manually, refer to [manually configure authentication for GCP](./authentication-mechanisms-gcp.md), -- Option 1 (Recommended): Use Workload Identity. Apply +- Option 1 (Recommended): Use Workload Identity. ***Note:*** Now, Workload Identity +for the Control Plane only works if you install the Knative-GCP Constructs from the master. +If you install the Knative-GCP Constructs with our latest release (v0.14.0) or older releases, please use option 2. + + Apply [init_control_plane_gke.sh](../../hack/init_control_plane_gke.sh): ```shell diff --git a/docs/install/pubsub-service-account.md b/docs/install/pubsub-service-account.md index 97a89dd2b1..7f8f4ee40d 100644 --- a/docs/install/pubsub-service-account.md +++ b/docs/install/pubsub-service-account.md @@ -47,14 +47,16 @@ also need the ability to publish messages (`roles/pubsub.publisher`). --role roles/pubsub.editor ``` -## Configure the Authentication Mechanism for GCP +## Configure the Authentication Mechanism for GCP (the Data Plane) ### Option 1: Use Workload Identity It is the recommended way to access Google Cloud services from within GKE due to its improved security properties and manageability. For more information about Workload Identity see -[here](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity). 
+[here](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity). +***Note:*** Installing Knative-GCP Constructs from the master and installing it with our latest release (v0.14.0) both +support Workload Identity for the Data Plane. Older releases don't support this. 1. Enable Workload Identity. Check [Manually Configure Authentication Mechanism for GCP](authentication-mechanisms-gcp.md) From 3d22fc0899cb78f99dd9c103ff2d7c884490b3f1 Mon Sep 17 00:00:00 2001 From: Matt Moore Date: Tue, 5 May 2020 10:00:44 -0700 Subject: [PATCH 03/12] [master] Format markdown (#999) Produced via: `prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github | grep -v docs/cmd/)` /assign grantr nachocano /cc grantr nachocano --- docs/install/install-knative-gcp.md | 10 ++++++---- docs/install/pubsub-service-account.md | 7 ++++--- 2 files changed, 10 insertions(+), 7 deletions(-) diff --git a/docs/install/install-knative-gcp.md b/docs/install/install-knative-gcp.md index 359a5f7a90..ab6d703bec 100644 --- a/docs/install/install-knative-gcp.md +++ b/docs/install/install-knative-gcp.md @@ -82,11 +82,13 @@ error message. wish to configure the auth manually, refer to [manually configure authentication for GCP](./authentication-mechanisms-gcp.md), -- Option 1 (Recommended): Use Workload Identity. ***Note:*** Now, Workload Identity -for the Control Plane only works if you install the Knative-GCP Constructs from the master. -If you install the Knative-GCP Constructs with our latest release (v0.14.0) or older releases, please use option 2. +- Option 1 (Recommended): Use Workload Identity. **_Note:_** Now, Workload + Identity for the Control Plane only works if you install the Knative-GCP + Constructs from the master. If you install the Knative-GCP Constructs with our + latest release (v0.14.0) or older releases, please use option 2. + + Apply - Apply [init_control_plane_gke.sh](../../hack/init_control_plane_gke.sh): ```shell diff --git a/docs/install/pubsub-service-account.md b/docs/install/pubsub-service-account.md index 7f8f4ee40d..19503ff772 100644 --- a/docs/install/pubsub-service-account.md +++ b/docs/install/pubsub-service-account.md @@ -54,9 +54,10 @@ also need the ability to publish messages (`roles/pubsub.publisher`). It is the recommended way to access Google Cloud services from within GKE due to its improved security properties and manageability. For more information about Workload Identity see -[here](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity). -***Note:*** Installing Knative-GCP Constructs from the master and installing it with our latest release (v0.14.0) both -support Workload Identity for the Data Plane. Older releases don't support this. +[here](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity). +**_Note:_** Installing Knative-GCP Constructs from the master and installing it +with our latest release (v0.14.0) both support Workload Identity for the Data +Plane. Older releases don't support this. 1. Enable Workload Identity. 
Check [Manually Configure Authentication Mechanism for GCP](authentication-mechanisms-gcp.md) From 9ec3f3abfc6d36c8adc74f7a10b205a2d954a089 Mon Sep 17 00:00:00 2001 From: Grace Gao <52978759+grac3gao@users.noreply.github.com> Date: Tue, 5 May 2020 14:19:44 -0700 Subject: [PATCH 04/12] Skip TestCloudAuditLogsSource and TestCloudStorageSource in e2e-wi-tests (#1001) * update configure * Revert "update configure" This reverts commit fa4c9bb3 * skip --- test/e2e/e2e_test.go | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/test/e2e/e2e_test.go b/test/e2e/e2e_test.go index ca79946add..867f02c79d 100644 --- a/test/e2e/e2e_test.go +++ b/test/e2e/e2e_test.go @@ -222,7 +222,7 @@ func TestBrokerWithPubSubChannel(t *testing.T) { // TestBrokerWithPubSubChannel tests we can knock a Knative Service from a broker with PubSub Channel. func TestBrokerWithPubSubChannelStackdriverMetrics(t *testing.T) { - t.Skip("Stackdriver currently not working without patch. See https://github.com/google/knative-gcp/issues/317") + t.Skip("Stackdriver currently not working without patch. See https://github.com/google/knative-gcp/issues/317") if authConfig.WorkloadIdentity { t.Skip("Skip broker related test when workloadIdentity is enabled, issue: https://github.com/google/knative-gcp/issues/746") } @@ -230,6 +230,7 @@ func TestBrokerWithPubSubChannelStackdriverMetrics(t *testing.T) { defer cancel() BrokerWithPubSubChannelTestImpl(t, authConfig, true /* assertMetrics */) } + // TestCloudPubSubSourceBrokerWithPubSubChannel tests we can knock a Knative Service from a broker with PubSub Channel from a CloudPubSubSource. func TestCloudPubSubSourceBrokerWithPubSubChannel(t *testing.T) { if authConfig.WorkloadIdentity { @@ -272,6 +273,9 @@ func TestCloudSchedulerSourceBrokerWithPubSubChannel(t *testing.T) { // TestCloudStorageSource tests we can knock down a target from a CloudStorageSource. func TestCloudStorageSource(t *testing.T) { + if authConfig.WorkloadIdentity { + t.Skip("Skip this test temporally for issue: https://github.com/google/knative-gcp/issues/1000") + } cancel := logstream.Start(t) defer cancel() CloudStorageSourceWithTestImpl(t, false /*assertMetrics */, authConfig) @@ -287,6 +291,9 @@ func TestCloudStorageSourceStackDriverMetrics(t *testing.T) { // TestCloudAuditLogsSource tests we can knock down a target from an CloudAuditLogsSource. 
func TestCloudAuditLogsSource(t *testing.T) { + if authConfig.WorkloadIdentity { + t.Skip("Skip this test temporally for issue: https://github.com/google/knative-gcp/issues/1000") + } cancel := logstream.Start(t) defer cancel() CloudAuditLogsSourceWithTestImpl(t, authConfig) From f780beb7298acadbe75785fa2b49795a4f4066df Mon Sep 17 00:00:00 2001 From: Ian Milligan Date: Tue, 5 May 2020 15:37:44 -0700 Subject: [PATCH 05/12] Use wire to inject ingress Handler (#972) * Use wire to inject ingress Handler * Call go generate in update-codegen.sh * Vendor wire in tools.go * Install wire in update-codegen.sh * Separate ingress args --- cmd/broker/ingress/main.go | 9 +- cmd/broker/ingress/wire.go | 41 + cmd/broker/ingress/wire_gen.go | 39 + go.mod | 1 + go.sum | 4 + hack/tools.go | 2 + hack/update-codegen.sh | 3 + pkg/broker/ingress/args.go | 55 + pkg/broker/ingress/handler.go | 53 +- pkg/broker/ingress/handler_test.go | 15 +- .../ingress/multi_topic_decouple_sink.go | 58 +- .../ingress/multi_topic_decouple_sink_test.go | 7 +- pkg/broker/ingress/options.go | 80 -- pkg/broker/ingress/stats_reporter.go | 13 +- pkg/broker/ingress/stats_reporter_test.go | 2 +- .../github.com/google/wire/LICENSE | 202 +++ .../google/subcommands/CONTRIBUTING | 27 + vendor/github.com/google/subcommands/LICENSE | 202 +++ .../github.com/google/subcommands/README.md | 67 + vendor/github.com/google/subcommands/go.mod | 1 + .../google/subcommands/subcommands.go | 440 ++++++ vendor/github.com/google/wire/.codecov.yml | 13 + vendor/github.com/google/wire/.contributebot | 4 + vendor/github.com/google/wire/.travis.yml | 53 + vendor/github.com/google/wire/AUTHORS | 18 + .../github.com/google/wire/CODE_OF_CONDUCT.md | 10 + vendor/github.com/google/wire/CONTRIBUTING.md | 152 ++ vendor/github.com/google/wire/CONTRIBUTORS | 43 + vendor/github.com/google/wire/LICENSE | 202 +++ vendor/github.com/google/wire/README.md | 60 + .../github.com/google/wire/cmd/wire/main.go | 596 ++++++++ vendor/github.com/google/wire/go.mod | 10 + vendor/github.com/google/wire/go.sum | 12 + .../google/wire/internal/wire/analyze.go | 521 +++++++ .../google/wire/internal/wire/copyast.go | 493 +++++++ .../google/wire/internal/wire/errors.go | 84 ++ .../google/wire/internal/wire/parse.go | 1237 +++++++++++++++++ .../google/wire/internal/wire/wire.go | 961 +++++++++++++ vendor/github.com/google/wire/wire.go | 196 +++ vendor/modules.txt | 6 + 40 files changed, 5805 insertions(+), 187 deletions(-) create mode 100644 cmd/broker/ingress/wire.go create mode 100644 cmd/broker/ingress/wire_gen.go create mode 100644 pkg/broker/ingress/args.go delete mode 100644 pkg/broker/ingress/options.go create mode 100644 third_party/VENDOR-LICENSE/github.com/google/wire/LICENSE create mode 100644 vendor/github.com/google/subcommands/CONTRIBUTING create mode 100644 vendor/github.com/google/subcommands/LICENSE create mode 100644 vendor/github.com/google/subcommands/README.md create mode 100644 vendor/github.com/google/subcommands/go.mod create mode 100644 vendor/github.com/google/subcommands/subcommands.go create mode 100644 vendor/github.com/google/wire/.codecov.yml create mode 100644 vendor/github.com/google/wire/.contributebot create mode 100644 vendor/github.com/google/wire/.travis.yml create mode 100644 vendor/github.com/google/wire/AUTHORS create mode 100644 vendor/github.com/google/wire/CODE_OF_CONDUCT.md create mode 100644 vendor/github.com/google/wire/CONTRIBUTING.md create mode 100644 vendor/github.com/google/wire/CONTRIBUTORS create mode 100644 
vendor/github.com/google/wire/LICENSE create mode 100644 vendor/github.com/google/wire/README.md create mode 100644 vendor/github.com/google/wire/cmd/wire/main.go create mode 100644 vendor/github.com/google/wire/go.mod create mode 100644 vendor/github.com/google/wire/go.sum create mode 100644 vendor/github.com/google/wire/internal/wire/analyze.go create mode 100644 vendor/github.com/google/wire/internal/wire/copyast.go create mode 100644 vendor/github.com/google/wire/internal/wire/errors.go create mode 100644 vendor/github.com/google/wire/internal/wire/parse.go create mode 100644 vendor/github.com/google/wire/internal/wire/wire.go create mode 100644 vendor/github.com/google/wire/wire.go diff --git a/cmd/broker/ingress/main.go b/cmd/broker/ingress/main.go index 2446a992b5..47676fdf75 100644 --- a/cmd/broker/ingress/main.go +++ b/cmd/broker/ingress/main.go @@ -87,8 +87,13 @@ func main() { } logger.Desugar().Info("Starting ingress handler", zap.Any("envConfig", env), zap.Any("Project ID", projectID)) - statsReporter := ingress.NewStatsReporter(env.PodName, containerName) - ingress, err := ingress.NewHandler(ctx, statsReporter, ingress.WithPort(env.Port), ingress.WithProjectID(projectID)) + ingress, err := InitializeHandler( + ctx, + ingress.Port(env.Port), + ingress.ProjectID(projectID), + ingress.PodName(env.PodName), + ingress.ContainerName(containerName), + ) if err != nil { logger.Desugar().Fatal("Unable to create ingress handler: ", zap.Error(err)) } diff --git a/cmd/broker/ingress/wire.go b/cmd/broker/ingress/wire.go new file mode 100644 index 0000000000..a16d7ff323 --- /dev/null +++ b/cmd/broker/ingress/wire.go @@ -0,0 +1,41 @@ +// +build wireinject + +/* +Copyright 2020 Google LLC. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package main + +import ( + "context" + + "github.com/google/knative-gcp/pkg/broker/config/volume" + "github.com/google/knative-gcp/pkg/broker/ingress" + "github.com/google/wire" +) + +func InitializeHandler( + ctx context.Context, + port ingress.Port, + projectID ingress.ProjectID, + podName ingress.PodName, + containerName ingress.ContainerName, +) (*ingress.Handler, error) { + panic(wire.Build( + ingress.HandlerSet, + wire.Value([]volume.Option(nil)), + volume.NewTargetsFromFile, + )) +} diff --git a/cmd/broker/ingress/wire_gen.go b/cmd/broker/ingress/wire_gen.go new file mode 100644 index 0000000000..1537bd8a76 --- /dev/null +++ b/cmd/broker/ingress/wire_gen.go @@ -0,0 +1,39 @@ +// Code generated by Wire. DO NOT EDIT. + +//go:generate wire +//+build !wireinject + +package main + +import ( + "context" + "github.com/google/knative-gcp/pkg/broker/config/volume" + "github.com/google/knative-gcp/pkg/broker/ingress" +) + +// Injectors from wire.go: + +func InitializeHandler(ctx context.Context, port ingress.Port, projectID ingress.ProjectID, podName ingress.PodName, containerName2 ingress.ContainerName) (*ingress.Handler, error) { + httpMessageReceiver := ingress.NewHTTPMessageReceiver(port) + v := _wireValue + readonlyTargets, err := volume.NewTargetsFromFile(v...) 
+ if err != nil { + return nil, err + } + client, err := ingress.NewPubsubClient(ctx, projectID) + if err != nil { + return nil, err + } + clientClient, err := ingress.NewPubsubDecoupleClient(ctx, client) + if err != nil { + return nil, err + } + multiTopicDecoupleSink := ingress.NewMultiTopicDecoupleSink(ctx, readonlyTargets, clientClient) + statsReporter := ingress.NewStatsReporter(podName, containerName2) + handler := ingress.NewHandler(ctx, httpMessageReceiver, multiTopicDecoupleSink, statsReporter) + return handler, nil +} + +var ( + _wireValue = []volume.Option(nil) +) diff --git a/go.mod b/go.mod index c62b9db3c3..4ffc6112d8 100644 --- a/go.mod +++ b/go.mod @@ -14,6 +14,7 @@ require ( github.com/golang/protobuf v1.4.0 github.com/google/go-cmp v0.4.0 github.com/google/uuid v1.1.1 + github.com/google/wire v0.4.0 github.com/googleapis/gax-go/v2 v2.0.5 github.com/googleapis/gnostic v0.4.0 // indirect github.com/gorilla/mux v1.7.3 // indirect diff --git a/go.sum b/go.sum index 2d7c740d51..57a04e6910 100644 --- a/go.sum +++ b/go.sum @@ -322,9 +322,13 @@ github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hf github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= +github.com/google/subcommands v1.0.1 h1:/eqq+otEXm5vhfBrbREPCSVQbvofip6kIz+mX5TUH7k= +github.com/google/subcommands v1.0.1/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/wire v0.4.0 h1:kXcsA/rIGzJImVqPdhfnr6q0xsS9gU0515q1EPpJ9fE= +github.com/google/wire v0.4.0/go.mod h1:ngWDr9Qvq3yZA10YrxfyGELY/AFWGVpy9c1LTRi1EoU= github.com/googleapis/gax-go v2.0.0+incompatible h1:j0GKcs05QVmm7yesiZq2+9cxHkNK9YM6zKx4D2qucQU= github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= diff --git a/hack/tools.go b/hack/tools.go index 58b9984ee6..372c722787 100644 --- a/hack/tools.go +++ b/hack/tools.go @@ -28,4 +28,6 @@ import ( _ "knative.dev/eventing/test/test_images/transformevents" _ "knative.dev/pkg/testutils/clustermanager/perf-tests" + + _ "github.com/google/wire/cmd/wire" ) diff --git a/hack/update-codegen.sh b/hack/update-codegen.sh index 15bd06c56c..9df367d706 100755 --- a/hack/update-codegen.sh +++ b/hack/update-codegen.sh @@ -60,5 +60,8 @@ ${KNATIVE_CODEGEN_PKG}/hack/generate-knative.sh "injection" \ "security:v1beta1" \ --go-header-file ${REPO_ROOT_DIR}/hack/boilerplate/boilerplate.go.txt +go install github.com/google/wire/cmd/wire +go generate ${REPO_ROOT_DIR}/... + # Make sure our dependencies are up-to-date ${REPO_ROOT_DIR}/hack/update-deps.sh diff --git a/pkg/broker/ingress/args.go b/pkg/broker/ingress/args.go new file mode 100644 index 0000000000..455a5a4729 --- /dev/null +++ b/pkg/broker/ingress/args.go @@ -0,0 +1,55 @@ +/* +Copyright 2020 Google LLC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ingress + +import ( + "context" + + "cloud.google.com/go/pubsub" + cev2 "github.com/cloudevents/sdk-go/v2" + cepubsub "github.com/cloudevents/sdk-go/v2/protocol/pubsub" + "knative.dev/eventing/pkg/kncloudevents" +) + +type Port int +type ProjectID string + +// NewHTTPMessageReceiver wraps kncloudevents.NewHttpMessageReceiver with type-safe options. +func NewHTTPMessageReceiver(port Port) *kncloudevents.HttpMessageReceiver { + return kncloudevents.NewHttpMessageReceiver(int(port)) +} + +// NewPubsubClient provides a pubsub client from PubsubClientOpts. +func NewPubsubClient(ctx context.Context, projectID ProjectID) (*pubsub.Client, error) { + return pubsub.NewClient(ctx, string(projectID)) +} + +// NewPubsubDecoupleClient creates a pubsub Cloudevents client to use to publish events to decouple queues. +func NewPubsubDecoupleClient(ctx context.Context, client *pubsub.Client) (cev2.Client, error) { + // Make a pubsub protocol for the CloudEvents client. + p, err := cepubsub.New(ctx, cepubsub.WithClient(client)) + if err != nil { + return nil, err + } + + // Use the pubsub prototol to make a new CloudEvents client. + return cev2.NewClientObserved(p, + cev2.WithUUIDs(), + cev2.WithTimeNow(), + cev2.WithTracePropagation, + ) +} diff --git a/pkg/broker/ingress/handler.go b/pkg/broker/ingress/handler.go index fbffa102e8..ba309065c9 100644 --- a/pkg/broker/ingress/handler.go +++ b/pkg/broker/ingress/handler.go @@ -31,6 +31,7 @@ import ( "github.com/cloudevents/sdk-go/v2/binding/transformer" "github.com/cloudevents/sdk-go/v2/protocol" "github.com/cloudevents/sdk-go/v2/protocol/http" + "github.com/google/wire" "knative.dev/eventing/pkg/kncloudevents" "knative.dev/eventing/pkg/logging" ) @@ -49,6 +50,18 @@ const ( EventArrivalTime = "knativearrivaltime" ) +// HandlerSet provides a handler with a real HTTPMessageReceiver and pubsub MultiTopicDecoupleSink. +var HandlerSet wire.ProviderSet = wire.NewSet( + NewHandler, + NewHTTPMessageReceiver, + wire.Bind(new(HttpMessageReceiver), new(*kncloudevents.HttpMessageReceiver)), + NewMultiTopicDecoupleSink, + wire.Bind(new(DecoupleSink), new(*multiTopicDecoupleSink)), + NewPubsubClient, + NewPubsubDecoupleClient, + NewStatsReporter, +) + // DecoupleSink is an interface to send events to a decoupling sink (e.g., pubsub). type DecoupleSink interface { // Send sends the event from a broker to the corresponding decoupling sink. @@ -62,7 +75,7 @@ type HttpMessageReceiver interface { // handler receives events and persists them to storage (pubsub). // TODO(liu-cong) support event TTL -type handler struct { +type Handler struct { // httpReceiver is an HTTP server to receive events. httpReceiver HttpMessageReceiver // decouple is the client to send events to a decouple sink. @@ -72,35 +85,17 @@ type handler struct { } // NewHandler creates a new ingress handler. 
-func NewHandler(ctx context.Context, reporter *StatsReporter, options ...HandlerOption) (*handler, error) { - h := &handler{ - logger: logging.FromContext(ctx), - reporter: reporter, - } - - for _, option := range options { - if err := option(h); err != nil { - return nil, err - } +func NewHandler(ctx context.Context, httpReceiver HttpMessageReceiver, decouple DecoupleSink, reporter *StatsReporter) *Handler { + return &Handler{ + httpReceiver: httpReceiver, + decouple: decouple, + reporter: reporter, + logger: logging.FromContext(ctx), } - - if h.httpReceiver == nil { - h.httpReceiver = kncloudevents.NewHttpMessageReceiver(defaultPort) - } - - if h.decouple == nil { - sink, err := NewMultiTopicDecoupleSink(ctx) - if err != nil { - return nil, err - } - h.decouple = sink - } - - return h, nil } // Start blocks to receive events over HTTP. -func (h *handler) Start(ctx context.Context) error { +func (h *Handler) Start(ctx context.Context) error { return h.httpReceiver.StartListen(ctx, h) } @@ -109,7 +104,7 @@ func (h *handler) Start(ctx context.Context) error { // 2. Parse request URL to get namespace and broker. // 3. Convert request to event. // 4. Send event to decouple sink. -func (h *handler) ServeHTTP(response nethttp.ResponseWriter, request *nethttp.Request) { +func (h *Handler) ServeHTTP(response nethttp.ResponseWriter, request *nethttp.Request) { h.logger.Debug("Serving http", zap.Any("headers", request.Header)) startTime := time.Now() if request.Method != nethttp.MethodPost { @@ -153,7 +148,7 @@ func (h *handler) ServeHTTP(response nethttp.ResponseWriter, request *nethttp.Re } // toEvent converts an http request to an event. -func (h *handler) toEvent(request *nethttp.Request) (event *cev2.Event, msg string, statusCode int) { +func (h *Handler) toEvent(request *nethttp.Request) (event *cev2.Event, msg string, statusCode int) { message := http.NewMessageFromHttpRequest(request) defer func() { if err := message.Finish(nil); err != nil { @@ -175,7 +170,7 @@ func (h *handler) toEvent(request *nethttp.Request) (event *cev2.Event, msg stri return event, "", nethttp.StatusOK } -func (h *handler) reportMetrics(ctx context.Context, ns, broker string, event *cev2.Event, statusCode int, start time.Time) { +func (h *Handler) reportMetrics(ctx context.Context, ns, broker string, event *cev2.Event, statusCode int, start time.Time) { args := reportArgs{ namespace: ns, broker: broker, diff --git a/pkg/broker/ingress/handler_test.go b/pkg/broker/ingress/handler_test.go index 0c76e107ea..f100c8758a 100644 --- a/pkg/broker/ingress/handler_test.go +++ b/pkg/broker/ingress/handler_test.go @@ -320,21 +320,14 @@ func setupTestReceiver(ctx context.Context, t *testing.T, psSrv *pstest.Server) // createAndStartIngress creates an ingress and calls its Start() method in a goroutine. 
func createAndStartIngress(ctx context.Context, t *testing.T, psSrv *pstest.Server) (string, func()) { p, cancel := createPubsubClient(ctx, t, psSrv) - decouple, err := NewMultiTopicDecoupleSink(ctx, - WithBrokerConfig(memory.NewTargets(brokerConfig)), - WithPubsubClient(p)) + client, err := NewPubsubDecoupleClient(ctx, p) if err != nil { - cancel() - t.Fatalf("Failed to create decouple sink: %v", err) + t.Fatal(err) } + decouple := NewMultiTopicDecoupleSink(ctx, memory.NewTargets(brokerConfig), client) receiver := &testHttpMessageReceiver{urlCh: make(chan string)} - h := &handler{ - logger: logging.FromContext(ctx).Desugar(), - httpReceiver: receiver, - decouple: decouple, - reporter: NewStatsReporter(pod, container), - } + h := NewHandler(ctx, receiver, decouple, NewStatsReporter(PodName(pod), ContainerName(container))) errCh := make(chan error, 1) go func() { diff --git a/pkg/broker/ingress/multi_topic_decouple_sink.go b/pkg/broker/ingress/multi_topic_decouple_sink.go index 9349222c2b..95a1192559 100644 --- a/pkg/broker/ingress/multi_topic_decouple_sink.go +++ b/pkg/broker/ingress/multi_topic_decouple_sink.go @@ -19,59 +19,25 @@ package ingress import ( "context" "fmt" - "os" - "cloud.google.com/go/pubsub" "go.uber.org/zap" cev2 "github.com/cloudevents/sdk-go/v2" cecontext "github.com/cloudevents/sdk-go/v2/context" "github.com/cloudevents/sdk-go/v2/protocol" - cepubsub "github.com/cloudevents/sdk-go/v2/protocol/pubsub" "github.com/google/knative-gcp/pkg/broker/config" - "github.com/google/knative-gcp/pkg/broker/config/volume" - "github.com/google/knative-gcp/pkg/utils" "knative.dev/eventing/pkg/logging" ) const projectEnvKey = "PROJECT_ID" // NewMultiTopicDecoupleSink creates a new multiTopicDecoupleSink. -func NewMultiTopicDecoupleSink(ctx context.Context, options ...MultiTopicDecoupleSinkOption) (*multiTopicDecoupleSink, error) { - var err error - opts := new(multiTopicDecoupleSinkOptions) - for _, opt := range options { - opt(opts) - } - - // Apply defaults - if opts.client == nil { - if opts.pubsub == nil { - var projectID string - if projectID, err = utils.ProjectID(os.Getenv(projectEnvKey)); err != nil { - return nil, err - } - if opts.pubsub, err = pubsub.NewClient(ctx, projectID); err != nil { - return nil, err - } - } - if opts.client, err = newPubSubClient(ctx, opts.pubsub); err != nil { - return nil, err - } - } - - if opts.brokerConfig == nil { - if opts.brokerConfig, err = volume.NewTargetsFromFile(); err != nil { - return nil, fmt.Errorf("creating broker config for default multi topic decouple sink") - } - } - - sink := &multiTopicDecoupleSink{ +func NewMultiTopicDecoupleSink(ctx context.Context, brokerConfig config.ReadonlyTargets, client cev2.Client) *multiTopicDecoupleSink { + return &multiTopicDecoupleSink{ logger: logging.FromContext(ctx), - client: opts.client, - brokerConfig: opts.brokerConfig, + client: client, + brokerConfig: brokerConfig, } - return sink, nil } // multiTopicDecoupleSink implements DecoupleSink and routes events to pubsub topics corresponding @@ -111,19 +77,3 @@ func (m *multiTopicDecoupleSink) getTopicForBroker(ns, broker string) (string, e } return brokerConfig.DecoupleQueue.Topic, nil } - -// newPubSubClient creates a pubsub client using the given project ID. -func newPubSubClient(ctx context.Context, client *pubsub.Client) (cev2.Client, error) { - // Make a pubsub protocol for the CloudEvents client. 
- p, err := cepubsub.New(ctx, cepubsub.WithClient(client)) - if err != nil { - return nil, err - } - - // Use the pubsub prototol to make a new CloudEvents client. - return cev2.NewClientObserved(p, - cev2.WithUUIDs(), - cev2.WithTimeNow(), - cev2.WithTracePropagation, - ) -} diff --git a/pkg/broker/ingress/multi_topic_decouple_sink_test.go b/pkg/broker/ingress/multi_topic_decouple_sink_test.go index a869709e9b..1d7cc08d33 100644 --- a/pkg/broker/ingress/multi_topic_decouple_sink_test.go +++ b/pkg/broker/ingress/multi_topic_decouple_sink_test.go @@ -155,14 +155,11 @@ func TestMultiTopicDecoupleSink(t *testing.T) { if testCase.clientErrFn != nil { testCase.clientErrFn(fakeClient) } - sink, err := NewMultiTopicDecoupleSink(ctx, WithBrokerConfig(brokerConfig), WithClient(fakeClient)) - if err != nil { - t.Fatalf("Failed to create decouple sink: %v", err) - } + sink := NewMultiTopicDecoupleSink(ctx, brokerConfig, fakeClient) // Send events event := createTestEvent(uuid.New().String()) - err = sink.Send(context.Background(), testCase.ns, testCase.broker, *event) + err := sink.Send(context.Background(), testCase.ns, testCase.broker, *event) // Verify results. if testCase.wantErr && err == nil { diff --git a/pkg/broker/ingress/options.go b/pkg/broker/ingress/options.go deleted file mode 100644 index 49cb42c371..0000000000 --- a/pkg/broker/ingress/options.go +++ /dev/null @@ -1,80 +0,0 @@ -/* -Copyright 2020 Google LLC - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package ingress - -import ( - "context" - - "cloud.google.com/go/pubsub" - cloudevents "github.com/cloudevents/sdk-go/v2" - "github.com/google/knative-gcp/pkg/broker/config" - "knative.dev/eventing/pkg/kncloudevents" -) - -// HandlerOption is the option to configure ingress handler. -type HandlerOption func(*handler) error - -// WithPort specifies the port number that ingress listens on. It will create an HttpMessageReceiver for that port. -func WithPort(port int) HandlerOption { - return func(h *handler) error { - h.httpReceiver = kncloudevents.NewHttpMessageReceiver(port) - return nil - } -} - -// WithProjectID creates a pubsub client for the given project ID to communicate with pubsusb decouple topics. -func WithProjectID(id string) HandlerOption { - return func(h *handler) error { - ctx := context.Background() - p, err := pubsub.NewClient(ctx, id) - if err != nil { - return err - } - h.decouple, err = NewMultiTopicDecoupleSink(context.Background(), WithPubsubClient(p)) - return err - } -} - -// MultiTopicDecoupleSinkOption is the option to configure multiTopicDecoupleSink. -type MultiTopicDecoupleSinkOption func(sink *multiTopicDecoupleSinkOptions) - -type multiTopicDecoupleSinkOptions struct { - client cloudevents.Client - pubsub *pubsub.Client - brokerConfig config.ReadonlyTargets -} - -// WithClient specifies an eventing client to use. 
-func WithClient(client cloudevents.Client) MultiTopicDecoupleSinkOption { - return func(opts *multiTopicDecoupleSinkOptions) { - opts.client = client - } -} - -// WithPubsubClient specifies the pubsub client to use. -func WithPubsubClient(ps *pubsub.Client) MultiTopicDecoupleSinkOption { - return func(opts *multiTopicDecoupleSinkOptions) { - opts.pubsub = ps - } -} - -// WithBrokerConfig specifies the broker config. It can be created by reading a configmap mount. -func WithBrokerConfig(brokerConfig config.ReadonlyTargets) MultiTopicDecoupleSinkOption { - return func(opts *multiTopicDecoupleSinkOptions) { - opts.brokerConfig = brokerConfig - } -} diff --git a/pkg/broker/ingress/stats_reporter.go b/pkg/broker/ingress/stats_reporter.go index e5522e19d2..27de470d0b 100644 --- a/pkg/broker/ingress/stats_reporter.go +++ b/pkg/broker/ingress/stats_reporter.go @@ -58,6 +58,9 @@ var ( containerKey = tag.MustNewKey(metricskey.ContainerName) ) +type PodName string +type ContainerName string + type reportArgs struct { namespace string broker string @@ -103,7 +106,7 @@ func register() { } // NewStatsReporter creates a new StatsReporter. -func NewStatsReporter(podName, containerName string) *StatsReporter { +func NewStatsReporter(podName PodName, containerName ContainerName) *StatsReporter { return &StatsReporter{ podName: podName, containerName: containerName, @@ -112,15 +115,15 @@ func NewStatsReporter(podName, containerName string) *StatsReporter { // StatsReporter reports ingress metrics. type StatsReporter struct { - podName string - containerName string + podName PodName + containerName ContainerName } func (r *StatsReporter) reportEventDispatchTime(ctx context.Context, args reportArgs, d time.Duration) error { tag, err := tag.New( ctx, - tag.Insert(podKey, r.podName), - tag.Insert(containerKey, r.containerName), + tag.Insert(podKey, string(r.podName)), + tag.Insert(containerKey, string(r.containerName)), tag.Insert(namespaceKey, args.namespace), tag.Insert(brokerKey, args.broker), tag.Insert(eventTypeKey, args.eventType), diff --git a/pkg/broker/ingress/stats_reporter_test.go b/pkg/broker/ingress/stats_reporter_test.go index 455e892eea..8ee3b31bb2 100644 --- a/pkg/broker/ingress/stats_reporter_test.go +++ b/pkg/broker/ingress/stats_reporter_test.go @@ -44,7 +44,7 @@ func TestStatsReporter(t *testing.T) { metricskey.PodName: "testpod", } - r := NewStatsReporter("testpod", "testcontainer") + r := NewStatsReporter(PodName("testpod"), ContainerName("testcontainer")) // test ReportDispatchTime expectSuccess(t, func() error { diff --git a/third_party/VENDOR-LICENSE/github.com/google/wire/LICENSE b/third_party/VENDOR-LICENSE/github.com/google/wire/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/third_party/VENDOR-LICENSE/github.com/google/wire/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/google/subcommands/CONTRIBUTING b/vendor/github.com/google/subcommands/CONTRIBUTING new file mode 100644 index 0000000000..2827b7d3fa --- /dev/null +++ b/vendor/github.com/google/subcommands/CONTRIBUTING @@ -0,0 +1,27 @@ +Want to contribute? Great! First, read this page (including the small print at the end). + +### Before you contribute +Before we can use your code, you must sign the +[Google Individual Contributor License Agreement] +(https://cla.developers.google.com/about/google-individual) +(CLA), which you can do online. The CLA is necessary mainly because you own the +copyright to your changes, even after your contribution becomes part of our +codebase, so we need your permission to use and distribute your code. We also +need to be sure of various other things—for instance that you'll tell us if you +know that your code infringes on other people's patents. You don't have to sign +the CLA until after you've submitted your code for review and a member has +approved it, but you must do it before we can put your code into our codebase. +Before you start working on a larger contribution, you should get in touch with +us first through the issue tracker with your idea so that we can help out and +possibly guide you. Coordinating up front makes it much easier to avoid +frustration later on. + +### Code reviews +All submissions, including submissions by project members, require review. We +use Github pull requests for this purpose. + +### The small print +Contributions made by corporations are covered by a different agreement than +the one above, the +[Software Grant and Corporate Contributor License Agreement] +(https://cla.developers.google.com/about/google-corporate). diff --git a/vendor/github.com/google/subcommands/LICENSE b/vendor/github.com/google/subcommands/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/vendor/github.com/google/subcommands/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/google/subcommands/README.md b/vendor/github.com/google/subcommands/README.md new file mode 100644 index 0000000000..c769745c55 --- /dev/null +++ b/vendor/github.com/google/subcommands/README.md @@ -0,0 +1,67 @@ +# subcommands # + +[![GoDoc](https://godoc.org/github.com/google/subcommands?status.svg)](https://godoc.org/github.com/google/subcommands) +Subcommands is a Go package that implements a simple way for a single command to +have many subcommands, each of which takes arguments and so forth. + +This is not an official Google product. + +## Usage ## + +Set up a 'print' subcommand: + +```go +import ( + "context" + "flag" + "fmt" + "os" + "strings" + + "github.com/google/subcommands" +) + +type printCmd struct { + capitalize bool +} + +func (*printCmd) Name() string { return "print" } +func (*printCmd) Synopsis() string { return "Print args to stdout." } +func (*printCmd) Usage() string { + return `print [-capitalize] : + Print args to stdout. 
+` +} + +func (p *printCmd) SetFlags(f *flag.FlagSet) { + f.BoolVar(&p.capitalize, "capitalize", false, "capitalize output") +} + +func (p *printCmd) Execute(_ context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus { + for _, arg := range f.Args() { + if p.capitalize { + arg = strings.ToUpper(arg) + } + fmt.Printf("%s ", arg) + } + fmt.Println() + return subcommands.ExitSuccess +} +``` + +Register using the default Commander, also use some built in subcommands, +finally run Execute using ExitStatus as the exit code: + +```go +func main() { + subcommands.Register(subcommands.HelpCommand(), "") + subcommands.Register(subcommands.FlagsCommand(), "") + subcommands.Register(subcommands.CommandsCommand(), "") + subcommands.Register(&printCmd{}, "") + + flag.Parse() + ctx := context.Background() + os.Exit(int(subcommands.Execute(ctx))) +} +``` + diff --git a/vendor/github.com/google/subcommands/go.mod b/vendor/github.com/google/subcommands/go.mod new file mode 100644 index 0000000000..f502431f96 --- /dev/null +++ b/vendor/github.com/google/subcommands/go.mod @@ -0,0 +1 @@ +module github.com/google/subcommands diff --git a/vendor/github.com/google/subcommands/subcommands.go b/vendor/github.com/google/subcommands/subcommands.go new file mode 100644 index 0000000000..9cb98e5cce --- /dev/null +++ b/vendor/github.com/google/subcommands/subcommands.go @@ -0,0 +1,440 @@ +/* +Copyright 2016 Google Inc. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package subcommands implements a simple way for a single command to have many +// subcommands, each of which takes arguments and so forth. +package subcommands + +import ( + "context" + "flag" + "fmt" + "io" + "os" + "path" + "sort" + "strings" +) + +// A Command represents a single command. +type Command interface { + // Name returns the name of the command. + Name() string + + // Synopsis returns a short string (less than one line) describing the command. + Synopsis() string + + // Usage returns a long string explaining the command and giving usage + // information. + Usage() string + + // SetFlags adds the flags for this command to the specified set. + SetFlags(*flag.FlagSet) + + // Execute executes the command and returns an ExitStatus. + Execute(ctx context.Context, f *flag.FlagSet, args ...interface{}) ExitStatus +} + +// A Commander represents a set of commands. +type Commander struct { + commands []*commandGroup + topFlags *flag.FlagSet // top-level flags + important []string // important top-level flags + name string // normally path.Base(os.Args[0]) + + Output io.Writer // Output specifies where the commander should write its output (default: os.Stdout). + Error io.Writer // Error specifies where the commander should write its error (default: os.Stderr). +} + +// A commandGroup represents a set of commands about a common topic. +type commandGroup struct { + name string + commands []Command +} + +// An ExitStatus represents a Posix exit status that a subcommand +// expects to be returned to the shell. 
+type ExitStatus int + +const ( + ExitSuccess ExitStatus = iota + ExitFailure + ExitUsageError +) + +// NewCommander returns a new commander with the specified top-level +// flags and command name. The Usage function for the topLevelFlags +// will be set as well. +func NewCommander(topLevelFlags *flag.FlagSet, name string) *Commander { + cdr := &Commander{ + topFlags: topLevelFlags, + name: name, + Output: os.Stdout, + Error: os.Stderr, + } + topLevelFlags.Usage = func() { cdr.explain(cdr.Error) } + return cdr +} + +// Register adds a subcommand to the supported subcommands in the +// specified group. (Help output is sorted and arranged by group name.) +// The empty string is an acceptable group name; such subcommands are +// explained first before named groups. +func (cdr *Commander) Register(cmd Command, group string) { + for _, g := range cdr.commands { + if g.name == group { + g.commands = append(g.commands, cmd) + return + } + } + cdr.commands = append(cdr.commands, &commandGroup{ + name: group, + commands: []Command{cmd}, + }) +} + +// ImportantFlag marks a top-level flag as important, which means it +// will be printed out as part of the output of an ordinary "help" +// subcommand. (All flags, important or not, are printed by the +// "flags" subcommand.) +func (cdr *Commander) ImportantFlag(name string) { + cdr.important = append(cdr.important, name) +} + +// Execute should be called once the top-level-flags on a Commander +// have been initialized. It finds the correct subcommand and executes +// it, and returns an ExitStatus with the result. On a usage error, an +// appropriate message is printed to os.Stderr, and ExitUsageError is +// returned. The additional args are provided as-is to the Execute method +// of the selected Command. +func (cdr *Commander) Execute(ctx context.Context, args ...interface{}) ExitStatus { + if cdr.topFlags.NArg() < 1 { + cdr.topFlags.Usage() + return ExitUsageError + } + + name := cdr.topFlags.Arg(0) + + for _, group := range cdr.commands { + for _, cmd := range group.commands { + if name != cmd.Name() { + continue + } + f := flag.NewFlagSet(name, flag.ContinueOnError) + f.Usage = func() { explain(cdr.Error, cmd) } + cmd.SetFlags(f) + if f.Parse(cdr.topFlags.Args()[1:]) != nil { + return ExitUsageError + } + return cmd.Execute(ctx, f, args...) + } + } + + // Cannot find this command. + cdr.topFlags.Usage() + return ExitUsageError +} + +// Sorting of a slice of command groups. +type byGroupName []*commandGroup + +func (p byGroupName) Len() int { return len(p) } +func (p byGroupName) Less(i, j int) bool { return p[i].name < p[j].name } +func (p byGroupName) Swap(i, j int) { p[i], p[j] = p[j], p[i] } + +// explain prints a brief description of all the subcommands and the +// important top-level flags. +func (cdr *Commander) explain(w io.Writer) { + fmt.Fprintf(w, "Usage: %s \n\n", cdr.name) + sort.Sort(byGroupName(cdr.commands)) + for _, group := range cdr.commands { + explainGroup(w, group) + } + if cdr.topFlags == nil { + fmt.Fprintln(w, "\nNo top level flags.") + return + } + if len(cdr.important) == 0 { + fmt.Fprintf(w, "\nUse \"%s flags\" for a list of top-level flags\n", cdr.name) + return + } + + fmt.Fprintf(w, "\nTop-level flags (use \"%s flags\" for a full list):\n", cdr.name) + for _, name := range cdr.important { + f := cdr.topFlags.Lookup(name) + if f == nil { + panic(fmt.Sprintf("Important flag (%s) is not defined", name)) + } + fmt.Fprintf(w, " -%s=%s: %s\n", f.Name, f.DefValue, f.Usage) + } +} + +// Sorting of the commands within a group. 
+func (g commandGroup) Len() int { return len(g.commands) } +func (g commandGroup) Less(i, j int) bool { return g.commands[i].Name() < g.commands[j].Name() } +func (g commandGroup) Swap(i, j int) { g.commands[i], g.commands[j] = g.commands[j], g.commands[i] } + +// explainGroup explains all the subcommands for a particular group. +func explainGroup(w io.Writer, group *commandGroup) { + if len(group.commands) == 0 { + return + } + if group.name == "" { + fmt.Fprintf(w, "Subcommands:\n") + } else { + fmt.Fprintf(w, "Subcommands for %s:\n", group.name) + } + sort.Sort(group) + + aliases := make(map[string][]string) + for _, cmd := range group.commands { + if alias, ok := cmd.(*aliaser); ok { + root := dealias(alias).Name() + + if _, ok := aliases[root]; !ok { + aliases[root] = []string{} + } + aliases[root] = append(aliases[root], alias.Name()) + } + } + + for _, cmd := range group.commands { + if _, ok := cmd.(*aliaser); ok { + continue + } + + name := cmd.Name() + names := []string{name} + + if a, ok := aliases[name]; ok { + names = append(names, a...) + } + + fmt.Fprintf(w, "\t%-15s %s\n", strings.Join(names, ", "), cmd.Synopsis()) + } + fmt.Fprintln(w) +} + +// explainCmd prints a brief description of a single command. +func explain(w io.Writer, cmd Command) { + fmt.Fprintf(w, "%s", cmd.Usage()) + subflags := flag.NewFlagSet(cmd.Name(), flag.PanicOnError) + subflags.SetOutput(w) + cmd.SetFlags(subflags) + subflags.PrintDefaults() +} + +// A helper is a Command implementing a "help" command for +// a given Commander. +type helper Commander + +func (h *helper) Name() string { return "help" } +func (h *helper) Synopsis() string { return "describe subcommands and their syntax" } +func (h *helper) SetFlags(*flag.FlagSet) {} +func (h *helper) Usage() string { + return `help []: + With an argument, prints detailed information on the use of + the specified subcommand. With no argument, print a list of + all commands and a brief description of each. +` +} +func (h *helper) Execute(_ context.Context, f *flag.FlagSet, args ...interface{}) ExitStatus { + switch f.NArg() { + case 0: + (*Commander)(h).explain(h.Output) + return ExitSuccess + + case 1: + for _, group := range h.commands { + for _, cmd := range group.commands { + if f.Arg(0) != cmd.Name() { + continue + } + explain(h.Output, cmd) + return ExitSuccess + } + } + fmt.Fprintf(h.Error, "Subcommand %s not understood\n", f.Arg(0)) + } + + f.Usage() + return ExitUsageError +} + +// HelpCommand returns a Command which implements a "help" subcommand. +func (cdr *Commander) HelpCommand() Command { + return (*helper)(cdr) +} + +// A flagger is a Command implementing a "flags" command for a given Commander. +type flagger Commander + +func (flg *flagger) Name() string { return "flags" } +func (flg *flagger) Synopsis() string { return "describe all known top-level flags" } +func (flg *flagger) SetFlags(*flag.FlagSet) {} +func (flg *flagger) Usage() string { + return `flags []: + With an argument, print all flags of . Else, + print a description of all known top-level flags. (The basic + help information only discusses the most generally important + top-level flags.) 
+` +} +func (flg *flagger) Execute(_ context.Context, f *flag.FlagSet, _ ...interface{}) ExitStatus { + if f.NArg() > 1 { + f.Usage() + return ExitUsageError + } + + if f.NArg() == 0 { + if flg.topFlags == nil { + fmt.Fprintln(flg.Output, "No top-level flags are defined.") + } else { + flg.topFlags.PrintDefaults() + } + return ExitSuccess + } + + for _, group := range flg.commands { + for _, cmd := range group.commands { + if f.Arg(0) != cmd.Name() { + continue + } + subflags := flag.NewFlagSet(cmd.Name(), flag.PanicOnError) + subflags.SetOutput(flg.Output) + cmd.SetFlags(subflags) + subflags.PrintDefaults() + return ExitSuccess + } + } + fmt.Fprintf(flg.Error, "Subcommand %s not understood\n", f.Arg(0)) + return ExitFailure +} + +// FlagsCommand returns a Command which implements a "flags" subcommand. +func (cdr *Commander) FlagsCommand() Command { + return (*flagger)(cdr) +} + +// A lister is a Command implementing a "commands" command for a given Commander. +type lister Commander + +func (l *lister) Name() string { return "commands" } +func (l *lister) Synopsis() string { return "list all command names" } +func (l *lister) SetFlags(*flag.FlagSet) {} +func (l *lister) Usage() string { + return `commands: + Print a list of all commands. +` +} +func (l *lister) Execute(_ context.Context, f *flag.FlagSet, _ ...interface{}) ExitStatus { + if f.NArg() != 0 { + f.Usage() + return ExitUsageError + } + + for _, group := range l.commands { + for _, cmd := range group.commands { + fmt.Fprintf(l.Output, "%s\n", cmd.Name()) + } + } + return ExitSuccess +} + +// CommandsCommand returns Command which implements a "commands" subcommand. +func (cdr *Commander) CommandsCommand() Command { + return (*lister)(cdr) +} + +// An aliaser is a Command wrapping another Command but returning a +// different name as its alias. +type aliaser struct { + alias string + Command +} + +func (a *aliaser) Name() string { return a.alias } + +// Alias returns a Command alias which implements a "commands" subcommand. +func Alias(alias string, cmd Command) Command { + return &aliaser{alias, cmd} +} + +// dealias recursivly dealiases a command until a non-aliased command +// is reached. +func dealias(cmd Command) Command { + if alias, ok := cmd.(*aliaser); ok { + return dealias(alias.Command) + } + + return cmd +} + +// DefaultCommander is the default commander using flag.CommandLine for flags +// and os.Args[0] for the command name. +var DefaultCommander *Commander + +func init() { + DefaultCommander = NewCommander(flag.CommandLine, path.Base(os.Args[0])) +} + +// Register adds a subcommand to the supported subcommands in the +// specified group. (Help output is sorted and arranged by group +// name.) The empty string is an acceptable group name; such +// subcommands are explained first before named groups. It is a +// wrapper around DefaultCommander.Register. +func Register(cmd Command, group string) { + DefaultCommander.Register(cmd, group) +} + +// ImportantFlag marks a top-level flag as important, which means it +// will be printed out as part of the output of an ordinary "help" +// subcommand. (All flags, important or not, are printed by the +// "flags" subcommand.) It is a wrapper around +// DefaultCommander.ImportantFlag. +func ImportantFlag(name string) { + DefaultCommander.ImportantFlag(name) +} + +// Execute should be called once the default flags have been +// initialized by flag.Parse. It finds the correct subcommand and +// executes it, and returns an ExitStatus with the result. 
On a usage +// error, an appropriate message is printed to os.Stderr, and +// ExitUsageError is returned. The additional args are provided as-is +// to the Execute method of the selected Command. It is a wrapper +// around DefaultCommander.Execute. +func Execute(ctx context.Context, args ...interface{}) ExitStatus { + return DefaultCommander.Execute(ctx, args...) +} + +// HelpCommand returns a Command which implements "help" for the +// DefaultCommander. Use Register(HelpCommand(), ) for it to be +// recognized. +func HelpCommand() Command { + return DefaultCommander.HelpCommand() +} + +// FlagsCommand returns a Command which implements "flags" for the +// DefaultCommander. Use Register(FlagsCommand(), ) for it to be +// recognized. +func FlagsCommand() Command { + return DefaultCommander.FlagsCommand() +} + +// CommandsCommand returns Command which implements a "commands" subcommand. +func CommandsCommand() Command { + return DefaultCommander.CommandsCommand() +} diff --git a/vendor/github.com/google/wire/.codecov.yml b/vendor/github.com/google/wire/.codecov.yml new file mode 100644 index 0000000000..5ae6b8355c --- /dev/null +++ b/vendor/github.com/google/wire/.codecov.yml @@ -0,0 +1,13 @@ +comment: off +coverage: + status: + project: + default: + target: 0 + threshold: null + base: auto + patch: + default: + target: 0 + threshold: null + base: auto diff --git a/vendor/github.com/google/wire/.contributebot b/vendor/github.com/google/wire/.contributebot new file mode 100644 index 0000000000..9a66b3babd --- /dev/null +++ b/vendor/github.com/google/wire/.contributebot @@ -0,0 +1,4 @@ +{ + "issue_title_pattern": "^.*$", + "pull_request_title_response": "Please edit the title of this pull request with the name of the affected component, or \"all\", followed by a colon, followed by a short summary of the change." +} diff --git a/vendor/github.com/google/wire/.travis.yml b/vendor/github.com/google/wire/.travis.yml new file mode 100644 index 0000000000..680a5003ac --- /dev/null +++ b/vendor/github.com/google/wire/.travis.yml @@ -0,0 +1,53 @@ +# Copyright 2018 The Wire Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +language: go +go_import_path: github.com/google/wire + +before_install: + # The Bash that comes with OS X is ancient. + # grep is similar: it's not GNU grep, which means commands aren't portable. + # Homebrew installs grep as ggrep if you don't build from source, so it needs + # moving so it takes precedence in the PATH. + - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then + HOMEBREW_NO_AUTO_UPDATE=1 brew install bash grep; + mv $(brew --prefix)/bin/ggrep $(brew --prefix)/bin/grep; + fi + +install: + # Re-checkout files preserving line feeds. This prevents Windows builds from + # converting \n to \r\n. + - "git config --global core.autocrlf input" + - "git checkout -- ." 
+ +script: + - 'internal/runtests.sh' + +env: + global: + - GO111MODULE=on + - GOPROXY=https://proxy.golang.org + +# When updating Go versions: +# In addition to changing the "go:" versions below, edit the version +# test in internal/runtests.sh. + +jobs: + include: + - go: "1.13.x" + os: linux + - go: "1.13.x" + os: osx + - go: "1.13.x" + os: windows diff --git a/vendor/github.com/google/wire/AUTHORS b/vendor/github.com/google/wire/AUTHORS new file mode 100644 index 0000000000..4d8d4b3197 --- /dev/null +++ b/vendor/github.com/google/wire/AUTHORS @@ -0,0 +1,18 @@ +# This is the official list of Wire authors for copyright purposes. +# This file is distinct from the CONTRIBUTORS files. +# See the latter for an explanation. + +# Names should be added to this file as one of +# Organization's name +# Individual's name +# Individual's name +# See CONTRIBUTORS for the meaning of multiple email addresses. + +# Please keep the list sorted. + +Google LLC +ktr +Kumbirai Tanekha +Oleg Kovalov +Yoichiro Shimizu +Zachary Romero diff --git a/vendor/github.com/google/wire/CODE_OF_CONDUCT.md b/vendor/github.com/google/wire/CODE_OF_CONDUCT.md new file mode 100644 index 0000000000..3a8545eccf --- /dev/null +++ b/vendor/github.com/google/wire/CODE_OF_CONDUCT.md @@ -0,0 +1,10 @@ +# Code of Conduct + +This project is covered under the [Go Code of Conduct][]. In summary: + +- Treat everyone with respect and kindness. +- Be thoughtful in how you communicate. +- Don’t be destructive or inflammatory. +- If you encounter an issue, please mail conduct@golang.org. + +[Go Code of Conduct]: https://golang.org/conduct diff --git a/vendor/github.com/google/wire/CONTRIBUTING.md b/vendor/github.com/google/wire/CONTRIBUTING.md new file mode 100644 index 0000000000..68445fc463 --- /dev/null +++ b/vendor/github.com/google/wire/CONTRIBUTING.md @@ -0,0 +1,152 @@ +# How to Contribute + +We would love to accept your patches and contributions to this project. Here is +how you can help. + +## Filing issues + +Filing issues is an important way you can contribute to the Wire Project. We +want your feedback on things like bugs, desired API changes, or just anything +that isn't working for you. + +### Bugs + +If your issue is a bug, open one +[here](https://github.com/google/wire/issues/new). The easiest way to file an +issue with all the right information is to run `go bug`. `go bug` will print out +a handy template of questions and system information that will help us get to +the root of the issue quicker. + +### Changes + +Unlike the core Go project, we do not have a formal proposal process for +changes. If you have a change you would like to see in Wire, please file an +issue with the necessary details. + +### Triaging + +The Go Cloud team triages issues at least every two weeks, but usually within +two business days. Bugs or feature requests are either placed into a **Sprint** +milestone which means the issue is intended to be worked on. Issues that we +would like to address but do not have time for are placed into the [Unplanned][] +milestone. + +[Unplanned]: https://github.com/google/wire/milestone/1 + +## Contributing Code + +We love accepting contributions! If your change is minor, please feel free +submit a [pull request](https://help.github.com/articles/about-pull-requests/). +If your change is larger, or adds a feature, please file an issue beforehand so +that we can discuss the change. 
You're welcome to file an implementation pull +request immediately as well, although we generally lean towards discussing the +change and then reviewing the implementation separately. + +### Finding something to work on + +If you want to write some code, but don't know where to start or what you might +want to do, take a look at our [Unplanned][] milestone. This is where you can +find issues we would like to address but can't currently find time for. See if +any of the latest ones look interesting! If you need help before you can start +work, you can comment on the issue and we will try to help as best we can. + +### Contributor License Agreement + +Contributions to this project can only be made by those who have signed Google's +Contributor License Agreement. You (or your employer) retain the copyright to +your contribution, this simply gives us permission to use and redistribute your +contributions as part of the project. Head over to + to see your current agreements on file or +to sign a new one. + +As a personal contributor, you only need to sign the Google CLA once across all +Google projects. If you've already signed the CLA, there is no need to do it +again. If you are submitting code on behalf of your employer, there's +[a separate corporate CLA that your employer manages for you](https://opensource.google.com/docs/cla/#external-contributors). + +## Making a pull request + +* Follow the normal + [pull request flow](https://help.github.com/articles/creating-a-pull-request/) +* Build your changes using Go 1.11 with Go modules enabled. Wire's continuous + integration uses Go modules in order to ensure + [reproducible builds](https://research.swtch.com/vgo-repro). +* Test your changes using `go test ./...`. Please add tests that show the + change does what it says it does, even if there wasn't a test in the first + place. +* Feel free to make as many commits as you want; we will squash them all into + a single commit before merging your change. +* Check the diffs, write a useful description (including something like + `Fixes #123` if it's fixing a bug) and send the PR out. +* [Travis CI](http://travis-ci.com) will run tests against the PR. This should + happen within 10 minutes or so. If a test fails, go back to the coding stage + and try to fix the test and push the same branch again. You won't need to + make a new pull request, the changes will be rolled directly into the PR you + already opened. Wait for Travis again. There is no need to assign a reviewer + to the PR, the project team will assign someone for review during the + standard [triage](#triaging) process. + +## Code review + +All submissions, including submissions by project members, require review. It is +almost never the case that a pull request is accepted without some changes +requested, so please do not be offended! + +When you have finished making requested changes to your pull request, please +make a comment containing "PTAL" (Please Take Another Look) on your pull +request. GitHub notifications can be noisy, and it is unfortunately easy for +things to be lost in the shuffle. + +Once your PR is approved (hooray!) the reviewer will squash your commits into a +single commit, and then merge the commit onto the Wire master branch. Thank you! + +## Github code review workflow conventions + +(For project members and frequent contributors.) + +As a contributor: + +- Try hard to make each Pull Request as small and focused as possible. 
In + particular, this means that if a reviewer asks you to do something that is + beyond the scope of the Pull Request, the best practice is to file another + issue and reference it from the Pull Request rather than just adding more + commits to the existing PR. +- Adding someone as a Reviewer means "please feel free to look and comment"; + the review is optional. Choose as many Reviewers as you'd like. +- Adding someone as an Assignee means that the Pull Request should not be + submitted until they approve. If you choose multiple Assignees, wait until + all of them approve. It is fine to ask someone if they are OK with being + removed as an Assignee. + - Note that if you don't select any assignees, ContributeBot will turn all + of your Reviewers into Assignees. +- Make as many commits as you want locally, but try not to push them to Github + until you've addressed comments; this allows the email notification about + the push to be a signal to reviewers that the PR is ready to be looked at + again. +- When there may be confusion about what should happen next for a PR, be + explicit; add a "PTAL" comment if it is ready for review again, or a "Please + hold off on reviewing for now" if you are still working on addressing + comments. +- "Resolve" comments that you are sure you've addressed; let your reviewers + resolve ones that you're not sure about. +- Do not use `git push --force`; this can cause comments from your reviewers + that are associated with a specific commit to be lost. This implies that + once you've sent a Pull Request, you should use `git merge` instead of `git + rebase` to incorporate commits from the master branch. + +As a reviewer: + +- Be timely in your review process, especially if you are an Assignee. +- Try to use `Start a Review` instead of single comments, to reduce email + spam. +- "Resolve" your own comments if they have been addressed. +- If you want your review to be blocking, and are not currently an Assignee, + add yourself as an Assignee. + +When squashing-and-merging: + +- Ensure that **all** of the Assignees have approved. +- Do a final review of the one-line PR summary, ensuring that it accurately + describes the change. +- Delete the automatically added commit lines; these are generally not + interesting and make commit history harder to read. diff --git a/vendor/github.com/google/wire/CONTRIBUTORS b/vendor/github.com/google/wire/CONTRIBUTORS new file mode 100644 index 0000000000..00a94f89ca --- /dev/null +++ b/vendor/github.com/google/wire/CONTRIBUTORS @@ -0,0 +1,43 @@ +# This is the official list of people who can contribute +# (and typically have contributed) code to the Wire repository. +# The AUTHORS file lists the copyright holders; this file +# lists people. For example, Google employees are listed here +# but not in AUTHORS, because Google holds the copyright. +# +# Names should be added to this file only after verifying that +# the individual or the individual's organization has agreed to +# the appropriate Contributor License Agreement, found here: +# +# http://code.google.com/legal/individual-cla-v1.0.html +# http://code.google.com/legal/corporate-cla-v1.0.html +# +# The agreement for individuals can be filled out on the web. +# +# When adding J Random Contributor's name to this file, +# either J's name or J's organization's name should be +# added to the AUTHORS file, depending on whether the +# individual or corporate CLA was used. 
+ +# Names should be added to this file like so: +# Individual's name +# Individual's name +# +# An entry with multiple email addresses specifies that the +# first address should be used in the submit logs and +# that the other addresses should be recognized as the +# same person when interacting with Git. + +# Please keep the list sorted. + +Chris Lewis +Christina Austin <4240737+clausti@users.noreply.github.com> +Eno Compton +Issac Trotts +ktr +Kumbirai Tanekha +Oleg Kovalov +Robert van Gent +Ross Light +Tuo Shan +Yoichiro Shimizu +Zachary Romero diff --git a/vendor/github.com/google/wire/LICENSE b/vendor/github.com/google/wire/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/vendor/github.com/google/wire/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/google/wire/README.md b/vendor/github.com/google/wire/README.md new file mode 100644 index 0000000000..d432b63374 --- /dev/null +++ b/vendor/github.com/google/wire/README.md @@ -0,0 +1,60 @@ +# Wire: Automated Initialization in Go + +[![Build Status](https://travis-ci.com/google/wire.svg?branch=master)][travis] +[![godoc](https://godoc.org/github.com/google/wire?status.svg)][godoc] +[![Coverage](https://codecov.io/gh/google/wire/branch/master/graph/badge.svg)](https://codecov.io/gh/google/wire) + + +Wire is a code generation tool that automates connecting components using +[dependency injection][]. Dependencies between components are represented in +Wire as function parameters, encouraging explicit initialization instead of +global variables. Because Wire operates without runtime state or reflection, +code written to be used with Wire is useful even for hand-written +initialization. + +For an overview, see the [introductory blog post][]. + +[dependency injection]: https://en.wikipedia.org/wiki/Dependency_injection +[introductory blog post]: https://blog.golang.org/wire +[godoc]: https://godoc.org/github.com/google/wire +[travis]: https://travis-ci.com/google/wire + +## Installing + +Install Wire by running: + +```shell +go get github.com/google/wire/cmd/wire +``` + +and ensuring that `$GOPATH/bin` is added to your `$PATH`. + +## Documentation + +- [Tutorial][] +- [User Guide][] +- [Best Practices][] +- [FAQ][] + +[Tutorial]: ./_tutorial/README.md +[Best Practices]: ./docs/best-practices.md +[FAQ]: ./docs/faq.md +[User Guide]: ./docs/guide.md + +## Project status + +As of version v0.3.0, Wire is *beta* and is considered feature complete. It +works well for the tasks it was designed to perform, and we prefer to keep it +as simple as possible. + +We'll not be accepting new features at this time, but will gladly accept bug +reports and fixes. + +## Community + +You can contact us on the [go-cloud mailing list][]. + +This project is covered by the Go [Code of Conduct][]. 
+ +[Code of Conduct]: ./CODE_OF_CONDUCT.md +[go-cloud mailing list]: https://groups.google.com/forum/#!forum/go-cloud diff --git a/vendor/github.com/google/wire/cmd/wire/main.go b/vendor/github.com/google/wire/cmd/wire/main.go new file mode 100644 index 0000000000..36c06e98ac --- /dev/null +++ b/vendor/github.com/google/wire/cmd/wire/main.go @@ -0,0 +1,596 @@ +// Copyright 2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Wire is a compile-time dependency injection tool. +// +// For an overview, see https://github.com/google/wire/blob/master/README.md +package main + +import ( + "context" + "flag" + "fmt" + "go/token" + "go/types" + "io/ioutil" + "log" + "os" + "reflect" + "sort" + "strconv" + "strings" + + "github.com/google/subcommands" + "github.com/google/wire/internal/wire" + "github.com/pmezard/go-difflib/difflib" + "golang.org/x/tools/go/types/typeutil" +) + +func main() { + subcommands.Register(subcommands.CommandsCommand(), "") + subcommands.Register(subcommands.FlagsCommand(), "") + subcommands.Register(subcommands.HelpCommand(), "") + subcommands.Register(&checkCmd{}, "") + subcommands.Register(&diffCmd{}, "") + subcommands.Register(&genCmd{}, "") + subcommands.Register(&showCmd{}, "") + flag.Parse() + + // Initialize the default logger to log to stderr. + log.SetFlags(0) + log.SetPrefix("wire: ") + log.SetOutput(os.Stderr) + + // TODO(rvangent): Use subcommands's VisitCommands instead of hardcoded map, + // once there is a release that contains it: + // allCmds := map[string]bool{} + // subcommands.DefaultCommander.VisitCommands(func(_ *subcommands.CommandGroup, cmd subcommands.Command) { allCmds[cmd.Name()] = true }) + allCmds := map[string]bool{ + "commands": true, // builtin + "help": true, // builtin + "flags": true, // builtin + "check": true, + "diff": true, + "gen": true, + "show": true, + } + // Default to running the "gen" command. + if args := flag.Args(); len(args) == 0 || !allCmds[args[0]] { + genCmd := &genCmd{} + os.Exit(int(genCmd.Execute(context.Background(), flag.CommandLine))) + } + os.Exit(int(subcommands.Execute(context.Background()))) +} + +// packages returns the slice of packages to run wire over based on f. +// It defaults to ".". +func packages(f *flag.FlagSet) []string { + pkgs := f.Args() + if len(pkgs) == 0 { + pkgs = []string{"."} + } + return pkgs +} + +// newGenerateOptions returns an initialized wire.GenerateOptions, possibly +// with the Header option set. 
+func newGenerateOptions(headerFile string) (*wire.GenerateOptions, error) { + opts := new(wire.GenerateOptions) + if headerFile != "" { + var err error + opts.Header, err = ioutil.ReadFile(headerFile) + if err != nil { + return nil, fmt.Errorf("failed to read header file %q: %v", headerFile, err) + } + } + return opts, nil +} + +type genCmd struct { + headerFile string + prefixFileName string +} + +func (*genCmd) Name() string { return "gen" } +func (*genCmd) Synopsis() string { + return "generate the wire_gen.go file for each package" +} +func (*genCmd) Usage() string { + return `gen [packages] + + Given one or more packages, gen creates the wire_gen.go file for each. + + If no packages are listed, it defaults to ".". +` +} +func (cmd *genCmd) SetFlags(f *flag.FlagSet) { + f.StringVar(&cmd.headerFile, "header_file", "", "path to file to insert as a header in wire_gen.go") + f.StringVar(&cmd.prefixFileName, "output_file_prefix", "", "string to prepend to output file names.") +} + +func (cmd *genCmd) Execute(ctx context.Context, f *flag.FlagSet, args ...interface{}) subcommands.ExitStatus { + wd, err := os.Getwd() + if err != nil { + log.Println("failed to get working directory: ", err) + return subcommands.ExitFailure + } + opts, err := newGenerateOptions(cmd.headerFile) + if err != nil { + log.Println(err) + return subcommands.ExitFailure + } + + opts.PrefixOutputFile = cmd.prefixFileName + + outs, errs := wire.Generate(ctx, wd, os.Environ(), packages(f), opts) + if len(errs) > 0 { + logErrors(errs) + log.Println("generate failed") + return subcommands.ExitFailure + } + if len(outs) == 0 { + return subcommands.ExitSuccess + } + success := true + for _, out := range outs { + if len(out.Errs) > 0 { + logErrors(out.Errs) + log.Printf("%s: generate failed\n", out.PkgPath) + success = false + } + if len(out.Content) == 0 { + // No Wire output. Maybe errors, maybe no Wire directives. + continue + } + if err := out.Commit(); err == nil { + log.Printf("%s: wrote %s\n", out.PkgPath, out.OutputPath) + } else { + log.Printf("%s: failed to write %s: %v\n", out.PkgPath, out.OutputPath, err) + success = false + } + } + if !success { + log.Println("at least one generate failure") + return subcommands.ExitFailure + } + return subcommands.ExitSuccess +} + +type diffCmd struct { + headerFile string +} + +func (*diffCmd) Name() string { return "diff" } +func (*diffCmd) Synopsis() string { + return "output a diff between existing wire_gen.go files and what gen would generate" +} +func (*diffCmd) Usage() string { + return `diff [packages] + + Given one or more packages, diff generates the content for their wire_gen.go + files and outputs the diff against the existing files. + + If no packages are listed, it defaults to ".". + + Similar to the diff command, it returns 0 if no diff, 1 if different, 2 + plus an error if trouble. 
+` +} +func (cmd *diffCmd) SetFlags(f *flag.FlagSet) { + f.StringVar(&cmd.headerFile, "header_file", "", "path to file to insert as a header in wire_gen.go") +} +func (cmd *diffCmd) Execute(ctx context.Context, f *flag.FlagSet, args ...interface{}) subcommands.ExitStatus { + const ( + errReturn = subcommands.ExitStatus(2) + diffReturn = subcommands.ExitStatus(1) + ) + wd, err := os.Getwd() + if err != nil { + log.Println("failed to get working directory: ", err) + return errReturn + } + opts, err := newGenerateOptions(cmd.headerFile) + if err != nil { + log.Println(err) + return subcommands.ExitFailure + } + + outs, errs := wire.Generate(ctx, wd, os.Environ(), packages(f), opts) + if len(errs) > 0 { + logErrors(errs) + log.Println("generate failed") + return errReturn + } + if len(outs) == 0 { + return subcommands.ExitSuccess + } + success := true + hadDiff := false + for _, out := range outs { + if len(out.Errs) > 0 { + logErrors(out.Errs) + log.Printf("%s: generate failed\n", out.PkgPath) + success = false + } + if len(out.Content) == 0 { + // No Wire output. Maybe errors, maybe no Wire directives. + continue + } + // Assumes the current file is empty if we can't read it. + cur, _ := ioutil.ReadFile(out.OutputPath) + if diff, err := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{ + A: difflib.SplitLines(string(cur)), + B: difflib.SplitLines(string(out.Content)), + }); err == nil { + if diff != "" { + // Print the actual diff to stdout, not stderr. + fmt.Printf("%s: diff from %s:\n%s\n", out.PkgPath, out.OutputPath, diff) + hadDiff = true + } + } else { + log.Printf("%s: failed to diff %s: %v\n", out.PkgPath, out.OutputPath, err) + success = false + } + } + if !success { + log.Println("at least one generate failure") + return errReturn + } + if hadDiff { + return diffReturn + } + return subcommands.ExitSuccess +} + +type showCmd struct{} + +func (*showCmd) Name() string { return "show" } +func (*showCmd) Synopsis() string { + return "describe all top-level provider sets" +} +func (*showCmd) Usage() string { + return `show [packages] + + Given one or more packages, show finds all the provider sets declared as + top-level variables and prints what other provider sets they import and what + outputs they can produce, given possible inputs. It also lists any injector + functions defined in the package. + + If no packages are listed, it defaults to ".". 
+` +} +func (*showCmd) SetFlags(_ *flag.FlagSet) {} +func (*showCmd) Execute(ctx context.Context, f *flag.FlagSet, args ...interface{}) subcommands.ExitStatus { + wd, err := os.Getwd() + if err != nil { + log.Println("failed to get working directory: ", err) + return subcommands.ExitFailure + } + info, errs := wire.Load(ctx, wd, os.Environ(), packages(f)) + if info != nil { + keys := make([]wire.ProviderSetID, 0, len(info.Sets)) + for k := range info.Sets { + keys = append(keys, k) + } + sort.Slice(keys, func(i, j int) bool { + if keys[i].ImportPath == keys[j].ImportPath { + return keys[i].VarName < keys[j].VarName + } + return keys[i].ImportPath < keys[j].ImportPath + }) + for i, k := range keys { + if i > 0 { + fmt.Println() + } + outGroups, imports := gather(info, k) + fmt.Println(k) + for _, imp := range sortSet(imports) { + fmt.Printf("\t%s\n", imp) + } + for i := range outGroups { + fmt.Printf("\tOutputs given %s:\n", outGroups[i].name) + out := make(map[string]token.Pos, outGroups[i].outputs.Len()) + outGroups[i].outputs.Iterate(func(t types.Type, v interface{}) { + switch v := v.(type) { + case *wire.Provider: + out[types.TypeString(t, nil)] = v.Pos + case *wire.Value: + out[types.TypeString(t, nil)] = v.Pos + case *wire.Field: + out[types.TypeString(t, nil)] = v.Pos + default: + panic("unreachable") + } + }) + for _, t := range sortSet(out) { + fmt.Printf("\t\t%s\n", t) + fmt.Printf("\t\t\tat %v\n", info.Fset.Position(out[t])) + } + } + } + if len(info.Injectors) > 0 { + injectors := append([]*wire.Injector(nil), info.Injectors...) + sort.Slice(injectors, func(i, j int) bool { + if injectors[i].ImportPath == injectors[j].ImportPath { + return injectors[i].FuncName < injectors[j].FuncName + } + return injectors[i].ImportPath < injectors[j].ImportPath + }) + fmt.Println("\nInjectors:") + for _, in := range injectors { + fmt.Printf("\t%v\n", in) + } + } + } + if len(errs) > 0 { + logErrors(errs) + log.Println("error loading packages") + return subcommands.ExitFailure + } + return subcommands.ExitSuccess +} + +type checkCmd struct{} + +func (*checkCmd) Name() string { return "check" } +func (*checkCmd) Synopsis() string { + return "print any Wire errors found" +} +func (*checkCmd) Usage() string { + return `check [packages] + + Given one or more packages, check prints any type-checking or Wire errors + found with top-level variable provider sets or injector functions. + + If no packages are listed, it defaults to ".". +` +} +func (*checkCmd) SetFlags(_ *flag.FlagSet) {} +func (*checkCmd) Execute(ctx context.Context, f *flag.FlagSet, args ...interface{}) subcommands.ExitStatus { + wd, err := os.Getwd() + if err != nil { + log.Println("failed to get working directory: ", err) + return subcommands.ExitFailure + } + _, errs := wire.Load(ctx, wd, os.Environ(), packages(f)) + if len(errs) > 0 { + logErrors(errs) + log.Println("error loading packages") + return subcommands.ExitFailure + } + return subcommands.ExitSuccess +} + +type outGroup struct { + name string + inputs *typeutil.Map // values are not important + outputs *typeutil.Map // values are *wire.Provider, *wire.Value, or *wire.Field +} + +// gather flattens a provider set into outputs grouped by the inputs +// required to create them. As it flattens the provider set, it records +// the visited named provider sets as imports. +func gather(info *wire.Info, key wire.ProviderSetID) (_ []outGroup, imports map[string]struct{}) { + set := info.Sets[key] + hash := typeutil.MakeHasher() + + // Find imports. 
+ next := []*wire.ProviderSet{info.Sets[key]} + visited := make(map[*wire.ProviderSet]struct{}) + imports = make(map[string]struct{}) + for len(next) > 0 { + curr := next[len(next)-1] + next = next[:len(next)-1] + if _, found := visited[curr]; found { + continue + } + visited[curr] = struct{}{} + if curr.VarName != "" && !(curr.PkgPath == key.ImportPath && curr.VarName == key.VarName) { + imports[formatProviderSetName(curr.PkgPath, curr.VarName)] = struct{}{} + } + for _, imp := range curr.Imports { + next = append(next, imp) + } + } + + // Depth-first search to build groups. + var groups []outGroup + inputVisited := new(typeutil.Map) // values are int, indices into groups or -1 for input. + inputVisited.SetHasher(hash) + var stk []types.Type + for _, k := range set.Outputs() { + // Start a DFS by picking a random unvisited node. + if inputVisited.At(k) == nil { + stk = append(stk, k) + } + + // Run DFS + dfs: + for len(stk) > 0 { + curr := stk[len(stk)-1] + stk = stk[:len(stk)-1] + if inputVisited.At(curr) != nil { + continue + } + switch pv := set.For(curr); { + case pv.IsNil(): + // This is an input. + inputVisited.Set(curr, -1) + case pv.IsArg(): + // This is an injector argument. + inputVisited.Set(curr, -1) + case pv.IsProvider(): + // Try to see if any args haven't been visited. + p := pv.Provider() + allPresent := true + for _, arg := range p.Args { + if inputVisited.At(arg.Type) == nil { + allPresent = false + } + } + if !allPresent { + stk = append(stk, curr) + for _, arg := range p.Args { + if inputVisited.At(arg.Type) == nil { + stk = append(stk, arg.Type) + } + } + continue dfs + } + + // Build up set of input types, match to a group. + in := new(typeutil.Map) + in.SetHasher(hash) + for _, arg := range p.Args { + i := inputVisited.At(arg.Type).(int) + if i == -1 { + in.Set(arg.Type, true) + } else { + mergeTypeSets(in, groups[i].inputs) + } + } + for i := range groups { + if sameTypeKeys(groups[i].inputs, in) { + groups[i].outputs.Set(curr, p) + inputVisited.Set(curr, i) + continue dfs + } + } + out := new(typeutil.Map) + out.SetHasher(hash) + out.Set(curr, p) + inputVisited.Set(curr, len(groups)) + groups = append(groups, outGroup{ + inputs: in, + outputs: out, + }) + case pv.IsValue(): + v := pv.Value() + for i := range groups { + if groups[i].inputs.Len() == 0 { + groups[i].outputs.Set(curr, v) + inputVisited.Set(curr, i) + continue dfs + } + } + in := new(typeutil.Map) + in.SetHasher(hash) + out := new(typeutil.Map) + out.SetHasher(hash) + out.Set(curr, v) + inputVisited.Set(curr, len(groups)) + groups = append(groups, outGroup{ + inputs: in, + outputs: out, + }) + case pv.IsField(): + // Try to see if the parent struct hasn't been visited. + f := pv.Field() + if inputVisited.At(f.Parent) == nil { + stk = append(stk, curr, f.Parent) + continue + } + // Build the input map for the parent struct. + in := new(typeutil.Map) + in.SetHasher(hash) + i := inputVisited.At(f.Parent).(int) + if i == -1 { + in.Set(f.Parent, true) + } else { + mergeTypeSets(in, groups[i].inputs) + } + // Group all fields together under the same parent struct. + for i := range groups { + if sameTypeKeys(groups[i].inputs, in) { + groups[i].outputs.Set(curr, f) + inputVisited.Set(curr, i) + continue dfs + } + } + out := new(typeutil.Map) + out.SetHasher(hash) + out.Set(curr, f) + inputVisited.Set(curr, len(groups)) + groups = append(groups, outGroup{ + inputs: in, + outputs: out, + }) + default: + panic("unreachable") + } + } + } + + // Name and sort groups. 
+ for i := range groups { + if groups[i].inputs.Len() == 0 { + groups[i].name = "no inputs" + continue + } + instr := make([]string, 0, groups[i].inputs.Len()) + groups[i].inputs.Iterate(func(k types.Type, _ interface{}) { + instr = append(instr, types.TypeString(k, nil)) + }) + sort.Strings(instr) + groups[i].name = strings.Join(instr, ", ") + } + sort.Slice(groups, func(i, j int) bool { + if groups[i].inputs.Len() == groups[j].inputs.Len() { + return groups[i].name < groups[j].name + } + return groups[i].inputs.Len() < groups[j].inputs.Len() + }) + return groups, imports +} + +func mergeTypeSets(dst, src *typeutil.Map) { + src.Iterate(func(k types.Type, _ interface{}) { + dst.Set(k, true) + }) +} + +func sameTypeKeys(a, b *typeutil.Map) bool { + if a.Len() != b.Len() { + return false + } + same := true + a.Iterate(func(k types.Type, _ interface{}) { + if b.At(k) == nil { + same = false + } + }) + return same +} + +func sortSet(set interface{}) []string { + rv := reflect.ValueOf(set) + a := make([]string, 0, rv.Len()) + keys := rv.MapKeys() + for _, k := range keys { + a = append(a, k.String()) + } + sort.Strings(a) + return a +} + +func formatProviderSetName(importPath, varName string) string { + // Since varName is an identifier, it doesn't make sense to quote. + return strconv.Quote(importPath) + "." + varName +} + +func logErrors(errs []error) { + for _, err := range errs { + log.Println(strings.Replace(err.Error(), "\n", "\n\t", -1)) + } +} diff --git a/vendor/github.com/google/wire/go.mod b/vendor/github.com/google/wire/go.mod new file mode 100644 index 0000000000..b2233dc52b --- /dev/null +++ b/vendor/github.com/google/wire/go.mod @@ -0,0 +1,10 @@ +module github.com/google/wire + +go 1.12 + +require ( + github.com/google/go-cmp v0.2.0 + github.com/google/subcommands v1.0.1 + github.com/pmezard/go-difflib v1.0.0 + golang.org/x/tools v0.0.0-20190422233926-fe54fb35175b +) diff --git a/vendor/github.com/google/wire/go.sum b/vendor/github.com/google/wire/go.sum new file mode 100644 index 0000000000..88ea58c528 --- /dev/null +++ b/vendor/github.com/google/wire/go.sum @@ -0,0 +1,12 @@ +github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ= +github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/google/subcommands v1.0.1 h1:/eqq+otEXm5vhfBrbREPCSVQbvofip6kIz+mX5TUH7k= +github.com/google/subcommands v1.0.1/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/tools v0.0.0-20190422233926-fe54fb35175b h1:NVD8gBK33xpdqCaZVVtd6OFJp+3dxkXuz7+U7KaVN6s= +golang.org/x/tools v0.0.0-20190422233926-fe54fb35175b/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= diff --git a/vendor/github.com/google/wire/internal/wire/analyze.go b/vendor/github.com/google/wire/internal/wire/analyze.go new file mode 100644 index 0000000000..9650ef1672 --- /dev/null +++ b/vendor/github.com/google/wire/internal/wire/analyze.go @@ -0,0 +1,521 @@ +// Copyright 
2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package wire + +import ( + "errors" + "fmt" + "go/ast" + "go/token" + "go/types" + "sort" + "strings" + + "golang.org/x/tools/go/types/typeutil" +) + +type callKind int + +const ( + funcProviderCall callKind = iota + structProvider + valueExpr + selectorExpr +) + +// A call represents a step of an injector function. It may be either a +// function call or a composite struct literal, depending on the value +// of kind. +type call struct { + // kind indicates the code pattern to use. + kind callKind + + // out is the type this step produces. + out types.Type + + // pkg and name identify one of the following: + // 1) the provider to call for kind == funcProviderCall; + // 2) the type to construct for kind == structProvider; + // 3) the name to select for kind == selectorExpr. + pkg *types.Package + name string + + // args is a list of arguments to call the provider with. Each element is: + // a) one of the givens (args[i] < len(given)), + // b) the result of a previous provider call (args[i] >= len(given)) + // + // This will be nil for kind == valueExpr. + // + // If kind == selectorExpr, then the length of this slice will be 1 and the + // "argument" will be the value to access fields from. + args []int + + // varargs is true if the provider function is variadic. + varargs bool + + // fieldNames maps the arguments to struct field names. + // This will only be set if kind == structProvider. + fieldNames []string + + // ins is the list of types this call receives as arguments. + // This will be nil for kind == valueExpr. + ins []types.Type + + // The following are only set for kind == funcProviderCall: + + // hasCleanup is true if the provider call returns a cleanup function. + hasCleanup bool + // hasErr is true if the provider call returns an error. + hasErr bool + + // The following are only set for kind == valueExpr: + + valueExpr ast.Expr + valueTypeInfo *types.Info + + // The following are only set for kind == selectorExpr: + + ptrToField bool +} + +// solve finds the sequence of calls required to produce an output type +// with an optional set of provided inputs. +func solve(fset *token.FileSet, out types.Type, given *types.Tuple, set *ProviderSet) ([]call, []error) { + ec := new(errorCollector) + + // Start building the mapping of type to local variable of the given type. + // The first len(given) local variables are the given types. + index := new(typeutil.Map) + for i := 0; i < given.Len(); i++ { + index.Set(given.At(i).Type(), i) + } + + // Topological sort of the directed graph defined by the providers + // using a depth-first search using a stack. Provider set graphs are + // guaranteed to be acyclic. An index value of errAbort indicates that + // the type was visited, but failed due to an error added to ec. 
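+ // (Sets built by processNewSet have already passed verifyAcyclic, so the
+ // graph walked here cannot contain cycles.)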
+ errAbort := errors.New("failed to visit") + var used []*providerSetSrc + var calls []call + type frame struct { + t types.Type + from types.Type + up *frame + } + stk := []frame{{t: out}} +dfs: + for len(stk) > 0 { + curr := stk[len(stk)-1] + stk = stk[:len(stk)-1] + if index.At(curr.t) != nil { + continue + } + + pv := set.For(curr.t) + if pv.IsNil() { + if curr.from == nil { + ec.add(fmt.Errorf("no provider found for %s, output of injector", types.TypeString(curr.t, nil))) + index.Set(curr.t, errAbort) + continue + } + sb := new(strings.Builder) + fmt.Fprintf(sb, "no provider found for %s", types.TypeString(curr.t, nil)) + for f := curr.up; f != nil; f = f.up { + fmt.Fprintf(sb, "\nneeded by %s in %s", types.TypeString(f.t, nil), set.srcMap.At(f.t).(*providerSetSrc).description(fset, f.t)) + } + ec.add(errors.New(sb.String())) + index.Set(curr.t, errAbort) + continue + } + src := set.srcMap.At(curr.t).(*providerSetSrc) + used = append(used, src) + if concrete := pv.Type(); !types.Identical(concrete, curr.t) { + // Interface binding does not create a call. + i := index.At(concrete) + if i == nil { + stk = append(stk, curr, frame{t: concrete, from: curr.t, up: &curr}) + continue + } + index.Set(curr.t, i) + continue + } + + switch pv := set.For(curr.t); { + case pv.IsArg(): + // Continue, already added to stk. + case pv.IsProvider(): + p := pv.Provider() + // Ensure that all argument types have been visited. If not, push them + // on the stack in reverse order so that calls are added in argument + // order. + visitedArgs := true + for i := len(p.Args) - 1; i >= 0; i-- { + a := p.Args[i] + if index.At(a.Type) == nil { + if visitedArgs { + // Make sure to re-visit this type after visiting all arguments. + stk = append(stk, curr) + visitedArgs = false + } + stk = append(stk, frame{t: a.Type, from: curr.t, up: &curr}) + } + } + if !visitedArgs { + continue + } + args := make([]int, len(p.Args)) + ins := make([]types.Type, len(p.Args)) + for i := range p.Args { + ins[i] = p.Args[i].Type + v := index.At(p.Args[i].Type) + if v == errAbort { + index.Set(curr.t, errAbort) + continue dfs + } + args[i] = v.(int) + } + index.Set(curr.t, given.Len()+len(calls)) + kind := funcProviderCall + fieldNames := []string(nil) + if p.IsStruct { + kind = structProvider + for _, arg := range p.Args { + fieldNames = append(fieldNames, arg.FieldName) + } + } + calls = append(calls, call{ + kind: kind, + pkg: p.Pkg, + name: p.Name, + args: args, + varargs: p.Varargs, + fieldNames: fieldNames, + ins: ins, + out: curr.t, + hasCleanup: p.HasCleanup, + hasErr: p.HasErr, + }) + case pv.IsValue(): + v := pv.Value() + index.Set(curr.t, given.Len()+len(calls)) + calls = append(calls, call{ + kind: valueExpr, + out: curr.t, + valueExpr: v.expr, + valueTypeInfo: v.info, + }) + case pv.IsField(): + f := pv.Field() + if index.At(f.Parent) == nil { + // Fields have one dependency which is the parent struct. Make + // sure to visit it first if it is not already visited. + stk = append(stk, curr, frame{t: f.Parent, from: curr.t, up: &curr}) + continue + } + index.Set(curr.t, given.Len()+len(calls)) + v := index.At(f.Parent) + if v == errAbort { + index.Set(curr.t, errAbort) + continue dfs + } + // Use args[0] to store the position of the parent struct. + args := []int{v.(int)} + // If f.Out has 2 elements and curr.t is the 2nd one, then the call must + // provide a pointer to the field. 
+ ptrToField := len(f.Out) == 2 && types.Identical(curr.t, f.Out[1]) + calls = append(calls, call{ + kind: selectorExpr, + pkg: f.Pkg, + name: f.Name, + out: curr.t, + args: args, + ptrToField: ptrToField, + }) + default: + panic("unknown return value from ProviderSet.For") + } + } + if len(ec.errors) > 0 { + return nil, ec.errors + } + if errs := verifyArgsUsed(set, used); len(errs) > 0 { + return nil, errs + } + return calls, nil +} + +// verifyArgsUsed ensures that all of the arguments in set were used during solve. +func verifyArgsUsed(set *ProviderSet, used []*providerSetSrc) []error { + var errs []error + for _, imp := range set.Imports { + found := false + for _, u := range used { + if u.Import == imp { + found = true + break + } + } + if !found { + if imp.VarName == "" { + errs = append(errs, errors.New("unused provider set")) + } else { + errs = append(errs, fmt.Errorf("unused provider set %q", imp.VarName)) + } + } + } + for _, p := range set.Providers { + found := false + for _, u := range used { + if u.Provider == p { + found = true + break + } + } + if !found { + errs = append(errs, fmt.Errorf("unused provider %q", p.Pkg.Name()+"."+p.Name)) + } + } + for _, v := range set.Values { + found := false + for _, u := range used { + if u.Value == v { + found = true + break + } + } + if !found { + errs = append(errs, fmt.Errorf("unused value of type %s", types.TypeString(v.Out, nil))) + } + } + for _, b := range set.Bindings { + found := false + for _, u := range used { + if u.Binding == b { + found = true + break + } + } + if !found { + errs = append(errs, fmt.Errorf("unused interface binding to type %s", types.TypeString(b.Iface, nil))) + } + } + for _, f := range set.Fields { + found := false + for _, u := range used { + if u.Field == f { + found = true + break + } + } + if !found { + errs = append(errs, fmt.Errorf("unused field %q.%s", f.Parent, f.Name)) + } + } + return errs +} + +// buildProviderMap creates the providerMap and srcMap fields for a given +// provider set. The given provider set's providerMap and srcMap fields are +// ignored. +func buildProviderMap(fset *token.FileSet, hasher typeutil.Hasher, set *ProviderSet) (*typeutil.Map, *typeutil.Map, []error) { + providerMap := new(typeutil.Map) + providerMap.SetHasher(hasher) + srcMap := new(typeutil.Map) // to *providerSetSrc + srcMap.SetHasher(hasher) + + ec := new(errorCollector) + // Process injector arguments. + if set.InjectorArgs != nil { + givens := set.InjectorArgs.Tuple + for i := 0; i < givens.Len(); i++ { + typ := givens.At(i).Type() + arg := &InjectorArg{Args: set.InjectorArgs, Index: i} + src := &providerSetSrc{InjectorArg: arg} + if prevSrc := srcMap.At(typ); prevSrc != nil { + ec.add(bindingConflictError(fset, typ, set, src, prevSrc.(*providerSetSrc))) + continue + } + providerMap.Set(typ, &ProvidedType{t: typ, a: arg}) + srcMap.Set(typ, src) + } + } + // Process imports, verifying that there are no conflicts between sets. + for _, imp := range set.Imports { + src := &providerSetSrc{Import: imp} + imp.providerMap.Iterate(func(k types.Type, v interface{}) { + if prevSrc := srcMap.At(k); prevSrc != nil { + ec.add(bindingConflictError(fset, k, set, src, prevSrc.(*providerSetSrc))) + return + } + providerMap.Set(k, v) + srcMap.Set(k, src) + }) + } + if len(ec.errors) > 0 { + return nil, nil, ec.errors + } + + // Process non-binding providers in new set. 
+ for _, p := range set.Providers { + src := &providerSetSrc{Provider: p} + for _, typ := range p.Out { + if prevSrc := srcMap.At(typ); prevSrc != nil { + ec.add(bindingConflictError(fset, typ, set, src, prevSrc.(*providerSetSrc))) + continue + } + providerMap.Set(typ, &ProvidedType{t: typ, p: p}) + srcMap.Set(typ, src) + } + } + for _, v := range set.Values { + src := &providerSetSrc{Value: v} + if prevSrc := srcMap.At(v.Out); prevSrc != nil { + ec.add(bindingConflictError(fset, v.Out, set, src, prevSrc.(*providerSetSrc))) + continue + } + providerMap.Set(v.Out, &ProvidedType{t: v.Out, v: v}) + srcMap.Set(v.Out, src) + } + for _, f := range set.Fields { + src := &providerSetSrc{Field: f} + for _, typ := range f.Out { + if prevSrc := srcMap.At(typ); prevSrc != nil { + ec.add(bindingConflictError(fset, typ, set, src, prevSrc.(*providerSetSrc))) + continue + } + providerMap.Set(typ, &ProvidedType{t: typ, f: f}) + srcMap.Set(typ, src) + } + } + if len(ec.errors) > 0 { + return nil, nil, ec.errors + } + + // Process bindings in set. Must happen after the other providers to + // ensure the concrete type is being provided. + for _, b := range set.Bindings { + src := &providerSetSrc{Binding: b} + if prevSrc := srcMap.At(b.Iface); prevSrc != nil { + ec.add(bindingConflictError(fset, b.Iface, set, src, prevSrc.(*providerSetSrc))) + continue + } + concrete := providerMap.At(b.Provided) + if concrete == nil { + setName := set.VarName + if setName == "" { + setName = "provider set" + } + ec.add(notePosition(fset.Position(b.Pos), fmt.Errorf("wire.Bind of concrete type %q to interface %q, but %s does not include a provider for %q", b.Provided, b.Iface, setName, b.Provided))) + continue + } + providerMap.Set(b.Iface, concrete) + srcMap.Set(b.Iface, src) + } + if len(ec.errors) > 0 { + return nil, nil, ec.errors + } + return providerMap, srcMap, nil +} + +func verifyAcyclic(providerMap *typeutil.Map, hasher typeutil.Hasher) []error { + // We must visit every provider type inside provider map, but we don't + // have a well-defined starting point and there may be several + // distinct graphs. Thus, we start a depth-first search at every + // provider, but keep a shared record of visited providers to avoid + // duplicating work. + visited := new(typeutil.Map) // to bool + visited.SetHasher(hasher) + ec := new(errorCollector) + // Sort output types so that errors about cycles are consistent. + outputs := providerMap.Keys() + sort.Slice(outputs, func(i, j int) bool { return types.TypeString(outputs[i], nil) < types.TypeString(outputs[j], nil) }) + for _, root := range outputs { + // Depth-first search using a stack of trails through the provider map. + stk := [][]types.Type{{root}} + for len(stk) > 0 { + curr := stk[len(stk)-1] + stk = stk[:len(stk)-1] + head := curr[len(curr)-1] + if v, _ := visited.At(head).(bool); v { + continue + } + visited.Set(head, true) + x := providerMap.At(head) + if x == nil { + // Leaf: input. + continue + } + pt := x.(*ProvidedType) + switch { + case pt.IsValue(): + // Leaf: values do not have dependencies. + case pt.IsArg(): + // Injector arguments do not have dependencies. 
+ case pt.IsProvider() || pt.IsField(): + var args []types.Type + if pt.IsProvider() { + for _, arg := range pt.Provider().Args { + args = append(args, arg.Type) + } + } else { + args = append(args, pt.Field().Parent) + } + for _, a := range args { + hasCycle := false + for i, b := range curr { + if types.Identical(a, b) { + sb := new(strings.Builder) + fmt.Fprintf(sb, "cycle for %s:\n", types.TypeString(a, nil)) + for j := i; j < len(curr); j++ { + t := providerMap.At(curr[j]).(*ProvidedType) + if t.IsProvider() { + p := t.Provider() + fmt.Fprintf(sb, "%s (%s.%s) ->\n", types.TypeString(curr[j], nil), p.Pkg.Path(), p.Name) + } else { + p := t.Field() + fmt.Fprintf(sb, "%s (%s.%s) ->\n", types.TypeString(curr[j], nil), p.Parent, p.Name) + } + } + fmt.Fprintf(sb, "%s", types.TypeString(a, nil)) + ec.add(errors.New(sb.String())) + hasCycle = true + break + } + } + if !hasCycle { + next := append(append([]types.Type(nil), curr...), a) + stk = append(stk, next) + } + } + default: + panic("invalid provider map value") + } + } + } + return ec.errors +} + +// bindingConflictError creates a new error describing multiple bindings +// for the same output type. +func bindingConflictError(fset *token.FileSet, typ types.Type, set *ProviderSet, cur, prev *providerSetSrc) error { + sb := new(strings.Builder) + if set.VarName != "" { + fmt.Fprintf(sb, "%s has ", set.VarName) + } + fmt.Fprintf(sb, "multiple bindings for %s\n", types.TypeString(typ, nil)) + fmt.Fprintf(sb, "current:\n<- %s\n", strings.Join(cur.trace(fset, typ), "\n<- ")) + fmt.Fprintf(sb, "previous:\n<- %s", strings.Join(prev.trace(fset, typ), "\n<- ")) + return notePosition(fset.Position(set.Pos), errors.New(sb.String())) +} diff --git a/vendor/github.com/google/wire/internal/wire/copyast.go b/vendor/github.com/google/wire/internal/wire/copyast.go new file mode 100644 index 0000000000..179d1c6434 --- /dev/null +++ b/vendor/github.com/google/wire/internal/wire/copyast.go @@ -0,0 +1,493 @@ +// Copyright 2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package wire + +import ( + "fmt" + "go/ast" + + "golang.org/x/tools/go/ast/astutil" +) + +// copyAST performs a deep copy of an AST. *ast.Ident identity will be +// preserved. +// +// This allows using astutil.Apply to rewrite an AST without modifying +// the original AST. +func copyAST(original ast.Node) ast.Node { + // This function is necessarily long. No utility function exists to do this + // clone, as most any attempt would need to have customization options, which + // would need to be as expressive as Apply. A possibility to shorten the code + // here would be to use reflection, but that trades clarity for shorter code. + + m := make(map[ast.Node]ast.Node) + astutil.Apply(original, nil, func(c *astutil.Cursor) bool { + switch node := c.Node().(type) { + case nil: + // No-op. 
+ case *ast.ArrayType: + m[node] = &ast.ArrayType{ + Lbrack: node.Lbrack, + Len: exprFromMap(m, node.Len), + Elt: exprFromMap(m, node.Elt), + } + case *ast.AssignStmt: + m[node] = &ast.AssignStmt{ + Lhs: copyExprList(m, node.Lhs), + TokPos: node.TokPos, + Tok: node.Tok, + Rhs: copyExprList(m, node.Rhs), + } + case *ast.BadDecl: + m[node] = &ast.BadDecl{ + From: node.From, + To: node.To, + } + case *ast.BadExpr: + m[node] = &ast.BadExpr{ + From: node.From, + To: node.To, + } + case *ast.BadStmt: + m[node] = &ast.BadStmt{ + From: node.From, + To: node.To, + } + case *ast.BasicLit: + m[node] = &ast.BasicLit{ + ValuePos: node.ValuePos, + Kind: node.Kind, + Value: node.Value, + } + case *ast.BinaryExpr: + m[node] = &ast.BinaryExpr{ + X: exprFromMap(m, node.X), + OpPos: node.OpPos, + Op: node.Op, + Y: exprFromMap(m, node.Y), + } + case *ast.BlockStmt: + m[node] = &ast.BlockStmt{ + Lbrace: node.Lbrace, + List: copyStmtList(m, node.List), + Rbrace: node.Rbrace, + } + case *ast.BranchStmt: + m[node] = &ast.BranchStmt{ + TokPos: node.TokPos, + Tok: node.Tok, + Label: identFromMap(m, node.Label), + } + case *ast.CallExpr: + m[node] = &ast.CallExpr{ + Fun: exprFromMap(m, node.Fun), + Lparen: node.Lparen, + Args: copyExprList(m, node.Args), + Ellipsis: node.Ellipsis, + Rparen: node.Rparen, + } + case *ast.CaseClause: + m[node] = &ast.CaseClause{ + Case: node.Case, + List: copyExprList(m, node.List), + Colon: node.Colon, + Body: copyStmtList(m, node.Body), + } + case *ast.ChanType: + m[node] = &ast.ChanType{ + Begin: node.Begin, + Arrow: node.Arrow, + Dir: node.Dir, + Value: exprFromMap(m, node.Value), + } + case *ast.CommClause: + m[node] = &ast.CommClause{ + Case: node.Case, + Comm: stmtFromMap(m, node.Comm), + Colon: node.Colon, + Body: copyStmtList(m, node.Body), + } + case *ast.Comment: + m[node] = &ast.Comment{ + Slash: node.Slash, + Text: node.Text, + } + case *ast.CommentGroup: + cg := new(ast.CommentGroup) + if node.List != nil { + cg.List = make([]*ast.Comment, len(node.List)) + for i := range node.List { + cg.List[i] = m[node.List[i]].(*ast.Comment) + } + } + m[node] = cg + case *ast.CompositeLit: + m[node] = &ast.CompositeLit{ + Type: exprFromMap(m, node.Type), + Lbrace: node.Lbrace, + Elts: copyExprList(m, node.Elts), + Rbrace: node.Rbrace, + } + case *ast.DeclStmt: + m[node] = &ast.DeclStmt{ + Decl: m[node.Decl].(ast.Decl), + } + case *ast.DeferStmt: + m[node] = &ast.DeferStmt{ + Defer: node.Defer, + Call: callExprFromMap(m, node.Call), + } + case *ast.Ellipsis: + m[node] = &ast.Ellipsis{ + Ellipsis: node.Ellipsis, + Elt: exprFromMap(m, node.Elt), + } + case *ast.EmptyStmt: + m[node] = &ast.EmptyStmt{ + Semicolon: node.Semicolon, + Implicit: node.Implicit, + } + case *ast.ExprStmt: + m[node] = &ast.ExprStmt{ + X: exprFromMap(m, node.X), + } + case *ast.Field: + m[node] = &ast.Field{ + Doc: commentGroupFromMap(m, node.Doc), + Names: copyIdentList(m, node.Names), + Type: exprFromMap(m, node.Type), + Tag: basicLitFromMap(m, node.Tag), + Comment: commentGroupFromMap(m, node.Comment), + } + case *ast.FieldList: + fl := &ast.FieldList{ + Opening: node.Opening, + Closing: node.Closing, + } + if node.List != nil { + fl.List = make([]*ast.Field, len(node.List)) + for i := range node.List { + fl.List[i] = m[node.List[i]].(*ast.Field) + } + } + m[node] = fl + case *ast.ForStmt: + m[node] = &ast.ForStmt{ + For: node.For, + Init: stmtFromMap(m, node.Init), + Cond: exprFromMap(m, node.Cond), + Post: stmtFromMap(m, node.Post), + Body: blockStmtFromMap(m, node.Body), + } + case *ast.FuncDecl: + m[node] = 
&ast.FuncDecl{ + Doc: commentGroupFromMap(m, node.Doc), + Recv: fieldListFromMap(m, node.Recv), + Name: identFromMap(m, node.Name), + Type: funcTypeFromMap(m, node.Type), + Body: blockStmtFromMap(m, node.Body), + } + case *ast.FuncLit: + m[node] = &ast.FuncLit{ + Type: funcTypeFromMap(m, node.Type), + Body: blockStmtFromMap(m, node.Body), + } + case *ast.FuncType: + m[node] = &ast.FuncType{ + Func: node.Func, + Params: fieldListFromMap(m, node.Params), + Results: fieldListFromMap(m, node.Results), + } + case *ast.GenDecl: + decl := &ast.GenDecl{ + Doc: commentGroupFromMap(m, node.Doc), + TokPos: node.TokPos, + Tok: node.Tok, + Lparen: node.Lparen, + Rparen: node.Rparen, + } + if node.Specs != nil { + decl.Specs = make([]ast.Spec, len(node.Specs)) + for i := range node.Specs { + decl.Specs[i] = m[node.Specs[i]].(ast.Spec) + } + } + m[node] = decl + case *ast.GoStmt: + m[node] = &ast.GoStmt{ + Go: node.Go, + Call: callExprFromMap(m, node.Call), + } + case *ast.Ident: + // Keep identifiers the same identity so they can be conveniently + // used with the original *types.Info. + m[node] = node + case *ast.IfStmt: + m[node] = &ast.IfStmt{ + If: node.If, + Init: stmtFromMap(m, node.Init), + Cond: exprFromMap(m, node.Cond), + Body: blockStmtFromMap(m, node.Body), + Else: stmtFromMap(m, node.Else), + } + case *ast.ImportSpec: + m[node] = &ast.ImportSpec{ + Doc: commentGroupFromMap(m, node.Doc), + Name: identFromMap(m, node.Name), + Path: basicLitFromMap(m, node.Path), + Comment: commentGroupFromMap(m, node.Comment), + EndPos: node.EndPos, + } + case *ast.IncDecStmt: + m[node] = &ast.IncDecStmt{ + X: exprFromMap(m, node.X), + TokPos: node.TokPos, + Tok: node.Tok, + } + case *ast.IndexExpr: + m[node] = &ast.IndexExpr{ + X: exprFromMap(m, node.X), + Lbrack: node.Lbrack, + Index: exprFromMap(m, node.Index), + Rbrack: node.Rbrack, + } + case *ast.InterfaceType: + m[node] = &ast.InterfaceType{ + Interface: node.Interface, + Methods: fieldListFromMap(m, node.Methods), + Incomplete: node.Incomplete, + } + case *ast.KeyValueExpr: + m[node] = &ast.KeyValueExpr{ + Key: exprFromMap(m, node.Key), + Colon: node.Colon, + Value: exprFromMap(m, node.Value), + } + case *ast.LabeledStmt: + m[node] = &ast.LabeledStmt{ + Label: identFromMap(m, node.Label), + Colon: node.Colon, + Stmt: stmtFromMap(m, node.Stmt), + } + case *ast.MapType: + m[node] = &ast.MapType{ + Map: node.Map, + Key: exprFromMap(m, node.Key), + Value: exprFromMap(m, node.Value), + } + case *ast.ParenExpr: + m[node] = &ast.ParenExpr{ + Lparen: node.Lparen, + X: exprFromMap(m, node.X), + Rparen: node.Rparen, + } + case *ast.RangeStmt: + m[node] = &ast.RangeStmt{ + For: node.For, + Key: exprFromMap(m, node.Key), + Value: exprFromMap(m, node.Value), + TokPos: node.TokPos, + Tok: node.Tok, + X: exprFromMap(m, node.X), + Body: blockStmtFromMap(m, node.Body), + } + case *ast.ReturnStmt: + m[node] = &ast.ReturnStmt{ + Return: node.Return, + Results: copyExprList(m, node.Results), + } + case *ast.SelectStmt: + m[node] = &ast.SelectStmt{ + Select: node.Select, + Body: blockStmtFromMap(m, node.Body), + } + case *ast.SelectorExpr: + m[node] = &ast.SelectorExpr{ + X: exprFromMap(m, node.X), + Sel: identFromMap(m, node.Sel), + } + case *ast.SendStmt: + m[node] = &ast.SendStmt{ + Chan: exprFromMap(m, node.Chan), + Arrow: node.Arrow, + Value: exprFromMap(m, node.Value), + } + case *ast.SliceExpr: + m[node] = &ast.SliceExpr{ + X: exprFromMap(m, node.X), + Lbrack: node.Lbrack, + Low: exprFromMap(m, node.Low), + High: exprFromMap(m, node.High), + Max: exprFromMap(m, 
node.Max), + Slice3: node.Slice3, + Rbrack: node.Rbrack, + } + case *ast.StarExpr: + m[node] = &ast.StarExpr{ + Star: node.Star, + X: exprFromMap(m, node.X), + } + case *ast.StructType: + m[node] = &ast.StructType{ + Struct: node.Struct, + Fields: fieldListFromMap(m, node.Fields), + Incomplete: node.Incomplete, + } + case *ast.SwitchStmt: + m[node] = &ast.SwitchStmt{ + Switch: node.Switch, + Init: stmtFromMap(m, node.Init), + Tag: exprFromMap(m, node.Tag), + Body: blockStmtFromMap(m, node.Body), + } + case *ast.TypeAssertExpr: + m[node] = &ast.TypeAssertExpr{ + X: exprFromMap(m, node.X), + Lparen: node.Lparen, + Type: exprFromMap(m, node.Type), + Rparen: node.Rparen, + } + case *ast.TypeSpec: + m[node] = &ast.TypeSpec{ + Doc: commentGroupFromMap(m, node.Doc), + Name: identFromMap(m, node.Name), + Assign: node.Assign, + Type: exprFromMap(m, node.Type), + Comment: commentGroupFromMap(m, node.Comment), + } + case *ast.TypeSwitchStmt: + m[node] = &ast.TypeSwitchStmt{ + Switch: node.Switch, + Init: stmtFromMap(m, node.Init), + Assign: stmtFromMap(m, node.Assign), + Body: blockStmtFromMap(m, node.Body), + } + case *ast.UnaryExpr: + m[node] = &ast.UnaryExpr{ + OpPos: node.OpPos, + Op: node.Op, + X: exprFromMap(m, node.X), + } + case *ast.ValueSpec: + m[node] = &ast.ValueSpec{ + Doc: commentGroupFromMap(m, node.Doc), + Names: copyIdentList(m, node.Names), + Type: exprFromMap(m, node.Type), + Values: copyExprList(m, node.Values), + Comment: commentGroupFromMap(m, node.Comment), + } + default: + panic(fmt.Sprintf("unhandled AST node: %T", node)) + } + return true + }) + return m[original] +} + +func commentGroupFromMap(m map[ast.Node]ast.Node, key *ast.CommentGroup) *ast.CommentGroup { + if key == nil { + return nil + } + return m[key].(*ast.CommentGroup) +} + +func exprFromMap(m map[ast.Node]ast.Node, key ast.Expr) ast.Expr { + if key == nil { + return nil + } + return m[key].(ast.Expr) +} + +func stmtFromMap(m map[ast.Node]ast.Node, key ast.Stmt) ast.Stmt { + if key == nil { + return nil + } + return m[key].(ast.Stmt) +} + +func identFromMap(m map[ast.Node]ast.Node, key *ast.Ident) *ast.Ident { + if key == nil { + return nil + } + return m[key].(*ast.Ident) +} + +func blockStmtFromMap(m map[ast.Node]ast.Node, key *ast.BlockStmt) *ast.BlockStmt { + if key == nil { + return nil + } + return m[key].(*ast.BlockStmt) +} + +func fieldListFromMap(m map[ast.Node]ast.Node, key *ast.FieldList) *ast.FieldList { + if key == nil { + return nil + } + return m[key].(*ast.FieldList) +} + +func callExprFromMap(m map[ast.Node]ast.Node, key *ast.CallExpr) *ast.CallExpr { + if key == nil { + return nil + } + return m[key].(*ast.CallExpr) +} + +func basicLitFromMap(m map[ast.Node]ast.Node, key *ast.BasicLit) *ast.BasicLit { + if key == nil { + return nil + } + return m[key].(*ast.BasicLit) +} + +func funcTypeFromMap(m map[ast.Node]ast.Node, key *ast.FuncType) *ast.FuncType { + if key == nil { + return nil + } + return m[key].(*ast.FuncType) +} + +func copyExprList(m map[ast.Node]ast.Node, exprs []ast.Expr) []ast.Expr { + if exprs == nil { + return nil + } + newExprs := make([]ast.Expr, len(exprs)) + for i := range exprs { + newExprs[i] = m[exprs[i]].(ast.Expr) + } + return newExprs +} + +func copyStmtList(m map[ast.Node]ast.Node, stmts []ast.Stmt) []ast.Stmt { + if stmts == nil { + return nil + } + newStmts := make([]ast.Stmt, len(stmts)) + for i := range stmts { + newStmts[i] = m[stmts[i]].(ast.Stmt) + } + return newStmts +} + +func copyIdentList(m map[ast.Node]ast.Node, idents []*ast.Ident) []*ast.Ident { + if 
idents == nil { + return nil + } + newIdents := make([]*ast.Ident, len(idents)) + for i := range idents { + newIdents[i] = m[idents[i]].(*ast.Ident) + } + return newIdents +} diff --git a/vendor/github.com/google/wire/internal/wire/errors.go b/vendor/github.com/google/wire/internal/wire/errors.go new file mode 100644 index 0000000000..73c7d1f50e --- /dev/null +++ b/vendor/github.com/google/wire/internal/wire/errors.go @@ -0,0 +1,84 @@ +// Copyright 2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package wire + +import ( + "go/token" +) + +// errorCollector manages a list of errors. The zero value is an empty list. +type errorCollector struct { + errors []error +} + +// add appends any non-nil errors to the collector. +func (ec *errorCollector) add(errs ...error) { + for _, e := range errs { + if e != nil { + ec.errors = append(ec.errors, e) + } + } +} + +// mapErrors returns a new slice that wraps any errors using the given function. +func mapErrors(errs []error, f func(error) error) []error { + if len(errs) == 0 { + return nil + } + newErrs := make([]error, len(errs)) + for i := range errs { + newErrs[i] = f(errs[i]) + } + return newErrs +} + +// A wireErr is an error with an optional position. +type wireErr struct { + error error + position token.Position +} + +// notePosition wraps an error with position information if it doesn't already +// have it. +// +// notePosition is usually called multiple times as an error goes up the call +// stack, so calling notePosition on an existing *wireErr will not modify the +// position, as the assumption is that deeper calls have more precise position +// information about the source of the error. +func notePosition(p token.Position, e error) error { + switch e.(type) { + case nil: + return nil + case *wireErr: + return e + default: + return &wireErr{error: e, position: p} + } +} + +// notePositionAll wraps a list of errors with the given position. +func notePositionAll(p token.Position, errs []error) []error { + return mapErrors(errs, func(e error) error { + return notePosition(p, e) + }) +} + +// Error returns the error message prefixed by the position if valid. +func (w *wireErr) Error() string { + if !w.position.IsValid() { + return w.error.Error() + } + return w.position.String() + ": " + w.error.Error() +} diff --git a/vendor/github.com/google/wire/internal/wire/parse.go b/vendor/github.com/google/wire/internal/wire/parse.go new file mode 100644 index 0000000000..d72e171c9b --- /dev/null +++ b/vendor/github.com/google/wire/internal/wire/parse.go @@ -0,0 +1,1237 @@ +// Copyright 2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package wire + +import ( + "context" + "errors" + "fmt" + "go/ast" + "go/token" + "go/types" + "os" + "reflect" + "strconv" + "strings" + + "golang.org/x/tools/go/ast/astutil" + "golang.org/x/tools/go/packages" + "golang.org/x/tools/go/types/typeutil" +) + +// A providerSetSrc captures the source for a type provided by a ProviderSet. +// Exactly one of the fields will be set. +type providerSetSrc struct { + Provider *Provider + Binding *IfaceBinding + Value *Value + Import *ProviderSet + InjectorArg *InjectorArg + Field *Field +} + +// description returns a string describing the source of p, including line numbers. +func (p *providerSetSrc) description(fset *token.FileSet, typ types.Type) string { + quoted := func(s string) string { + if s == "" { + return "" + } + return fmt.Sprintf("%q ", s) + } + switch { + case p.Provider != nil: + kind := "provider" + if p.Provider.IsStruct { + kind = "struct provider" + } + return fmt.Sprintf("%s %s(%s)", kind, quoted(p.Provider.Name), fset.Position(p.Provider.Pos)) + case p.Binding != nil: + return fmt.Sprintf("wire.Bind (%s)", fset.Position(p.Binding.Pos)) + case p.Value != nil: + return fmt.Sprintf("wire.Value (%s)", fset.Position(p.Value.Pos)) + case p.Import != nil: + return fmt.Sprintf("provider set %s(%s)", quoted(p.Import.VarName), fset.Position(p.Import.Pos)) + case p.InjectorArg != nil: + args := p.InjectorArg.Args + return fmt.Sprintf("argument %s to injector function %s (%s)", args.Tuple.At(p.InjectorArg.Index).Name(), args.Name, fset.Position(args.Pos)) + case p.Field != nil: + return fmt.Sprintf("wire.FieldsOf (%s)", fset.Position(p.Field.Pos)) + } + panic("providerSetSrc with no fields set") +} + +// trace returns a slice of strings describing the (possibly recursive) source +// of p, including line numbers. +func (p *providerSetSrc) trace(fset *token.FileSet, typ types.Type) []string { + var retval []string + // Only Imports need recursion. + if p.Import != nil { + if parent := p.Import.srcMap.At(typ); parent != nil { + retval = append(retval, parent.(*providerSetSrc).trace(fset, typ)...) + } + } + retval = append(retval, p.description(fset, typ)) + return retval +} + +// A ProviderSet describes a set of providers. The zero value is an empty +// ProviderSet. +type ProviderSet struct { + // Pos is the position of the call to wire.NewSet or wire.Build that + // created the set. + Pos token.Pos + // PkgPath is the import path of the package that declared this set. + PkgPath string + // VarName is the variable name of the set, if it came from a package + // variable. + VarName string + + Providers []*Provider + Bindings []*IfaceBinding + Values []*Value + Fields []*Field + Imports []*ProviderSet + // InjectorArgs is only filled in for wire.Build. + InjectorArgs *InjectorArgs + + // providerMap maps from provided type to a *ProvidedType. + // It includes all of the imported types. + providerMap *typeutil.Map + + // srcMap maps from provided type to a *providerSetSrc capturing the + // Provider, Binding, Value, or Import that provided the type. 
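+ // Injector arguments and fields from wire.FieldsOf are recorded here as well.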
+ srcMap *typeutil.Map +} + +// Outputs returns a new slice containing the set of possible types the +// provider set can produce. The order is unspecified. +func (set *ProviderSet) Outputs() []types.Type { + return set.providerMap.Keys() +} + +// For returns a ProvidedType for the given type, or the zero ProvidedType. +func (set *ProviderSet) For(t types.Type) ProvidedType { + pt := set.providerMap.At(t) + if pt == nil { + return ProvidedType{} + } + return *pt.(*ProvidedType) +} + +// An IfaceBinding declares that a type should be used to satisfy inputs +// of the given interface type. +type IfaceBinding struct { + // Iface is the interface type, which is what can be injected. + Iface types.Type + + // Provided is always a type that is assignable to Iface. + Provided types.Type + + // Pos is the position where the binding was declared. + Pos token.Pos +} + +// Provider records the signature of a provider. A provider is a +// single Go object, either a function or a named struct type. +type Provider struct { + // Pkg is the package that the Go object resides in. + Pkg *types.Package + + // Name is the name of the Go object. + Name string + + // Pos is the source position of the func keyword or type spec + // defining this provider. + Pos token.Pos + + // Args is the list of data dependencies this provider has. + Args []ProviderInput + + // Varargs is true if the provider function is variadic. + Varargs bool + + // IsStruct is true if this provider is a named struct type. + // Otherwise it's a function. + IsStruct bool + + // Out is the set of types this provider produces. It will always + // contain at least one type. + Out []types.Type + + // HasCleanup reports whether the provider function returns a cleanup + // function. (Always false for structs.) + HasCleanup bool + + // HasErr reports whether the provider function can return an error. + // (Always false for structs.) + HasErr bool +} + +// ProviderInput describes an incoming edge in the provider graph. +type ProviderInput struct { + Type types.Type + + // If the provider is a struct, FieldName will be the field name to set. + FieldName string +} + +// Value describes a value expression. +type Value struct { + // Pos is the source position of the expression defining this value. + Pos token.Pos + + // Out is the type this value produces. + Out types.Type + + // expr is the expression passed to wire.Value. + expr ast.Expr + + // info is the type info for the expression. + info *types.Info +} + +// InjectorArg describes a specific argument passed to an injector function. +type InjectorArg struct { + // Args is the full set of arguments. + Args *InjectorArgs + // Index is the index into Args.Tuple for this argument. + Index int +} + +// InjectorArgs describes the arguments passed to an injector function. +type InjectorArgs struct { + // Name is the name of the injector function. + Name string + // Tuple represents the arguments. + Tuple *types.Tuple + // Pos is the source position of the injector function. + Pos token.Pos +} + +// Field describes a specific field selected from a struct. +type Field struct { + // Parent is the struct or pointer to the struct that the field belongs to. + Parent types.Type + // Name is the field name. + Name string + // Pkg is the package that the struct resides in. + Pkg *types.Package + // Pos is the source position of the field declaration. + // defining these fields. + Pos token.Pos + // Out is the field's provided types. The first element provides the + // field type. 
If the field is coming from a pointer to a struct, + // there will be a second element providing a pointer to the field. + Out []types.Type +} + +// Load finds all the provider sets in the packages that match the given +// patterns, as well as the provider sets' transitive dependencies. It +// may return both errors and Info. The patterns are defined by the +// underlying build system. For the go tool, this is described at +// https://golang.org/cmd/go/#hdr-Package_lists_and_patterns +// +// wd is the working directory and env is the set of environment +// variables to use when loading the packages specified by patterns. If +// env is nil or empty, it is interpreted as an empty set of variables. +// In case of duplicate environment variables, the last one in the list +// takes precedence. +func Load(ctx context.Context, wd string, env []string, patterns []string) (*Info, []error) { + pkgs, errs := load(ctx, wd, env, patterns) + if len(errs) > 0 { + return nil, errs + } + if len(pkgs) == 0 { + return new(Info), nil + } + fset := pkgs[0].Fset + info := &Info{ + Fset: fset, + Sets: make(map[ProviderSetID]*ProviderSet), + } + oc := newObjectCache(pkgs) + ec := new(errorCollector) + for _, pkg := range pkgs { + if isWireImport(pkg.PkgPath) { + // The marker function package confuses analysis. + continue + } + scope := pkg.Types.Scope() + for _, name := range scope.Names() { + obj := scope.Lookup(name) + if !isProviderSetType(obj.Type()) { + continue + } + item, errs := oc.get(obj) + if len(errs) > 0 { + ec.add(notePositionAll(fset.Position(obj.Pos()), errs)...) + continue + } + pset := item.(*ProviderSet) + // pset.Name may not equal name, since it could be an alias to + // another provider set. + id := ProviderSetID{ImportPath: pset.PkgPath, VarName: name} + info.Sets[id] = pset + } + for _, f := range pkg.Syntax { + for _, decl := range f.Decls { + fn, ok := decl.(*ast.FuncDecl) + if !ok { + continue + } + buildCall, err := findInjectorBuild(pkg.TypesInfo, fn) + if err != nil { + ec.add(notePosition(fset.Position(fn.Pos()), fmt.Errorf("inject %s: %v", fn.Name.Name, err))) + continue + } + if buildCall == nil { + continue + } + sig := pkg.TypesInfo.ObjectOf(fn.Name).Type().(*types.Signature) + ins, out, err := injectorFuncSignature(sig) + if err != nil { + if w, ok := err.(*wireErr); ok { + ec.add(notePosition(w.position, fmt.Errorf("inject %s: %v", fn.Name.Name, w.error))) + } else { + ec.add(notePosition(fset.Position(fn.Pos()), fmt.Errorf("inject %s: %v", fn.Name.Name, err))) + } + continue + } + injectorArgs := &InjectorArgs{ + Name: fn.Name.Name, + Tuple: ins, + Pos: fn.Pos(), + } + set, errs := oc.processNewSet(pkg.TypesInfo, pkg.PkgPath, buildCall, injectorArgs, "") + if len(errs) > 0 { + ec.add(notePositionAll(fset.Position(fn.Pos()), errs)...) + continue + } + _, errs = solve(fset, out.out, ins, set) + if len(errs) > 0 { + ec.add(mapErrors(errs, func(e error) error { + if w, ok := e.(*wireErr); ok { + return notePosition(w.position, fmt.Errorf("inject %s: %v", fn.Name.Name, w.error)) + } + return notePosition(fset.Position(fn.Pos()), fmt.Errorf("inject %s: %v", fn.Name.Name, e)) + })...) + continue + } + info.Injectors = append(info.Injectors, &Injector{ + ImportPath: pkg.PkgPath, + FuncName: fn.Name.Name, + }) + } + } + } + return info, ec.errors +} + +// load typechecks the packages that match the given patterns and +// includes source for all transitive dependencies. The patterns are +// defined by the underlying build system. 
For the go tool, this is +// described at https://golang.org/cmd/go/#hdr-Package_lists_and_patterns +// +// wd is the working directory and env is the set of environment +// variables to use when loading the packages specified by patterns. If +// env is nil or empty, it is interpreted as an empty set of variables. +// In case of duplicate environment variables, the last one in the list +// takes precedence. +func load(ctx context.Context, wd string, env []string, patterns []string) ([]*packages.Package, []error) { + cfg := &packages.Config{ + Context: ctx, + Mode: packages.LoadAllSyntax, + Dir: wd, + Env: env, + BuildFlags: []string{"-tags=wireinject"}, + // TODO(light): Use ParseFile to skip function bodies and comments in indirect packages. + } + escaped := make([]string, len(patterns)) + for i := range patterns { + escaped[i] = "pattern=" + patterns[i] + } + pkgs, err := packages.Load(cfg, escaped...) + if err != nil { + return nil, []error{err} + } + var errs []error + for _, p := range pkgs { + for _, e := range p.Errors { + errs = append(errs, e) + } + } + if len(errs) > 0 { + return nil, errs + } + return pkgs, nil +} + +// Info holds the result of Load. +type Info struct { + Fset *token.FileSet + + // Sets contains all the provider sets in the initial packages. + Sets map[ProviderSetID]*ProviderSet + + // Injectors contains all the injector functions in the initial packages. + // The order is undefined. + Injectors []*Injector +} + +// A ProviderSetID identifies a named provider set. +type ProviderSetID struct { + ImportPath string + VarName string +} + +// String returns the ID as ""path/to/pkg".Foo". +func (id ProviderSetID) String() string { + return strconv.Quote(id.ImportPath) + "." + id.VarName +} + +// An Injector describes an injector function. +type Injector struct { + ImportPath string + FuncName string +} + +// String returns the injector name as ""path/to/pkg".Foo". +func (in *Injector) String() string { + return strconv.Quote(in.ImportPath) + "." + in.FuncName +} + +// objectCache is a lazily evaluated mapping of objects to Wire structures. +type objectCache struct { + fset *token.FileSet + packages map[string]*packages.Package + objects map[objRef]objCacheEntry + hasher typeutil.Hasher +} + +type objRef struct { + importPath string + name string +} + +type objCacheEntry struct { + val interface{} // *Provider, *ProviderSet, *IfaceBinding, or *Value + errs []error +} + +func newObjectCache(pkgs []*packages.Package) *objectCache { + if len(pkgs) == 0 { + panic("object cache must have packages to draw from") + } + oc := &objectCache{ + fset: pkgs[0].Fset, + packages: make(map[string]*packages.Package), + objects: make(map[objRef]objCacheEntry), + hasher: typeutil.MakeHasher(), + } + // Depth-first search of all dependencies to gather import path to + // packages.Package mapping. go/packages guarantees that for a single + // call to packages.Load and an import path X, there will exist only + // one *packages.Package value with PkgPath X. + stk := append([]*packages.Package(nil), pkgs...) + for len(stk) > 0 { + p := stk[len(stk)-1] + stk = stk[:len(stk)-1] + if oc.packages[p.PkgPath] != nil { + continue + } + oc.packages[p.PkgPath] = p + for _, imp := range p.Imports { + stk = append(stk, imp) + } + } + return oc +} + +// get converts a Go object into a Wire structure. It may return a *Provider, an +// *IfaceBinding, a *ProviderSet, a *Value, or a []*Field. 
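+//
+// Results are memoized in oc.objects, so each object is converted at most once.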
+func (oc *objectCache) get(obj types.Object) (val interface{}, errs []error) { + ref := objRef{ + importPath: obj.Pkg().Path(), + name: obj.Name(), + } + if ent, cached := oc.objects[ref]; cached { + return ent.val, append([]error(nil), ent.errs...) + } + defer func() { + oc.objects[ref] = objCacheEntry{ + val: val, + errs: append([]error(nil), errs...), + } + }() + switch obj := obj.(type) { + case *types.Var: + spec := oc.varDecl(obj) + if spec == nil || len(spec.Values) == 0 { + return nil, []error{fmt.Errorf("%v is not a provider or a provider set", obj)} + } + var i int + for i = range spec.Names { + if spec.Names[i].Name == obj.Name() { + break + } + } + pkgPath := obj.Pkg().Path() + return oc.processExpr(oc.packages[pkgPath].TypesInfo, pkgPath, spec.Values[i], obj.Name()) + case *types.Func: + return processFuncProvider(oc.fset, obj) + default: + return nil, []error{fmt.Errorf("%v is not a provider or a provider set", obj)} + } +} + +// varDecl finds the declaration that defines the given variable. +func (oc *objectCache) varDecl(obj *types.Var) *ast.ValueSpec { + // TODO(light): Walk files to build object -> declaration mapping, if more performant. + // Recommended by https://golang.org/s/types-tutorial + pkg := oc.packages[obj.Pkg().Path()] + pos := obj.Pos() + for _, f := range pkg.Syntax { + tokenFile := oc.fset.File(f.Pos()) + if base := tokenFile.Base(); base <= int(pos) && int(pos) < base+tokenFile.Size() { + path, _ := astutil.PathEnclosingInterval(f, pos, pos) + for _, node := range path { + if spec, ok := node.(*ast.ValueSpec); ok { + return spec + } + } + } + } + return nil +} + +// processExpr converts an expression into a Wire structure. It may return a +// *Provider, an *IfaceBinding, a *ProviderSet, a *Value or a []*Field. +func (oc *objectCache) processExpr(info *types.Info, pkgPath string, expr ast.Expr, varName string) (interface{}, []error) { + exprPos := oc.fset.Position(expr.Pos()) + expr = astutil.Unparen(expr) + if obj := qualifiedIdentObject(info, expr); obj != nil { + item, errs := oc.get(obj) + return item, mapErrors(errs, func(err error) error { + return notePosition(exprPos, err) + }) + } + if call, ok := expr.(*ast.CallExpr); ok { + fnObj := qualifiedIdentObject(info, call.Fun) + if fnObj == nil || !isWireImport(fnObj.Pkg().Path()) { + return nil, []error{notePosition(exprPos, errors.New("unknown pattern"))} + } + switch fnObj.Name() { + case "NewSet": + pset, errs := oc.processNewSet(info, pkgPath, call, nil, varName) + return pset, notePositionAll(exprPos, errs) + case "Bind": + b, err := processBind(oc.fset, info, call) + if err != nil { + return nil, []error{notePosition(exprPos, err)} + } + return b, nil + case "Value": + v, err := processValue(oc.fset, info, call) + if err != nil { + return nil, []error{notePosition(exprPos, err)} + } + return v, nil + case "InterfaceValue": + v, err := processInterfaceValue(oc.fset, info, call) + if err != nil { + return nil, []error{notePosition(exprPos, err)} + } + return v, nil + case "Struct": + s, err := processStructProvider(oc.fset, info, call) + if err != nil { + return nil, []error{notePosition(exprPos, err)} + } + return s, nil + case "FieldsOf": + v, err := processFieldsOf(oc.fset, info, call) + if err != nil { + return nil, []error{notePosition(exprPos, err)} + } + return v, nil + default: + return nil, []error{notePosition(exprPos, errors.New("unknown pattern"))} + } + } + if tn := structArgType(info, expr); tn != nil { + p, errs := processStructLiteralProvider(oc.fset, tn) + if len(errs) > 0 { + 
return nil, notePositionAll(exprPos, errs) + } + return p, nil + } + return nil, []error{notePosition(exprPos, errors.New("unknown pattern"))} +} + +func (oc *objectCache) processNewSet(info *types.Info, pkgPath string, call *ast.CallExpr, args *InjectorArgs, varName string) (*ProviderSet, []error) { + // Assumes that call.Fun is wire.NewSet or wire.Build. + + pset := &ProviderSet{ + Pos: call.Pos(), + InjectorArgs: args, + PkgPath: pkgPath, + VarName: varName, + } + ec := new(errorCollector) + for _, arg := range call.Args { + item, errs := oc.processExpr(info, pkgPath, arg, "") + if len(errs) > 0 { + ec.add(errs...) + continue + } + switch item := item.(type) { + case *Provider: + pset.Providers = append(pset.Providers, item) + case *ProviderSet: + pset.Imports = append(pset.Imports, item) + case *IfaceBinding: + pset.Bindings = append(pset.Bindings, item) + case *Value: + pset.Values = append(pset.Values, item) + case []*Field: + pset.Fields = append(pset.Fields, item...) + default: + panic("unknown item type") + } + } + if len(ec.errors) > 0 { + return nil, ec.errors + } + var errs []error + pset.providerMap, pset.srcMap, errs = buildProviderMap(oc.fset, oc.hasher, pset) + if len(errs) > 0 { + return nil, errs + } + if errs := verifyAcyclic(pset.providerMap, oc.hasher); len(errs) > 0 { + return nil, errs + } + return pset, nil +} + +// structArgType attempts to interpret an expression as a simple struct type. +// It assumes any parentheses have been stripped. +func structArgType(info *types.Info, expr ast.Expr) *types.TypeName { + lit, ok := expr.(*ast.CompositeLit) + if !ok { + return nil + } + tn, ok := qualifiedIdentObject(info, lit.Type).(*types.TypeName) + if !ok { + return nil + } + if _, isStruct := tn.Type().Underlying().(*types.Struct); !isStruct { + return nil + } + return tn +} + +// qualifiedIdentObject finds the object for an identifier or a +// qualified identifier, or nil if the object could not be found. +func qualifiedIdentObject(info *types.Info, expr ast.Expr) types.Object { + switch expr := expr.(type) { + case *ast.Ident: + return info.ObjectOf(expr) + case *ast.SelectorExpr: + pkgName, ok := expr.X.(*ast.Ident) + if !ok { + return nil + } + if _, ok := info.ObjectOf(pkgName).(*types.PkgName); !ok { + return nil + } + return info.ObjectOf(expr.Sel) + default: + return nil + } +} + +// processFuncProvider creates a provider for a function declaration. 
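+// The provider's signature must be one of func(...) T, func(...) (T, error),
+// func(...) (T, func()), or func(...) (T, func(), error); see funcOutput.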
+func processFuncProvider(fset *token.FileSet, fn *types.Func) (*Provider, []error) { + sig := fn.Type().(*types.Signature) + fpos := fn.Pos() + providerSig, err := funcOutput(sig) + if err != nil { + return nil, []error{notePosition(fset.Position(fpos), fmt.Errorf("wrong signature for provider %s: %v", fn.Name(), err))} + } + params := sig.Params() + provider := &Provider{ + Pkg: fn.Pkg(), + Name: fn.Name(), + Pos: fn.Pos(), + Args: make([]ProviderInput, params.Len()), + Varargs: sig.Variadic(), + Out: []types.Type{providerSig.out}, + HasCleanup: providerSig.cleanup, + HasErr: providerSig.err, + } + for i := 0; i < params.Len(); i++ { + provider.Args[i] = ProviderInput{ + Type: params.At(i).Type(), + } + for j := 0; j < i; j++ { + if types.Identical(provider.Args[i].Type, provider.Args[j].Type) { + return nil, []error{notePosition(fset.Position(fpos), fmt.Errorf("provider has multiple parameters of type %s", types.TypeString(provider.Args[j].Type, nil)))} + } + } + } + return provider, nil +} + +func injectorFuncSignature(sig *types.Signature) (*types.Tuple, outputSignature, error) { + out, err := funcOutput(sig) + if err != nil { + return nil, outputSignature{}, err + } + return sig.Params(), out, nil +} + +type outputSignature struct { + out types.Type + cleanup bool + err bool +} + +// funcOutput validates an injector or provider function's return signature. +func funcOutput(sig *types.Signature) (outputSignature, error) { + results := sig.Results() + switch results.Len() { + case 0: + return outputSignature{}, errors.New("no return values") + case 1: + return outputSignature{out: results.At(0).Type()}, nil + case 2: + out := results.At(0).Type() + switch t := results.At(1).Type(); { + case types.Identical(t, errorType): + return outputSignature{out: out, err: true}, nil + case types.Identical(t, cleanupType): + return outputSignature{out: out, cleanup: true}, nil + default: + return outputSignature{}, fmt.Errorf("second return type is %s; must be error or func()", types.TypeString(t, nil)) + } + case 3: + if t := results.At(1).Type(); !types.Identical(t, cleanupType) { + return outputSignature{}, fmt.Errorf("second return type is %s; must be func()", types.TypeString(t, nil)) + } + if t := results.At(2).Type(); !types.Identical(t, errorType) { + return outputSignature{}, fmt.Errorf("third return type is %s; must be error", types.TypeString(t, nil)) + } + return outputSignature{ + out: results.At(0).Type(), + cleanup: true, + err: true, + }, nil + default: + return outputSignature{}, errors.New("too many return values") + } +} + +// processStructLiteralProvider creates a provider for a named struct type. +// It produces pointer and non-pointer variants via two values in Out. +// +// This is a copy of the old processStructProvider, which is deprecated now. +// It will not support any new feature introduced after v0.2. Please use the new +// wire.Struct syntax for those. 
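+// The replacement looks like wire.Struct(new(Foo), "*") for all fields, or
+// wire.Struct(new(Foo), "FieldA", "FieldB") for a chosen subset.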
+func processStructLiteralProvider(fset *token.FileSet, typeName *types.TypeName) (*Provider, []error) { + out := typeName.Type() + st, ok := out.Underlying().(*types.Struct) + if !ok { + return nil, []error{fmt.Errorf("%v does not name a struct", typeName)} + } + + pos := typeName.Pos() + fmt.Fprintf(os.Stderr, + "Warning: %v, see https://godoc.org/github.com/google/wire#Struct for more information.\n", + notePosition(fset.Position(pos), + fmt.Errorf("using struct literal to inject %s is deprecated and will be removed in the next release; use wire.Struct instead", + typeName.Type()))) + provider := &Provider{ + Pkg: typeName.Pkg(), + Name: typeName.Name(), + Pos: pos, + Args: make([]ProviderInput, st.NumFields()), + IsStruct: true, + Out: []types.Type{out, types.NewPointer(out)}, + } + for i := 0; i < st.NumFields(); i++ { + f := st.Field(i) + provider.Args[i] = ProviderInput{ + Type: f.Type(), + FieldName: f.Name(), + } + for j := 0; j < i; j++ { + if types.Identical(provider.Args[i].Type, provider.Args[j].Type) { + return nil, []error{notePosition(fset.Position(pos), fmt.Errorf("provider struct has multiple fields of type %s", types.TypeString(provider.Args[j].Type, nil)))} + } + } + } + return provider, nil +} + +// processStructProvider creates a provider for a named struct type. +// It produces pointer and non-pointer variants via two values in Out. +func processStructProvider(fset *token.FileSet, info *types.Info, call *ast.CallExpr) (*Provider, error) { + // Assumes that call.Fun is wire.Struct. + + if len(call.Args) < 1 { + return nil, notePosition(fset.Position(call.Pos()), + errors.New("call to Struct must specify the struct to be injected")) + } + const firstArgReqFormat = "first argument to Struct must be a pointer to a named struct; found %s" + structType := info.TypeOf(call.Args[0]) + structPtr, ok := structType.(*types.Pointer) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf(firstArgReqFormat, types.TypeString(structType, nil))) + } + + st, ok := structPtr.Elem().Underlying().(*types.Struct) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf(firstArgReqFormat, types.TypeString(structPtr, nil))) + } + + stExpr := call.Args[0].(*ast.CallExpr) + typeName := qualifiedIdentObject(info, stExpr.Args[0]) // should be either an identifier or selector + provider := &Provider{ + Pkg: typeName.Pkg(), + Name: typeName.Name(), + Pos: typeName.Pos(), + IsStruct: true, + Out: []types.Type{structPtr.Elem(), structPtr}, + } + if allFields(call) { + for i := 0; i < st.NumFields(); i++ { + if isPrevented(st.Tag(i)) { + continue + } + f := st.Field(i) + provider.Args = append(provider.Args, ProviderInput{ + Type: f.Type(), + FieldName: f.Name(), + }) + } + } else { + provider.Args = make([]ProviderInput, len(call.Args)-1) + for i := 1; i < len(call.Args); i++ { + v, err := checkField(call.Args[i], st) + if err != nil { + return nil, notePosition(fset.Position(call.Pos()), err) + } + provider.Args[i-1] = ProviderInput{ + Type: v.Type(), + FieldName: v.Name(), + } + } + } + for i := 0; i < len(provider.Args); i++ { + for j := 0; j < i; j++ { + if types.Identical(provider.Args[i].Type, provider.Args[j].Type) { + f := st.Field(j) + return nil, notePosition(fset.Position(f.Pos()), fmt.Errorf("provider struct has multiple fields of type %s", types.TypeString(provider.Args[j].Type, nil))) + } + } + } + return provider, nil +} + +func allFields(call *ast.CallExpr) bool { + if len(call.Args) != 2 { + return false + } + b, ok := 
call.Args[1].(*ast.BasicLit) + if !ok { + return false + } + return strings.EqualFold(strconv.Quote("*"), b.Value) +} + +// isPrevented checks whether field i is prevented by tag "-". +// Since this is the only tag used by wire, we can do string comparison +// without using reflect. +func isPrevented(tag string) bool { + return reflect.StructTag(tag).Get("wire") == "-" +} + +// processBind creates an interface binding from a wire.Bind call. +func processBind(fset *token.FileSet, info *types.Info, call *ast.CallExpr) (*IfaceBinding, error) { + // Assumes that call.Fun is wire.Bind. + + if len(call.Args) != 2 { + return nil, notePosition(fset.Position(call.Pos()), + errors.New("call to Bind takes exactly two arguments")) + } + // TODO(light): Verify that arguments are simple expressions. + ifaceArgType := info.TypeOf(call.Args[0]) + ifacePtr, ok := ifaceArgType.(*types.Pointer) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf("first argument to Bind must be a pointer to an interface type; found %s", types.TypeString(ifaceArgType, nil))) + } + iface := ifacePtr.Elem() + methodSet, ok := iface.Underlying().(*types.Interface) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf("first argument to Bind must be a pointer to an interface type; found %s", types.TypeString(ifaceArgType, nil))) + } + + provided := info.TypeOf(call.Args[1]) + if bindShouldUsePointer(info, call) { + providedPtr, ok := provided.(*types.Pointer) + if !ok { + return nil, notePosition(fset.Position(call.Args[0].Pos()), + fmt.Errorf("second argument to Bind must be a pointer or a pointer to a pointer; found %s", types.TypeString(provided, nil))) + } + provided = providedPtr.Elem() + } + if types.Identical(iface, provided) { + return nil, notePosition(fset.Position(call.Pos()), + errors.New("cannot bind interface to itself")) + } + if !types.Implements(provided, methodSet) { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf("%s does not implement %s", types.TypeString(provided, nil), types.TypeString(iface, nil))) + } + return &IfaceBinding{ + Pos: call.Pos(), + Iface: iface, + Provided: provided, + }, nil +} + +// processValue creates a value from a wire.Value call. +func processValue(fset *token.FileSet, info *types.Info, call *ast.CallExpr) (*Value, error) { + // Assumes that call.Fun is wire.Value. + + if len(call.Args) != 1 { + return nil, notePosition(fset.Position(call.Pos()), errors.New("call to Value takes exactly one argument")) + } + ok := true + ast.Inspect(call.Args[0], func(node ast.Node) bool { + switch expr := node.(type) { + case nil, *ast.ArrayType, *ast.BasicLit, *ast.BinaryExpr, *ast.ChanType, *ast.CompositeLit, *ast.FuncType, *ast.Ident, *ast.IndexExpr, *ast.InterfaceType, *ast.KeyValueExpr, *ast.MapType, *ast.ParenExpr, *ast.SelectorExpr, *ast.SliceExpr, *ast.StarExpr, *ast.StructType, *ast.TypeAssertExpr: + // Good! + case *ast.UnaryExpr: + if expr.Op == token.ARROW { + ok = false + return false + } + case *ast.CallExpr: + // Only acceptable if it's a type conversion. + if _, isFunc := info.TypeOf(expr.Fun).(*types.Signature); isFunc { + ok = false + return false + } + default: + ok = false + return false + } + return true + }) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), errors.New("argument to Value is too complex")) + } + // Result type can't be an interface type; use wire.InterfaceValue for that. 
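+ // For example (illustrative): wire.Value([]string{"a"}) is accepted, while
+ // wire.Value(io.Reader(os.Stdin)) is rejected below and should be written as
+ // wire.InterfaceValue(new(io.Reader), os.Stdin) instead.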
+ argType := info.TypeOf(call.Args[0]) + if _, isInterfaceType := argType.Underlying().(*types.Interface); isInterfaceType { + return nil, notePosition(fset.Position(call.Pos()), fmt.Errorf("argument to Value may not be an interface value (found %s); use InterfaceValue instead", types.TypeString(argType, nil))) + } + return &Value{ + Pos: call.Args[0].Pos(), + Out: info.TypeOf(call.Args[0]), + expr: call.Args[0], + info: info, + }, nil +} + +// processInterfaceValue creates a value from a wire.InterfaceValue call. +func processInterfaceValue(fset *token.FileSet, info *types.Info, call *ast.CallExpr) (*Value, error) { + // Assumes that call.Fun is wire.InterfaceValue. + + if len(call.Args) != 2 { + return nil, notePosition(fset.Position(call.Pos()), errors.New("call to InterfaceValue takes exactly two arguments")) + } + ifaceArgType := info.TypeOf(call.Args[0]) + ifacePtr, ok := ifaceArgType.(*types.Pointer) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), fmt.Errorf("first argument to InterfaceValue must be a pointer to an interface type; found %s", types.TypeString(ifaceArgType, nil))) + } + iface := ifacePtr.Elem() + methodSet, ok := iface.Underlying().(*types.Interface) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), fmt.Errorf("first argument to InterfaceValue must be a pointer to an interface type; found %s", types.TypeString(ifaceArgType, nil))) + } + provided := info.TypeOf(call.Args[1]) + if !types.Implements(provided, methodSet) { + return nil, notePosition(fset.Position(call.Pos()), fmt.Errorf("%s does not implement %s", types.TypeString(provided, nil), types.TypeString(iface, nil))) + } + return &Value{ + Pos: call.Args[1].Pos(), + Out: iface, + expr: call.Args[1], + info: info, + }, nil +} + +// processFieldsOf creates a slice of fields from a wire.FieldsOf call. +func processFieldsOf(fset *token.FileSet, info *types.Info, call *ast.CallExpr) ([]*Field, error) { + // Assumes that call.Fun is wire.FieldsOf. 
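+ // For example (illustrative): wire.FieldsOf(new(S), "MyFoo") extracts S.MyFoo,
+ // and wire.FieldsOf(new(*S), "MyFoo") additionally provides a pointer to each
+ // extracted field.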
+ + if len(call.Args) < 2 { + return nil, notePosition(fset.Position(call.Pos()), + errors.New("call to FieldsOf must specify fields to be extracted")) + } + const firstArgReqFormat = "first argument to FieldsOf must be a pointer to a struct or a pointer to a pointer to a struct; found %s" + structType := info.TypeOf(call.Args[0]) + structPtr, ok := structType.(*types.Pointer) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf(firstArgReqFormat, types.TypeString(structType, nil))) + } + + var struc *types.Struct + isPtrToStruct := false + switch t := structPtr.Elem().Underlying().(type) { + case *types.Pointer: + struc, ok = t.Elem().Underlying().(*types.Struct) + if !ok { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf(firstArgReqFormat, types.TypeString(struc, nil))) + } + isPtrToStruct = true + case *types.Struct: + struc = t + default: + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf(firstArgReqFormat, types.TypeString(t, nil))) + } + if struc.NumFields() < len(call.Args)-1 { + return nil, notePosition(fset.Position(call.Pos()), + fmt.Errorf("fields number exceeds the number available in the struct which has %d fields", struc.NumFields())) + } + + fields := make([]*Field, 0, len(call.Args)-1) + for i := 1; i < len(call.Args); i++ { + v, err := checkField(call.Args[i], struc) + if err != nil { + return nil, notePosition(fset.Position(call.Pos()), err) + } + out := []types.Type{v.Type()} + if isPtrToStruct { + // If the field is from a pointer to a struct, then + // wire.Fields also provides a pointer to the field. + out = append(out, types.NewPointer(v.Type())) + } + fields = append(fields, &Field{ + Parent: structPtr.Elem(), + Name: v.Name(), + Pkg: v.Pkg(), + Pos: v.Pos(), + Out: out, + }) + } + return fields, nil +} + +// checkField reports whether f is a field of st. f should be a string with the +// field name. +func checkField(f ast.Expr, st *types.Struct) (*types.Var, error) { + b, ok := f.(*ast.BasicLit) + if !ok { + return nil, fmt.Errorf("%v must be a string with the field name", f) + } + for i := 0; i < st.NumFields(); i++ { + if strings.EqualFold(strconv.Quote(st.Field(i).Name()), b.Value) { + if isPrevented(st.Tag(i)) { + return nil, fmt.Errorf("%s is prevented from injecting by wire", b.Value) + } + return st.Field(i), nil + } + } + return nil, fmt.Errorf("%s is not a field of %s", b.Value, st.String()) +} + +// findInjectorBuild returns the wire.Build call if fn is an injector template. +// It returns nil if the function is not an injector template. +func findInjectorBuild(info *types.Info, fn *ast.FuncDecl) (*ast.CallExpr, error) { + if fn.Body == nil { + return nil, nil + } + numStatements := 0 + invalid := false + var wireBuildCall *ast.CallExpr + for _, stmt := range fn.Body.List { + switch stmt := stmt.(type) { + case *ast.ExprStmt: + numStatements++ + if numStatements > 1 { + invalid = true + } + call, ok := stmt.X.(*ast.CallExpr) + if !ok { + continue + } + if qualifiedIdentObject(info, call.Fun) == types.Universe.Lookup("panic") { + if len(call.Args) != 1 { + continue + } + call, ok = call.Args[0].(*ast.CallExpr) + if !ok { + continue + } + } + buildObj := qualifiedIdentObject(info, call.Fun) + if buildObj == nil || buildObj.Pkg() == nil || !isWireImport(buildObj.Pkg().Path()) || buildObj.Name() != "Build" { + continue + } + wireBuildCall = call + case *ast.EmptyStmt: + // Do nothing. + case *ast.ReturnStmt: + // Allow the function to end in a return. 
+ if numStatements == 0 { + return nil, nil + } + default: + invalid = true + } + + } + if wireBuildCall == nil { + return nil, nil + } + if invalid { + return nil, errors.New("a call to wire.Build indicates that this function is an injector, but injectors must consist of only the wire.Build call and an optional return") + } + return wireBuildCall, nil +} + +func isWireImport(path string) bool { + // TODO(light): This is depending on details of the current loader. + const vendorPart = "vendor/" + if i := strings.LastIndex(path, vendorPart); i != -1 && (i == 0 || path[i-1] == '/') { + path = path[i+len(vendorPart):] + } + return path == "github.com/google/wire" +} + +func isProviderSetType(t types.Type) bool { + n, ok := t.(*types.Named) + if !ok { + return false + } + obj := n.Obj() + return obj.Pkg() != nil && isWireImport(obj.Pkg().Path()) && obj.Name() == "ProviderSet" +} + +// ProvidedType represents a type provided from a source. The source +// can be a *Provider (a provider function), a *Value (wire.Value), or an +// *InjectorArgs (arguments to the injector function). The zero value has +// none of the above, and returns true for IsNil. +type ProvidedType struct { + // t is the provided concrete type. + t types.Type + p *Provider + v *Value + a *InjectorArg + f *Field +} + +// IsNil reports whether pt is the zero value. +func (pt ProvidedType) IsNil() bool { + return pt.p == nil && pt.v == nil && pt.a == nil && pt.f == nil +} + +// Type returns the output type. +// +// - For a function provider, this is the first return value type. +// - For a struct provider, this is either the struct type or the pointer type +// whose element type is the struct type. +// - For a value, this is the type of the expression. +// - For an argument, this is the type of the argument. +func (pt ProvidedType) Type() types.Type { + return pt.t +} + +// IsProvider reports whether pt points to a Provider. +func (pt ProvidedType) IsProvider() bool { + return pt.p != nil +} + +// IsValue reports whether pt points to a Value. +func (pt ProvidedType) IsValue() bool { + return pt.v != nil +} + +// IsArg reports whether pt points to an injector argument. +func (pt ProvidedType) IsArg() bool { + return pt.a != nil +} + +// IsField reports whether pt points to a Fields. +func (pt ProvidedType) IsField() bool { + return pt.f != nil +} + +// Provider returns pt as a Provider pointer. It panics if pt does not point +// to a Provider. +func (pt ProvidedType) Provider() *Provider { + if pt.p == nil { + panic("ProvidedType does not hold a Provider") + } + return pt.p +} + +// Value returns pt as a Value pointer. It panics if pt does not point +// to a Value. +func (pt ProvidedType) Value() *Value { + if pt.v == nil { + panic("ProvidedType does not hold a Value") + } + return pt.v +} + +// Arg returns pt as an *InjectorArg representing an injector argument. It +// panics if pt does not point to an arg. +func (pt ProvidedType) Arg() *InjectorArg { + if pt.a == nil { + panic("ProvidedType does not hold an Arg") + } + return pt.a +} + +// Field returns pt as a Field pointer. It panics if pt does not point to a +// struct Field. +func (pt ProvidedType) Field() *Field { + if pt.f == nil { + panic("ProvidedType does not hold a Field") + } + return pt.f +} + +// bindShouldUsePointer loads the wire package the user is importing from their +// injector. The call is a wire marker function call. +func bindShouldUsePointer(info *types.Info, call *ast.CallExpr) bool { + // These type assertions should not fail, otherwise panic. 
+ fun := call.Fun.(*ast.SelectorExpr) // wire.Bind + pkgName := fun.X.(*ast.Ident) // wire + wireName := info.ObjectOf(pkgName).(*types.PkgName) // wire package + return wireName.Imported().Scope().Lookup("bindToUsePointer") != nil +} diff --git a/vendor/github.com/google/wire/internal/wire/wire.go b/vendor/github.com/google/wire/internal/wire/wire.go new file mode 100644 index 0000000000..b55ff29fe5 --- /dev/null +++ b/vendor/github.com/google/wire/internal/wire/wire.go @@ -0,0 +1,961 @@ +// Copyright 2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package wire provides compile-time dependency injection logic as a +// Go library. +package wire + +import ( + "bytes" + "context" + "errors" + "fmt" + "go/ast" + "go/format" + "go/printer" + "go/token" + "go/types" + "io/ioutil" + "path/filepath" + "sort" + "strconv" + "strings" + "unicode" + "unicode/utf8" + + "golang.org/x/tools/go/ast/astutil" + "golang.org/x/tools/go/packages" +) + +// GenerateResult stores the result for a package from a call to Generate. +type GenerateResult struct { + // PkgPath is the package's PkgPath. + PkgPath string + // OutputPath is the path where the generated output should be written. + // May be empty if there were errors. + OutputPath string + // Content is the gofmt'd source code that was generated. May be nil if + // there were errors during generation. + Content []byte + // Errs is a slice of errors identified during generation. + Errs []error +} + +// Commit writes the generated file to disk. +func (gen GenerateResult) Commit() error { + if len(gen.Content) == 0 { + return nil + } + return ioutil.WriteFile(gen.OutputPath, gen.Content, 0666) +} + +// GenerateOptions holds options for Generate. +type GenerateOptions struct { + // Header will be inserted at the start of each generated file. + Header []byte + PrefixOutputFile string +} + +// Generate performs dependency injection for the packages that match the given +// patterns, return a GenerateResult for each package. The package pattern is +// defined by the underlying build system. For the go tool, this is described at +// https://golang.org/cmd/go/#hdr-Package_lists_and_patterns +// +// wd is the working directory and env is the set of environment +// variables to use when loading the package specified by pkgPattern. If +// env is nil or empty, it is interpreted as an empty set of variables. +// In case of duplicate environment variables, the last one in the list +// takes precedence. +// +// Generate may return one or more errors if it failed to load the packages. 
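+//
+// A minimal sketch of a call (the working directory, environment, and patterns
+// here are placeholders chosen by the caller):
+//
+//	results, errs := Generate(ctx, wd, os.Environ(), []string{"./..."}, nil)
+//	if len(errs) == 0 {
+//		for _, res := range results {
+//			_ = res.Commit()
+//		}
+//	}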
+func Generate(ctx context.Context, wd string, env []string, patterns []string, opts *GenerateOptions) ([]GenerateResult, []error) { + if opts == nil { + opts = &GenerateOptions{} + } + pkgs, errs := load(ctx, wd, env, patterns) + if len(errs) > 0 { + return nil, errs + } + generated := make([]GenerateResult, len(pkgs)) + for i, pkg := range pkgs { + generated[i].PkgPath = pkg.PkgPath + outDir, err := detectOutputDir(pkg.GoFiles) + if err != nil { + generated[i].Errs = append(generated[i].Errs, err) + continue + } + generated[i].OutputPath = filepath.Join(outDir, opts.PrefixOutputFile+"wire_gen.go") + g := newGen(pkg) + injectorFiles, errs := generateInjectors(g, pkg) + if len(errs) > 0 { + generated[i].Errs = errs + continue + } + copyNonInjectorDecls(g, injectorFiles, pkg.TypesInfo) + goSrc := g.frame() + if len(opts.Header) > 0 { + goSrc = append(opts.Header, goSrc...) + } + fmtSrc, err := format.Source(goSrc) + if err != nil { + // This is likely a bug from a poorly generated source file. + // Add an error but also the unformatted source. + generated[i].Errs = append(generated[i].Errs, err) + } else { + goSrc = fmtSrc + } + generated[i].Content = goSrc + } + return generated, nil +} + +func detectOutputDir(paths []string) (string, error) { + if len(paths) == 0 { + return "", errors.New("no files to derive output directory from") + } + dir := filepath.Dir(paths[0]) + for _, p := range paths[1:] { + if dir2 := filepath.Dir(p); dir2 != dir { + return "", fmt.Errorf("found conflicting directories %q and %q", dir, dir2) + } + } + return dir, nil +} + +// generateInjectors generates the injectors for a given package. +func generateInjectors(g *gen, pkg *packages.Package) (injectorFiles []*ast.File, _ []error) { + oc := newObjectCache([]*packages.Package{pkg}) + injectorFiles = make([]*ast.File, 0, len(pkg.Syntax)) + ec := new(errorCollector) + for _, f := range pkg.Syntax { + for _, decl := range f.Decls { + fn, ok := decl.(*ast.FuncDecl) + if !ok { + continue + } + buildCall, err := findInjectorBuild(pkg.TypesInfo, fn) + if err != nil { + ec.add(err) + continue + } + if buildCall == nil { + continue + } + if len(injectorFiles) == 0 || injectorFiles[len(injectorFiles)-1] != f { + // This is the first injector generated for this file. + // Write a file header. + name := filepath.Base(g.pkg.Fset.File(f.Pos()).Name()) + g.p("// Injectors from %s:\n\n", name) + injectorFiles = append(injectorFiles, f) + } + sig := pkg.TypesInfo.ObjectOf(fn.Name).Type().(*types.Signature) + ins, _, err := injectorFuncSignature(sig) + if err != nil { + if w, ok := err.(*wireErr); ok { + ec.add(notePosition(w.position, fmt.Errorf("inject %s: %v", fn.Name.Name, w.error))) + } else { + ec.add(notePosition(g.pkg.Fset.Position(fn.Pos()), fmt.Errorf("inject %s: %v", fn.Name.Name, err))) + } + continue + } + injectorArgs := &InjectorArgs{ + Name: fn.Name.Name, + Tuple: ins, + Pos: fn.Pos(), + } + set, errs := oc.processNewSet(pkg.TypesInfo, pkg.PkgPath, buildCall, injectorArgs, "") + if len(errs) > 0 { + ec.add(notePositionAll(g.pkg.Fset.Position(fn.Pos()), errs)...) + continue + } + if errs := g.inject(fn.Pos(), fn.Name.Name, sig, set); len(errs) > 0 { + ec.add(errs...) 
+ continue + } + } + + for _, impt := range f.Imports { + if impt.Name != nil && impt.Name.Name == "_" { + g.anonImports[impt.Path.Value] = true + } + } + } + if len(ec.errors) > 0 { + return nil, ec.errors + } + return injectorFiles, nil +} + +// copyNonInjectorDecls copies any non-injector declarations from the +// given files into the generated output. +func copyNonInjectorDecls(g *gen, files []*ast.File, info *types.Info) { + for _, f := range files { + name := filepath.Base(g.pkg.Fset.File(f.Pos()).Name()) + first := true + for _, decl := range f.Decls { + switch decl := decl.(type) { + case *ast.FuncDecl: + // OK to ignore error, as any error cases should already have + // been filtered out. + if buildCall, _ := findInjectorBuild(info, decl); buildCall != nil { + continue + } + case *ast.GenDecl: + if decl.Tok == token.IMPORT { + continue + } + default: + continue + } + if first { + g.p("// %s:\n\n", name) + first = false + } + // TODO(light): Add line number at top of each declaration. + g.writeAST(info, decl) + g.p("\n\n") + } + } +} + +// importInfo holds info about an import. +type importInfo struct { + // name is the identifier that is used in the generated source. + name string + // differs is true if the import is given an identifier that does not + // match the package's identifier. + differs bool +} + +// gen is the file-wide generator state. +type gen struct { + pkg *packages.Package + buf bytes.Buffer + imports map[string]importInfo + anonImports map[string]bool + values map[ast.Expr]string +} + +func newGen(pkg *packages.Package) *gen { + return &gen{ + pkg: pkg, + anonImports: make(map[string]bool), + imports: make(map[string]importInfo), + values: make(map[ast.Expr]string), + } +} + +// frame bakes the built up source body into an unformatted Go source file. +func (g *gen) frame() []byte { + if g.buf.Len() == 0 { + return nil + } + var buf bytes.Buffer + buf.WriteString("// Code generated by Wire. DO NOT EDIT.\n\n") + buf.WriteString("//go:generate wire\n") + buf.WriteString("//+build !wireinject\n\n") + buf.WriteString("package ") + buf.WriteString(g.pkg.Name) + buf.WriteString("\n\n") + if len(g.imports) > 0 { + buf.WriteString("import (\n") + imps := make([]string, 0, len(g.imports)) + for path := range g.imports { + imps = append(imps, path) + } + sort.Strings(imps) + for _, path := range imps { + // Omit the local package identifier if it matches the package name. + info := g.imports[path] + if info.differs { + fmt.Fprintf(&buf, "\t%s %q\n", info.name, path) + } else { + fmt.Fprintf(&buf, "\t%q\n", path) + } + } + buf.WriteString(")\n\n") + } + if len(g.anonImports) > 0 { + buf.WriteString("import (\n") + anonImps := make([]string, 0, len(g.anonImports)) + for path := range g.anonImports { + anonImps = append(anonImps, path) + } + sort.Strings(anonImps) + + for _, path := range anonImps { + fmt.Fprintf(&buf, "\t_ %s\n", path) + } + buf.WriteString(")\n\n") + } + buf.Write(g.buf.Bytes()) + return buf.Bytes() +} + +// inject emits the code for an injector. 
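+//
+// As a rough sketch (assuming a provider func NewBar(foo Foo) *Bar and an
+// injector template func initBar(foo Foo) *Bar), the emitted code looks like:
+//
+//	func initBar(foo Foo) *Bar {
+//		bar := NewBar(foo)
+//		return bar
+//	}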
+func (g *gen) inject(pos token.Pos, name string, sig *types.Signature, set *ProviderSet) []error { + injectSig, err := funcOutput(sig) + if err != nil { + return []error{notePosition(g.pkg.Fset.Position(pos), + fmt.Errorf("inject %s: %v", name, err))} + } + params := sig.Params() + calls, errs := solve(g.pkg.Fset, injectSig.out, params, set) + if len(errs) > 0 { + return mapErrors(errs, func(e error) error { + if w, ok := e.(*wireErr); ok { + return notePosition(w.position, fmt.Errorf("inject %s: %v", name, w.error)) + } + return notePosition(g.pkg.Fset.Position(pos), fmt.Errorf("inject %s: %v", name, e)) + }) + } + type pendingVar struct { + name string + expr ast.Expr + typeInfo *types.Info + } + var pendingVars []pendingVar + ec := new(errorCollector) + for i := range calls { + c := &calls[i] + if c.hasCleanup && !injectSig.cleanup { + ts := types.TypeString(c.out, nil) + ec.add(notePosition( + g.pkg.Fset.Position(pos), + fmt.Errorf("inject %s: provider for %s returns cleanup but injection does not return cleanup function", name, ts))) + } + if c.hasErr && !injectSig.err { + ts := types.TypeString(c.out, nil) + ec.add(notePosition( + g.pkg.Fset.Position(pos), + fmt.Errorf("inject %s: provider for %s returns error but injection not allowed to fail", name, ts))) + } + if c.kind == valueExpr { + if err := accessibleFrom(c.valueTypeInfo, c.valueExpr, g.pkg.PkgPath); err != nil { + // TODO(light): Display line number of value expression. + ts := types.TypeString(c.out, nil) + ec.add(notePosition( + g.pkg.Fset.Position(pos), + fmt.Errorf("inject %s: value %s can't be used: %v", name, ts, err))) + } + if g.values[c.valueExpr] == "" { + t := c.valueTypeInfo.TypeOf(c.valueExpr) + + name := typeVariableName(t, "", func(name string) string { return "_wire" + export(name) + "Value" }, g.nameInFileScope) + g.values[c.valueExpr] = name + pendingVars = append(pendingVars, pendingVar{ + name: name, + expr: c.valueExpr, + typeInfo: c.valueTypeInfo, + }) + } + } + } + if len(ec.errors) > 0 { + return ec.errors + } + + // Perform one pass to collect all imports, followed by the real pass. + injectPass(name, sig, calls, set, &injectorGen{ + g: g, + errVar: disambiguate("err", g.nameInFileScope), + discard: true, + }) + injectPass(name, sig, calls, set, &injectorGen{ + g: g, + errVar: disambiguate("err", g.nameInFileScope), + discard: false, + }) + if len(pendingVars) > 0 { + g.p("var (\n") + for _, pv := range pendingVars { + g.p("\t%s = ", pv.name) + g.writeAST(pv.typeInfo, pv.expr) + g.p("\n") + } + g.p(")\n\n") + } + return nil +} + +// rewritePkgRefs rewrites any package references in an AST into references for the +// generated package. +func (g *gen) rewritePkgRefs(info *types.Info, node ast.Node) ast.Node { + start, end := node.Pos(), node.End() + node = copyAST(node) + // First, rewrite all package names. This lets us know all the + // potentially colliding identifiers. + node = astutil.Apply(node, func(c *astutil.Cursor) bool { + switch node := c.Node().(type) { + case *ast.Ident: + // This is an unqualified identifier (qualified identifiers are peeled off below). + obj := info.ObjectOf(node) + if obj == nil { + return false + } + if pkg := obj.Pkg(); pkg != nil && obj.Parent() == pkg.Scope() && pkg.Path() != g.pkg.PkgPath { + // An identifier from either a dot import or read from a different package. 
+ newPkgID := g.qualifyImport(pkg.Name(), pkg.Path()) + c.Replace(&ast.SelectorExpr{ + X: ast.NewIdent(newPkgID), + Sel: ast.NewIdent(node.Name), + }) + return false + } + return true + case *ast.SelectorExpr: + pkgIdent, ok := node.X.(*ast.Ident) + if !ok { + return true + } + pkgName, ok := info.ObjectOf(pkgIdent).(*types.PkgName) + if !ok { + return true + } + // This is a qualified identifier. Rewrite and avoid visiting subexpressions. + imported := pkgName.Imported() + newPkgID := g.qualifyImport(imported.Name(), imported.Path()) + c.Replace(&ast.SelectorExpr{ + X: ast.NewIdent(newPkgID), + Sel: ast.NewIdent(node.Sel.Name), + }) + return false + default: + return true + } + }, nil) + // Now that we have all the identifiers, rename any variables declared + // in this scope to not collide. + newNames := make(map[types.Object]string) + inNewNames := func(n string) bool { + for _, other := range newNames { + if other == n { + return true + } + } + return false + } + var scopeStack []*types.Scope + pkgScope := g.pkg.Types.Scope() + node = astutil.Apply(node, func(c *astutil.Cursor) bool { + if scope := info.Scopes[c.Node()]; scope != nil { + scopeStack = append(scopeStack, scope) + } + id, ok := c.Node().(*ast.Ident) + if !ok { + return true + } + obj := info.ObjectOf(id) + if obj == nil { + // We rewrote this identifier earlier, so it does not need + // further rewriting. + return true + } + if n, ok := newNames[obj]; ok { + // We picked a new name for this symbol. Rewrite it. + c.Replace(ast.NewIdent(n)) + return false + } + if par := obj.Parent(); par == nil || par == pkgScope { + // Don't rename methods, field names, or top-level identifiers. + return true + } + + // Rename any symbols defined within rewritePkgRefs's node that conflict + // with any symbols in the generated file. + objName := obj.Name() + if pos := obj.Pos(); pos < start || end <= pos || !(g.nameInFileScope(objName) || inNewNames(objName)) { + return true + } + newName := disambiguate(objName, func(n string) bool { + if g.nameInFileScope(n) || inNewNames(n) { + return true + } + if len(scopeStack) > 0 { + // Avoid picking a name that conflicts with other names in the + // current scope. + _, obj := scopeStack[len(scopeStack)-1].LookupParent(n, token.NoPos) + if obj != nil { + return true + } + } + return false + }) + newNames[obj] = newName + c.Replace(ast.NewIdent(newName)) + return false + }, func(c *astutil.Cursor) bool { + if info.Scopes[c.Node()] != nil { + // Should be top of stack; pop it. + scopeStack = scopeStack[:len(scopeStack)-1] + } + return true + }) + return node +} + +// writeAST prints an AST node into the generated output, rewriting any +// package references it encounters. +func (g *gen) writeAST(info *types.Info, node ast.Node) { + node = g.rewritePkgRefs(info, node) + if err := printer.Fprint(&g.buf, g.pkg.Fset, node); err != nil { + panic(err) + } +} + +func (g *gen) qualifiedID(pkgName, pkgPath, sym string) string { + name := g.qualifyImport(pkgName, pkgPath) + if name == "" { + return sym + } + return name + "." + sym +} + +func (g *gen) qualifyImport(name, path string) string { + if path == g.pkg.PkgPath { + return "" + } + // TODO(light): This is depending on details of the current loader. 
+ const vendorPart = "vendor/" + unvendored := path + if i := strings.LastIndex(path, vendorPart); i != -1 && (i == 0 || path[i-1] == '/') { + unvendored = path[i+len(vendorPart):] + } + if info, ok := g.imports[unvendored]; ok { + return info.name + } + // TODO(light): Use parts of import path to disambiguate. + newName := disambiguate(name, func(n string) bool { + // Don't let an import take the "err" name. That's annoying. + return n == "err" || g.nameInFileScope(n) + }) + g.imports[unvendored] = importInfo{ + name: newName, + differs: newName != name, + } + return newName +} + +func (g *gen) nameInFileScope(name string) bool { + for _, other := range g.imports { + if other.name == name { + return true + } + } + for _, other := range g.values { + if other == name { + return true + } + } + _, obj := g.pkg.Types.Scope().LookupParent(name, token.NoPos) + return obj != nil +} + +func (g *gen) qualifyPkg(pkg *types.Package) string { + return g.qualifyImport(pkg.Name(), pkg.Path()) +} + +func (g *gen) p(format string, args ...interface{}) { + fmt.Fprintf(&g.buf, format, args...) +} + +// injectorGen is the per-injector pass generator state. +type injectorGen struct { + g *gen + + paramNames []string + localNames []string + cleanupNames []string + errVar string + + // discard causes ig.p and ig.writeAST to no-op. Useful to run + // generation for side-effects like filling in g.imports. + discard bool +} + +// injectPass generates an injector given the output from analysis. +// The sig passed in should be verified. +func injectPass(name string, sig *types.Signature, calls []call, set *ProviderSet, ig *injectorGen) { + params := sig.Params() + injectSig, err := funcOutput(sig) + if err != nil { + // This should be checked by the caller already. + panic(err) + } + ig.p("func %s(", name) + for i := 0; i < params.Len(); i++ { + if i > 0 { + ig.p(", ") + } + pi := params.At(i) + a := pi.Name() + if a == "" || a == "_" { + a = typeVariableName(pi.Type(), "arg", unexport, ig.nameInInjector) + } else { + a = disambiguate(a, ig.nameInInjector) + } + ig.paramNames = append(ig.paramNames, a) + if sig.Variadic() && i == params.Len()-1 { + // Keep the varargs signature instead of a slice for the last argument if the + // injector is variadic. 
+ ig.p("%s ...%s", ig.paramNames[i], types.TypeString(pi.Type().(*types.Slice).Elem(), ig.g.qualifyPkg)) + } else { + ig.p("%s %s", ig.paramNames[i], types.TypeString(pi.Type(), ig.g.qualifyPkg)) + } + } + outTypeString := types.TypeString(injectSig.out, ig.g.qualifyPkg) + switch { + case injectSig.cleanup && injectSig.err: + ig.p(") (%s, func(), error) {\n", outTypeString) + case injectSig.cleanup: + ig.p(") (%s, func()) {\n", outTypeString) + case injectSig.err: + ig.p(") (%s, error) {\n", outTypeString) + default: + ig.p(") %s {\n", outTypeString) + } + for i := range calls { + c := &calls[i] + lname := typeVariableName(c.out, "v", unexport, ig.nameInInjector) + ig.localNames = append(ig.localNames, lname) + switch c.kind { + case structProvider: + ig.structProviderCall(lname, c) + case funcProviderCall: + ig.funcProviderCall(lname, c, injectSig) + case valueExpr: + ig.valueExpr(lname, c) + case selectorExpr: + ig.fieldExpr(lname, c) + default: + panic("unknown kind") + } + } + if len(calls) == 0 { + ig.p("\treturn %s", ig.paramNames[set.For(injectSig.out).Arg().Index]) + } else { + ig.p("\treturn %s", ig.localNames[len(calls)-1]) + } + if injectSig.cleanup { + ig.p(", func() {\n") + for i := len(ig.cleanupNames) - 1; i >= 0; i-- { + ig.p("\t\t%s()\n", ig.cleanupNames[i]) + } + ig.p("\t}") + } + if injectSig.err { + ig.p(", nil") + } + ig.p("\n}\n\n") +} + +func (ig *injectorGen) funcProviderCall(lname string, c *call, injectSig outputSignature) { + ig.p("\t%s", lname) + prevCleanup := len(ig.cleanupNames) + if c.hasCleanup { + cname := disambiguate("cleanup", ig.nameInInjector) + ig.cleanupNames = append(ig.cleanupNames, cname) + ig.p(", %s", cname) + } + if c.hasErr { + ig.p(", %s", ig.errVar) + } + ig.p(" := ") + ig.p("%s(", ig.g.qualifiedID(c.pkg.Name(), c.pkg.Path(), c.name)) + for i, a := range c.args { + if i > 0 { + ig.p(", ") + } + if a < len(ig.paramNames) { + ig.p("%s", ig.paramNames[a]) + } else { + ig.p("%s", ig.localNames[a-len(ig.paramNames)]) + } + } + if c.varargs { + ig.p("...") + } + ig.p(")\n") + if c.hasErr { + ig.p("\tif %s != nil {\n", ig.errVar) + for i := prevCleanup - 1; i >= 0; i-- { + ig.p("\t\t%s()\n", ig.cleanupNames[i]) + } + ig.p("\t\treturn %s", zeroValue(injectSig.out, ig.g.qualifyPkg)) + if injectSig.cleanup { + ig.p(", nil") + } + // TODO(light): Give information about failing provider. + ig.p(", err\n") + ig.p("\t}\n") + } +} + +func (ig *injectorGen) structProviderCall(lname string, c *call) { + ig.p("\t%s", lname) + ig.p(" := ") + if _, ok := c.out.(*types.Pointer); ok { + ig.p("&") + } + ig.p("%s{\n", ig.g.qualifiedID(c.pkg.Name(), c.pkg.Path(), c.name)) + for i, a := range c.args { + ig.p("\t\t%s: ", c.fieldNames[i]) + if a < len(ig.paramNames) { + ig.p("%s", ig.paramNames[a]) + } else { + ig.p("%s", ig.localNames[a-len(ig.paramNames)]) + } + ig.p(",\n") + } + ig.p("\t}\n") +} + +func (ig *injectorGen) valueExpr(lname string, c *call) { + ig.p("\t%s := %s\n", lname, ig.g.values[c.valueExpr]) +} + +func (ig *injectorGen) fieldExpr(lname string, c *call) { + a := c.args[0] + ig.p("\t%s := ", lname) + if c.ptrToField { + ig.p("&") + } + if a < len(ig.paramNames) { + ig.p("%s.%s\n", ig.paramNames[a], c.name) + } else { + ig.p("%s.%s\n", ig.localNames[a-len(ig.paramNames)], c.name) + } +} + +// nameInInjector reports whether name collides with any other identifier +// in the current injector. 
+func (ig *injectorGen) nameInInjector(name string) bool { + if name == ig.errVar { + return true + } + for _, a := range ig.paramNames { + if a == name { + return true + } + } + for _, l := range ig.localNames { + if l == name { + return true + } + } + for _, l := range ig.cleanupNames { + if l == name { + return true + } + } + return ig.g.nameInFileScope(name) +} + +func (ig *injectorGen) p(format string, args ...interface{}) { + if ig.discard { + return + } + ig.g.p(format, args...) +} + +func (ig *injectorGen) writeAST(info *types.Info, node ast.Node) { + node = ig.g.rewritePkgRefs(info, node) + if ig.discard { + return + } + if err := printer.Fprint(&ig.g.buf, ig.g.pkg.Fset, node); err != nil { + panic(err) + } +} + +// zeroValue returns the shortest expression that evaluates to the zero +// value for the given type. +func zeroValue(t types.Type, qf types.Qualifier) string { + switch u := t.Underlying().(type) { + case *types.Array, *types.Struct: + return types.TypeString(t, qf) + "{}" + case *types.Basic: + info := u.Info() + switch { + case info&types.IsBoolean != 0: + return "false" + case info&(types.IsInteger|types.IsFloat|types.IsComplex) != 0: + return "0" + case info&types.IsString != 0: + return `""` + default: + panic("unreachable") + } + case *types.Chan, *types.Interface, *types.Map, *types.Pointer, *types.Signature, *types.Slice: + return "nil" + default: + panic("unreachable") + } +} + +// typeVariableName invents a disambiguated variable name derived from the type name. +// If no name can be derived from the type, defaultName is used. +// transform is used to transform the derived name(s) (including defaultName); +// commonly used functions include export and unexport. +// collides is used to see if a name is ambiguous. If any one of the derived +// names is unambiguous, it used; otherwise, the first derived name is +// disambiguated using disambiguate(). +func typeVariableName(t types.Type, defaultName string, transform func(string) string, collides func(string) bool) string { + if p, ok := t.(*types.Pointer); ok { + t = p.Elem() + } + var names []string + switch t := t.(type) { + case *types.Basic: + if t.Name() != "" { + names = append(names, t.Name()) + } + case *types.Named: + obj := t.Obj() + if name := obj.Name(); name != "" { + names = append(names, name) + } + // Provide an alternate name prefixed with the package name if possible. + // E.g., in case of collisions, we'll use "fooCfg" instead of "cfg2". + if pkg := obj.Pkg(); pkg != nil && pkg.Name() != "" { + names = append(names, fmt.Sprintf("%s%s", pkg.Name(), strings.Title(obj.Name()))) + } + } + + // If we were unable to derive a name, use defaultName. + if len(names) == 0 { + names = append(names, defaultName) + } + + // Transform the name(s). + for i, name := range names { + names[i] = transform(name) + } + + // See if there's an unambiguous name; if so, use it. + for _, name := range names { + if !token.Lookup(name).IsKeyword() && !collides(name) { + return name + } + } + // Otherwise, disambiguate the first name. + return disambiguate(names[0], collides) +} + +// unexport converts a name that is potentially exported to an unexported name. 
+func unexport(name string) string { + if name == "" { + return "" + } + r, sz := utf8.DecodeRuneInString(name) + if !unicode.IsUpper(r) { + // foo -> foo + return name + } + r2, sz2 := utf8.DecodeRuneInString(name[sz:]) + if !unicode.IsUpper(r2) { + // Foo -> foo + return string(unicode.ToLower(r)) + name[sz:] + } + // UPPERWord -> upperWord + sbuf := new(strings.Builder) + sbuf.WriteRune(unicode.ToLower(r)) + i := sz + r, sz = r2, sz2 + for unicode.IsUpper(r) && sz > 0 { + r2, sz2 := utf8.DecodeRuneInString(name[i+sz:]) + if sz2 > 0 && unicode.IsLower(r2) { + break + } + i += sz + sbuf.WriteRune(unicode.ToLower(r)) + r, sz = r2, sz2 + } + sbuf.WriteString(name[i:]) + return sbuf.String() +} + +// export converts a name that is potentially unexported to an exported name. +func export(name string) string { + if name == "" { + return "" + } + r, sz := utf8.DecodeRuneInString(name) + if unicode.IsUpper(r) { + // Foo -> Foo + return name + } + // fooBar -> FooBar + sbuf := new(strings.Builder) + sbuf.WriteRune(unicode.ToUpper(r)) + sbuf.WriteString(name[sz:]) + return sbuf.String() +} + +// disambiguate picks a unique name, preferring name if it is already unique. +// It also disambiguates against Go's reserved keywords. +func disambiguate(name string, collides func(string) bool) string { + if !token.Lookup(name).IsKeyword() && !collides(name) { + return name + } + buf := []byte(name) + if len(buf) > 0 && buf[len(buf)-1] >= '0' && buf[len(buf)-1] <= '9' { + buf = append(buf, '_') + } + base := len(buf) + for n := 2; ; n++ { + buf = strconv.AppendInt(buf[:base], int64(n), 10) + sbuf := string(buf) + if !token.Lookup(sbuf).IsKeyword() && !collides(sbuf) { + return sbuf + } + } +} + +// accessibleFrom reports whether node can be copied to wantPkg without +// violating Go visibility rules. +func accessibleFrom(info *types.Info, node ast.Node, wantPkg string) error { + var unexportError error + ast.Inspect(node, func(node ast.Node) bool { + if unexportError != nil { + return false + } + ident, ok := node.(*ast.Ident) + if !ok { + return true + } + obj := info.ObjectOf(ident) + if _, ok := obj.(*types.PkgName); ok { + // Local package names are fine, since we can just reimport them. + return true + } + if pkg := obj.Pkg(); pkg != nil { + if !ast.IsExported(ident.Name) && pkg.Path() != wantPkg { + unexportError = fmt.Errorf("uses unexported identifier %s", obj.Name()) + return false + } + if obj.Parent() != nil && obj.Parent() != pkg.Scope() { + unexportError = fmt.Errorf("%s is not declared in package scope", obj.Name()) + return false + } + } + return true + }) + return unexportError +} + +var ( + errorType = types.Universe.Lookup("error").Type() + cleanupType = types.NewSignature(nil, nil, nil, false) +) diff --git a/vendor/github.com/google/wire/wire.go b/vendor/github.com/google/wire/wire.go new file mode 100644 index 0000000000..fe8edc8c8a --- /dev/null +++ b/vendor/github.com/google/wire/wire.go @@ -0,0 +1,196 @@ +// Copyright 2018 The Wire Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// https://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +// Package wire contains directives for Wire code generation. +// For an overview of working with Wire, see the user guide at +// https://github.com/google/wire/blob/master/docs/guide.md +// +// The directives in this package are used as input to the Wire code generation +// tool. The entry point of Wire's analysis are injector functions: function +// templates denoted by only containing a call to Build. The arguments to Build +// describes a set of providers and the Wire code generation tool builds a +// directed acylic graph of the providers' output types. The generated code will +// fill in the function template by using the providers from the provider set to +// instantiate any needed types. +package wire + +// ProviderSet is a marker type that collects a group of providers. +type ProviderSet struct{} + +// NewSet creates a new provider set that includes the providers in its +// arguments. Each argument is a function value, a provider set, a call to +// Struct, a call to Bind, a call to Value, a call to InterfaceValue or a call +// to FieldsOf. +// +// Passing a function value to NewSet declares that the function's first +// return value type will be provided by calling the function. The arguments +// to the function will come from the providers for their types. As such, all +// the function's parameters must be of non-identical types. The function may +// optionally return an error as its last return value and a cleanup function +// as the second return value. A cleanup function must be of type func() and is +// guaranteed to be called before the cleanup function of any of the +// provider's inputs. If any provider returns an error, the injector function +// will call all the appropriate cleanup functions and return the error from +// the injector function. +// +// Passing a ProviderSet to NewSet is the same as if the set's contents +// were passed as arguments to NewSet directly. +// +// The behavior of passing the result of a call to other functions in this +// package are described in their respective doc comments. +// +// For compatibility with older versions of Wire, passing a struct value of type +// S to NewSet declares that both S and *S will be provided by creating a new +// value of the appropriate type by filling in each field of S using the +// provider of the field's type. This form is deprecated and will be removed in +// a future version of Wire: new providers sets should use wire.Struct. +func NewSet(...interface{}) ProviderSet { + return ProviderSet{} +} + +// Build is placed in the body of an injector function template to declare the +// providers to use. The Wire code generation tool will fill in an +// implementation of the function. The arguments to Build are interpreted the +// same as NewSet: they determine the provider set presented to Wire's +// dependency graph. Build returns an error message that can be sent to a call +// to panic(). +// +// The parameters of the injector function are used as inputs in the dependency +// graph. +// +// Similar to provider functions passed into NewSet, the first return value is +// the output of the injector function, the optional second return value is a +// cleanup function, and the optional last return value is an error. 
If any of +// the provider functions in the injector function's provider set return errors +// or cleanup functions, the corresponding return value must be present in the +// injector function template. +// +// Examples: +// +// func injector(ctx context.Context) (*sql.DB, error) { +// wire.Build(otherpkg.FooSet, myProviderFunc) +// return nil, nil +// } +// +// func injector(ctx context.Context) (*sql.DB, error) { +// panic(wire.Build(otherpkg.FooSet, myProviderFunc)) +// } +func Build(...interface{}) string { + return "implementation not generated, run wire" +} + +// A Binding maps an interface to a concrete type. +type Binding struct{} + +// Bind declares that a concrete type should be used to satisfy a dependency on +// the type of iface. iface must be a pointer to an interface type, to must be a +// pointer to a concrete type. +// +// Example: +// +// type Fooer interface { +// Foo() +// } +// +// type MyFoo struct{} +// +// func (MyFoo) Foo() {} +// +// var MySet = wire.NewSet( +// wire.Struct(new(MyFoo)) +// wire.Bind(new(Fooer), new(MyFoo))) +func Bind(iface, to interface{}) Binding { + return Binding{} +} + +// bindToUsePointer is detected by the wire tool to indicate that Bind's second argument should take a pointer. +// See https://github.com/google/wire/issues/120 for details. +const bindToUsePointer = true + +// A ProvidedValue is an expression that is copied to the generated injector. +type ProvidedValue struct{} + +// Value binds an expression to provide the type of the expression. +// The expression may not be an interface value; use InterfaceValue for that. +// +// Example: +// +// var MySet = wire.NewSet(wire.Value([]string(nil))) +func Value(interface{}) ProvidedValue { + return ProvidedValue{} +} + +// InterfaceValue binds an expression to provide a specific interface type. +// The first argument is a pointer to the interface which user wants to provide. +// The second argument is the actual variable value whose type implements the +// interface. +// +// Example: +// +// var MySet = wire.NewSet(wire.InterfaceValue(new(io.Reader), os.Stdin)) +func InterfaceValue(typ interface{}, x interface{}) ProvidedValue { + return ProvidedValue{} +} + +// A StructProvider represents a named struct. +type StructProvider struct{} + +// Struct specifies that the given struct type will be provided by filling in +// the fields in the struct that have the names given. +// +// The first argument must be a pointer to the struct type. For a struct type +// Foo, Wire will use field-filling to provide both Foo and *Foo. The remaining +// arguments are field names to fill in. As a special case, if a single name "*" +// is given, then all of the fields in the struct will be filled in. +// +// For example: +// +// type S struct { +// MyFoo *Foo +// MyBar *Bar +// } +// var Set = wire.NewSet(wire.Struct(new(S), "MyFoo")) -> inject only S.MyFoo +// var Set = wire.NewSet(wire.Struct(new(S), "*")) -> inject all fields +func Struct(structType interface{}, fieldNames ...string) StructProvider { + return StructProvider{} +} + +// StructFields is a collection of the fields from a struct. +type StructFields struct{} + +// FieldsOf declares that the fields named of the given struct type will be used +// to provide the types of those fields. The structType argument must be a +// pointer to the struct or a pointer to a pointer to the struct it wishes to reference. 
+// +// The following example would provide Foo and Bar using S.MyFoo and S.MyBar respectively: +// +// type S struct { +// MyFoo Foo +// MyBar Bar +// } +// +// func NewStruct() S { /* ... */ } +// var Set = wire.NewSet(wire.FieldsOf(new(S), "MyFoo", "MyBar")) +// +// or +// +// func NewStruct() *S { /* ... */ } +// var Set = wire.NewSet(wire.FieldsOf(new(*S), "MyFoo", "MyBar")) +// +// If the structType argument is a pointer to a pointer to a struct, then FieldsOf +// additionally provides a pointer to each field type (e.g., *Foo and *Bar in the +// example above). +func FieldsOf(structType interface{}, fieldNames ...string) StructFields { + return StructFields{} +} diff --git a/vendor/modules.txt b/vendor/modules.txt index 7e362799f0..f9b620a160 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -180,8 +180,14 @@ github.com/google/go-cmp/cmp/internal/value github.com/google/go-containerregistry/pkg/name # github.com/google/gofuzz v1.1.0 github.com/google/gofuzz +# github.com/google/subcommands v1.0.1 +github.com/google/subcommands # github.com/google/uuid v1.1.1 github.com/google/uuid +# github.com/google/wire v0.4.0 +github.com/google/wire +github.com/google/wire/cmd/wire +github.com/google/wire/internal/wire # github.com/googleapis/gax-go/v2 v2.0.5 github.com/googleapis/gax-go/v2 # github.com/googleapis/gnostic v0.4.0 From 7e98cffbec2105df6abe275d8a23bb171477e2f8 Mon Sep 17 00:00:00 2001 From: capri-xiyue <52932582+capri-xiyue@users.noreply.github.com> Date: Tue, 5 May 2020 16:39:44 -0700 Subject: [PATCH 06/12] =?UTF-8?q?added=20e2e=20for=20gcp=20broker=20which?= =?UTF-8?q?=20sends=20and=20forwards=20events,=20refactored=20=E2=80=A6=20?= =?UTF-8?q?(#973)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * added e2e for gcp broker which sends and forwards events, refactored init sciprts and e2e tests * refactored the e2e and init scripts * changed some code structure,renamed variable, fixed nits * modified the comments --- hack/init_cloud_storage_source.sh | 38 ++++++ hack/init_control_plane.sh | 49 +++---- hack/init_control_plane_gke.sh | 81 +++++------ hack/init_data_plane.sh | 62 +++++---- hack/lib.sh | 170 +++++++++++++++++++++++ test/e2e-common.sh | 217 ++++++------------------------ test/e2e-secret-tests.sh | 106 +++++++++++++++ test/e2e-tests.sh | 8 +- test/e2e-wi-tests.sh | 172 +++++++++-------------- test/e2e/e2e_test.go | 7 + test/e2e/test_gcp_broker.go | 133 ++++++++++++++++++ test/lib.sh | 2 + 12 files changed, 658 insertions(+), 387 deletions(-) create mode 100755 hack/init_cloud_storage_source.sh create mode 100644 hack/lib.sh create mode 100644 test/e2e-secret-tests.sh create mode 100644 test/e2e/test_gcp_broker.go diff --git a/hack/init_cloud_storage_source.sh b/hack/init_cloud_storage_source.sh new file mode 100755 index 0000000000..5f9d04af5a --- /dev/null +++ b/hack/init_cloud_storage_source.sh @@ -0,0 +1,38 @@ +#!/usr/bin/env bash + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+# Usage: ./init_cloud_storage_source.sh [PROJECT_ID]
+# [PROJECT_ID] is an optional parameter to specify the project to use; defaults to `gcloud config get-value project`.
+# The script always uses the same service account called cre-pubsub.
+set -o errexit
+set -o nounset
+set -euo pipefail
+
+source $(dirname $0)/lib.sh
+
+readonly PUBSUB_SERVICE_ACCOUNT_KEY_TEMP="$(mktemp)"
+
+PROJECT_ID=${1:-$(gcloud config get-value project)}
+echo "PROJECT_ID used when init_cloud_storage_source is '${PROJECT_ID}'"
+
+# Download a JSON key for the service account.
+gcloud iam service-accounts keys create "${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP}" \
+ --iam-account="${PUBSUB_SERVICE_ACCOUNT}"@"${PROJECT_ID}".iam.gserviceaccount.com
+
+storage_admin_set_up "${PROJECT_ID}" "${PUBSUB_SERVICE_ACCOUNT}" "${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP}"
+
+# Remove the tmp file.
+rm "${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP}"
diff --git a/hack/init_control_plane.sh b/hack/init_control_plane.sh
index 737fc15371..8dfe15d2d0 100755
--- a/hack/init_control_plane.sh
+++ b/hack/init_control_plane.sh
@@ -15,40 +15,31 @@
 # limitations under the License.
 
 # Usage: ./init_control_plane.sh
-# The current project set in gcloud MUST be the same as where the cluster is running.
-
-NAMESPACE=cloud-run-events
-SERVICE_ACCOUNT=cloud-run-events
-PROJECT_ID=$(gcloud config get-value project)
-KEY_TEMP=google-cloud-key.json
-
-# Enable APIs.
-gcloud services enable pubsub.googleapis.com
-gcloud services enable storage-component.googleapis.com
-gcloud services enable storage-api.googleapis.com
-gcloud services enable cloudscheduler.googleapis.com
-gcloud services enable cloudbuild.googleapis.com
-gcloud services enable logging.googleapis.com
-gcloud services enable stackdriver.googleapis.com
-
-# Create the service account for the control plane
-gcloud iam service-accounts create ${SERVICE_ACCOUNT}
-
-# Grant permissions to the service account for the control plane to manage native GCP resources.
-gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/pubsub.admin
-gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.admin
-gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/cloudscheduler.admin
-gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/logging.configWriter
-gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/logging.privateLogViewer
+# [PROJECT_ID] is an optional parameter to specify the project to use; defaults to `gcloud config get-value project`.
+# The script always uses the same service account called cloud-run-events.
+set -o errexit
+set -o nounset
+set -euo pipefail
+
+source $(dirname $0)/lib.sh
+
+readonly CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP="$(mktemp)"
+
+PROJECT_ID=${1:-$(gcloud config get-value project)}
+echo "PROJECT_ID used when init_control_plane is '${PROJECT_ID}'"
+
+init_control_plane_service_account "${PROJECT_ID}" "${CONTROL_PLANE_SERVICE_ACCOUNT}"
 
 # Download a JSON key for the service account.
-gcloud iam service-accounts keys create ${KEY_TEMP} --iam-account=${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com +gcloud iam service-accounts keys create "${CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP}" \ + --iam-account="${CONTROL_PLANE_SERVICE_ACCOUNT}"@"${PROJECT_ID}".iam.gserviceaccount.com # Create/Patch the secret with the download JSON key in the control plane namespace -kubectl -n ${NAMESPACE} create secret generic google-cloud-key --from-file=key.json=${KEY_TEMP} --dry-run -o yaml | kubectl apply --filename - +kubectl --namespace "${CONTROL_PLANE_NAMESPACE}" create secret generic "${CONTROL_PLANE_SECRET_NAME}" \ + --from-file=key.json="${CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP}" --dry-run -o yaml | kubectl apply --filename - # Delete the controller pod in the control plane namespace to refresh the created/patched secret -kubectl delete pod -n ${NAMESPACE} --selector role=controller +kubectl delete pod -n "${CONTROL_PLANE_NAMESPACE}" --selector role=controller # Remove the tmp file. -rm ${KEY_TEMP} +rm "${CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP}" diff --git a/hack/init_control_plane_gke.sh b/hack/init_control_plane_gke.sh index 5cb3bbdee3..b1f6313f2b 100755 --- a/hack/init_control_plane_gke.sh +++ b/hack/init_control_plane_gke.sh @@ -14,52 +14,43 @@ # See the License for the specific language governing permissions and # limitations under the License. -# Usage: ./init_control_plane_gke.sh -# The current project set in gcloud MUST be the same as where the cluster is running. - -NAMESPACE=cloud-run-events -GSA_NAME=cloud-run-events -PROJECT_ID=$(gcloud config get-value project) -CLUSTER_NAME="$(cut -d'_' -f4 <<<"$(kubectl config current-context)")" -KSA_NAME=controller - -# Enable APIs. -gcloud services enable pubsub.googleapis.com -gcloud services enable storage-component.googleapis.com -gcloud services enable storage-api.googleapis.com -gcloud services enable cloudscheduler.googleapis.com -gcloud services enable logging.googleapis.com -gcloud services enable cloudbuild.googleapis.com -gcloud services enable stackdriver.googleapis.com -gcloud services enable iamcredentials.googleapis.com - -# Enable workload identity. -gcloud container clusters update ${CLUSTER_NAME} \ - --workload-pool=${PROJECT_ID}.svc.id.goog - -# Modify all node pools to enable GKE_METADATA. -pools=$(gcloud container node-pools list --cluster=${CLUSTER_NAME} --format="value(name)") -while read -r pool_name -do - gcloud container node-pools update "${pool_name}" \ - --cluster=${CLUSTER_NAME} \ - --workload-metadata=GKE_METADATA -done <<<"${pools}" - -# Create the service account for the control plane. -gcloud iam service-accounts create ${GSA_NAME} - -# Grant permissions to the service account for the control plane to manage native GCP resources. 
-gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/pubsub.admin -gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.admin -gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/cloudscheduler.admin -gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/logging.configWriter -gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/logging.privateLogViewer -gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/iam.serviceAccountAdmin +# Usage: ./init_control_plane_gke.sh [CLUSTER_NAME] [CLUSTER_LOCATION] [CLUSTER_LOCATION_TYPE] [PROJECT_ID] +# [CLUSTER_NAME] is an optional parameter to specify the cluster to use, default to `gcloud config get-value run/cluster`. +# [CLUSTER_LOCATION] is an optional parameter to specify the cluster location to use, default to `gcloud config get-value run/cluster_location`. +# [CLUSTER_LOCATION_TYPE] is an optional parameter to specify the cluster location type to use, default to `zonal`. CLUSTER_LOCATION_TYPE must be `zonal` or `regional`. +# [PROJECT_ID] is an optional parameter to specify the project to use, default to `gcloud config get-value project`. +# If user want to specify a parameter, user will also need to specify all parameters before that specific paramater +# The script always uses the same service account called cloud-run-events. +set -o errexit +set -o nounset +set -euo pipefail + +source $(dirname $0)/lib.sh + +readonly DEFAULT_CLUSTER_LOCATION_TYPE="zonal" + +CLUSTER_NAME=${1:-$(gcloud config get-value run/cluster)} +CLUSTER_LOCATION=${2:-$(gcloud config get-value run/cluster_location)} +CLUSTER_LOCATION_TYPE=${3:-$DEFAULT_CLUSTER_LOCATION_TYPE} +PROJECT_ID=${4:-$(gcloud config get-value project)} + +echo "CLUSTER_NAME used when init_control_plane_gke is'${CLUSTER_NAME}'" +echo "CLUSTER_LOCATION used when init_control_plane_gke is'${CLUSTER_LOCATION}'" +echo "CLUSTER_LOCATION_TYPE used when init_control_plane_gke is'${CLUSTER_LOCATION_TYPE}'" +echo "PROJECT_ID used when init_control_plane_gke is'${PROJECT_ID}'" + +init_control_plane_service_account "${PROJECT_ID}" "${CONTROL_PLANE_SERVICE_ACCOUNT}" +enable_workload_identity "${PROJECT_ID}" "${CONTROL_PLANE_SERVICE_ACCOUNT}" "${CLUSTER_NAME}" "${CLUSTER_LOCATION}" "${CLUSTER_LOCATION_TYPE}" # Allow the Kubernetes service account to use Google service account. -MEMBER="serviceAccount:"${PROJECT_ID}".svc.id.goog["${NAMESPACE}"/"${KSA_NAME}"]" -gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member $MEMBER ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com +MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${CONTROL_PLANE_NAMESPACE}/${K8S_CONTROLLER_SERVICE_ACCOUNT}]" +gcloud iam service-accounts add-iam-policy-binding \ + --role roles/iam.workloadIdentityUser \ + --member "$MEMBER" "${CONTROL_PLANE_SERVICE_ACCOUNT}"@"${PROJECT_ID}".iam.gserviceaccount.com # Add annotation to Kubernetes service account. 
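The member string follows the Workload Identity format serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/K8S_SERVICE_ACCOUNT]; with placeholder values it expands to something like:

    MEMBER="serviceAccount:my-project.svc.id.goog[cloud-run-events/controller]"   # my-project is a placeholder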
-kubectl annotate serviceaccount --namespace ${NAMESPACE} ${KSA_NAME} iam.gke.io/gcp-service-account=${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \ No newline at end of file +kubectl annotate --overwrite serviceaccount "${K8S_CONTROLLER_SERVICE_ACCOUNT}" iam.gke.io/gcp-service-account="${CONTROL_PLANE_SERVICE_ACCOUNT}"@"${PROJECT_ID}".iam.gserviceaccount.com \ + --namespace "${CONTROL_PLANE_NAMESPACE}" + +# Delete the controller pod in the control plane namespace to refresh +kubectl delete pod -n "${CONTROL_PLANE_NAMESPACE}" --selector role=controller diff --git a/hack/init_data_plane.sh b/hack/init_data_plane.sh index 66c56815a4..fc175191d5 100755 --- a/hack/init_data_plane.sh +++ b/hack/init_data_plane.sh @@ -14,38 +14,42 @@ # See the License for the specific language governing permissions and # limitations under the License. -# Usage: ./init_data_plane.sh [NAMESPACE] -# where [NAMESPACE] is an optional parameter to specify the namespace to use. If it's not specified, we use the default one. -# if the namespace does not exist, the script will create it. -# The current project set in gcloud MUST be the same as where the cluster is running. +# Usage: ./init_data_plane.sh [NAMESPACE] [PROJECT_ID] +# [NAMESPACE] is an optional parameter to specify the namespace to use, default to `default`. If the namespace does not exist, the script will create it. +# [PROJECT_ID] is an optional parameter to specify the project to use, default to `gcloud config get-value project`. +# If user wants to sepcify PROJECT_ID, user also need to specify NAMESPACE # The script always uses the same service account called cre-pubsub. - -SERVICE_ACCOUNT=cre-pubsub -KEY_TEMP=cre-pubsub.json -PROJECT_ID=$(gcloud config get-value project) -NAMESPACE=default -if [[ -z "$1" ]]; then - echo "NAMESPACE not provided, using default" -else - NAMESPACE="$1" - echo "NAMESPACE provided, using ${NAMESPACE}" - kubectl create namespace $NAMESPACE -fi - -# Create the service account for the data plane -gcloud iam service-accounts create ${SERVICE_ACCOUNT} - -# Grant pubsub.editor role to the service account for the data plane to read and/or write to Pub/Sub. -gcloud projects add-iam-policy-binding $PROJECT_ID --member=serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/pubsub.editor +set -o errexit +set -o nounset +set -euo pipefail + +source $(dirname $0)/lib.sh + +PUBSUB_SERVICE_ACCOUNT_KEY_TEMP="$(mktemp)" +DEFAULT_NAMESPACE="default" + +NAMESPACE=${1:-$DEFAULT_NAMESPACE} + # Create the namespace for the data plane if it doesn't exist +existing_namespace=$(kubectl get namespace "${NAMESPACE}") + if [ -z "${existing_namespace}" ]; then + echo "Create NAMESPACE'${NAMESPACE}' neeeded for the Data Plane" + kubectl create namespace "${NAMESPACE}" + else + echo "NAMESPACE needed for the Data Plane '${NAMESPACE}' already existed" + fi +echo "NAMESPACE used when init_data_plane is'${NAMESPACE}'" +PROJECT_ID=${2:-$(gcloud config get-value project)} +echo "PROJECT_ID used when init_data_plane is'${PROJECT_ID}'" + +init_pubsub_service_account "${PROJECT_ID}" "${PUBSUB_SERVICE_ACCOUNT}" # Download a JSON key for the service account. -gcloud iam service-accounts keys create ${KEY_TEMP} --iam-account=${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com - -# Create the secret with the download JSON key. 
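Because the arguments are positional, overriding the project also requires naming the namespace explicitly. A hedged usage sketch, with events-demo and my-project as placeholders:

    ./hack/init_data_plane.sh                           # namespace `default`, project from gcloud config
    ./hack/init_data_plane.sh events-demo               # custom namespace, default project
    ./hack/init_data_plane.sh events-demo my-project    # custom namespace and explicit project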
-kubectl --namespace $NAMESPACE create secret generic google-cloud-key --from-file=key.json=${KEY_TEMP} +gcloud iam service-accounts keys create "${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP}" \ + --iam-account="${PUBSUB_SERVICE_ACCOUNT}"@"${PROJECT_ID}".iam.gserviceaccount.com -# Label the namespace to inject a Broker. -kubectl label namespace $NAMESPACE knative-eventing-injection=enabled +# Create/Patch the secret with the download JSON key in the data plane namespace +kubectl --namespace "${NAMESPACE}" create secret generic ${PUBSUB_SECRET_NAME} \ + --from-file=key.json="${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP}" --dry-run -o yaml | kubectl apply --filename - # Remove the tmp file. -rm ${KEY_TEMP} +rm "${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP}" \ No newline at end of file diff --git a/hack/lib.sh b/hack/lib.sh new file mode 100644 index 0000000000..979cd2265c --- /dev/null +++ b/hack/lib.sh @@ -0,0 +1,170 @@ +#!/usr/bin/env bash + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +readonly CONTROL_PLANE_SERVICE_ACCOUNT="cloud-run-events" +readonly CONTROL_PLANE_NAMESPACE="cloud-run-events" + +readonly PUBSUB_SERVICE_ACCOUNT="cre-pubsub" + +readonly SERVICE_ACCOUNT_EMAIL_KEY="EMAIL" + +readonly ZONAL_CLUSTER_LOCATION_TYPE="zonal" +readonly REGIONAL_CLUSTER_LOCATION_TYPE="regional" + +# Constants used for both init_XXX.sh and e2e-xxx.sh +export K8S_CONTROLLER_SERVICE_ACCOUNT="controller" +export CONTROL_PLANE_SECRET_NAME="google-cloud-key" +export PUBSUB_SECRET_NAME="google-cloud-key" + +function init_control_plane_service_account() { + local project_id=${1} + local control_plane_service_account=${2} + + echo "parameter project_id used when initiating control plane service account is'${project_id}'" + echo "parameter control_plane_service_account used when initiating control plane service account is'${control_plane_service_account}'" + + # Enable APIs. + gcloud services enable pubsub.googleapis.com + gcloud services enable storage-component.googleapis.com + gcloud services enable storage-api.googleapis.com + gcloud services enable cloudscheduler.googleapis.com + gcloud services enable cloudbuild.googleapis.com + gcloud services enable logging.googleapis.com + gcloud services enable stackdriver.googleapis.com + # Create the service account for the control plane if it doesn't exist + existing_control_plane_service_account=$(gcloud iam service-accounts list \ + --filter="${SERVICE_ACCOUNT_EMAIL_KEY} ~ ^${control_plane_service_account}@") + if [ -z "${existing_control_plane_service_account}" ]; then + echo "Create Service Account '${control_plane_service_account}' neeeded for the Control Plane" + gcloud iam service-accounts create ${control_plane_service_account} + else + echo "Service Account needed for the Control Plane '${control_plane_service_account}' already existed" + fi + + # Grant permissions to the service account for the control plane to manage native GCP resources. 
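The existence check filters `gcloud iam service-accounts list` by the email prefix so that `create` only runs the first time. An equivalent probe, sketched here with a placeholder project, is to `describe` the account and branch on the exit code:

    SA="cloud-run-events@my-project.iam.gserviceaccount.com"    # placeholder project
    if ! gcloud iam service-accounts describe "${SA}" >/dev/null 2>&1; then
      gcloud iam service-accounts create cloud-run-events
    fi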
+ echo "Set up Service Account used by the Control Plane" + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${control_plane_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/pubsub.admin + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${control_plane_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/storage.admin + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${control_plane_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/cloudscheduler.admin + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${control_plane_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/logging.configWriter + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${control_plane_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/logging.privateLogViewer + +} + +function init_pubsub_service_account() { + local project_id=${1} + local pubsub_service_account=${2} + echo "parameter project_id used when initiating pubsub service account is'${project_id}'" + echo "parameter control_plane_service_account used when initiating pubsub service account is'${pubsub_service_account}'" + # Enable APIs. + gcloud services enable pubsub.googleapis.com + + # Create the pubsub service account for the data plane if it doesn't exist + existing_pubsub_service_account=$(gcloud iam service-accounts list \ + --filter="${SERVICE_ACCOUNT_EMAIL_KEY} ~ ^${pubsub_service_account}@") + if [ -z "${existing_pubsub_service_account}" ]; then + echo "Create PubSub Service Account '${pubsub_service_account}' neeeded for the Data Plane" + gcloud iam service-accounts create ${pubsub_service_account} + else + echo "PubSub Service Account '${pubsub_service_account}' needed for the Data Plane already existed" + fi + + # Grant pubsub.editor role to the service account for the data plane to read and/or write to Pub/Sub. 
+ gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${pubsub_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/pubsub.editor + +} + +function enable_workload_identity(){ + local project_id=${1} + local control_plane_service_account=${2} + local cluster_name=${3} + local cluster_location=${4} + local cluster_location_type=${5} + + # Print and Verify parameters + echo "parameter project_id used when enabling workload identity is'${project_id}'" + echo "parameter control_plane_service_account used when enabling workload identity is'${control_plane_service_account}'" + echo "parameter cluster_name used when enabling workload identity is'${cluster_name}'" + echo "parameter cluster_location used when enabling workload identity is'${cluster_location}'" + echo "parameter cluster_location_type used when enabling workload identity is'${cluster_location_type}'" + + local cluster_location_option + if [[ ${cluster_location_type} == "${ZONAL_CLUSTER_LOCATION_TYPE}" ]]; then + cluster_location_option=zone + elif [[ ${cluster_location_type} == "${REGIONAL_CLUSTER_LOCATION_TYPE}" ]]; then + cluster_location_option=region + else + echo >&2 "Fatal error: cluster_location_type used when enabling workload identity must be '${ZONAL_CLUSTER_LOCATION_TYPE}' or '${REGIONAL_CLUSTER_LOCATION_TYPE}'" + exit 1 + fi + echo "cluster_location_option used when enabling workload identity is'${cluster_location_option}'" + + # Enable API + gcloud services enable iamcredentials.googleapis.com + # Enable workload identity. + echo "Enable Workload Identity" + gcloud container clusters update ${cluster_name} \ + --${cluster_location_option}=${cluster_location} \ + --workload-pool=${project_id}.svc.id.goog + + # Modify all node pools to enable GKE_METADATA. 
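Once the update finishes, the workload pool can be read back from the cluster to confirm Workload Identity is on; a hedged check with placeholder cluster and zone names:

    gcloud container clusters describe my-cluster --zone us-central1-a \
      --format="value(workloadIdentityConfig.workloadPool)"    # expect my-project.svc.id.goog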
+ echo "Modify all node pools to enable GKE_METADATA" + pools=$(gcloud container node-pools list --cluster=${cluster_name} --${cluster_location_option}=${cluster_location} --format="value(name)") + while read -r pool_name + do + gcloud container node-pools update "${pool_name}" \ + --cluster=${cluster_name} \ + --${cluster_location_option}=${cluster_location} \ + --workload-metadata=GKE_METADATA + done <<<"${pools}" + + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${control_plane_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/iam.serviceAccountAdmin + } + +function storage_admin_set_up() { + echo "Update ServiceAccount for Storage Admin" + local project_id=${1} + local pubsub_service_account=${2} + local pubsub_service_account_key_temp=${3} + + echo "parameter project_id used when setting up storage admin is'${project_id}'" + echo "parameter pubsub_service_account used when setting up storage admin is'${pubsub_service_account}'" + echo "parameter pubsub_service_account_key_temp used when setting up storage admin is'${pubsub_service_account_key_temp}'" + echo "Update ServiceAccount for Storage Admin" + gcloud services enable storage-component.googleapis.com + gcloud services enable storage-api.googleapis.com + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${pubsub_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/storage.admin + export GCS_SERVICE_ACCOUNT=`curl -s -X GET -H "Authorization: Bearer \`GOOGLE_APPLICATION_CREDENTIALS=${pubsub_service_account_key_temp} gcloud auth application-default print-access-token\`" "https://www.googleapis.com/storage/v1/projects/${project_id}/serviceAccount" | grep email_address | cut -d '"' -f 4` + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:"${GCS_SERVICE_ACCOUNT}" \ + --role roles/pubsub.publisher +} \ No newline at end of file diff --git a/test/e2e-common.sh b/test/e2e-common.sh index 5a5295a4a5..40e2d74686 100755 --- a/test/e2e-common.sh +++ b/test/e2e-common.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env bash # Copyright 2019 Google LLC # @@ -16,48 +16,36 @@ # This script includes common functions for testing setup and teardown. -source $(dirname $0)/../vendor/knative.dev/test-infra/scripts/e2e-tests.sh - -source $(dirname $0)/lib.sh - -# random6 returns 6 random letters. -function random6() { - go run github.com/google/knative-gcp/test/cmd/randstr/ --length=6 -} - # If gcloud is not available make it a no-op, not an error. which gcloud &> /dev/null || gcloud() { echo "[ignore-gcloud $*]" 1>&2; } -# Vendored eventing test iamges. -readonly VENDOR_EVENTING_TEST_IMAGES="vendor/knative.dev/eventing/test/test_images/" +# Constants used for creating ServiceAccount for the Control Plane if it's not running on Prow. +readonly CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW="cloud-run-events" -# Eventing main config. -readonly E2E_TEST_NAMESPACE="default" -readonly CONTROL_PLANE_NAMESPACE="cloud-run-events" +# Constants used for creating ServiceAccount for Data Plane(Pub/Sub Admin) if it's not running on Prow. +readonly PUBSUB_SERVICE_ACCOUNT_NON_PROW="cre-pubsub" -# Constants used for creating ServiceAccount for the Control-Plane if it's not running on Prow. -readonly CONTROL_PLANE_SERVICE_ACCOUNT="e2e-cr-events-test-$(random6)" -readonly CONTROL_PLANE_SERVICE_ACCOUNT_KEY="$(mktemp)" -readonly CONTROL_PLANE_SECRET_NAME="google-cloud-key" +# Vendored eventing test iamges. 
+readonly VENDOR_EVENTING_TEST_IMAGES="vendor/knative.dev/eventing/test/test_images/" -# Constants used for creating ServiceAccount for Pub/Sub Admin if it's not running on Prow. -readonly PUBSUB_SERVICE_ACCOUNT="e2e-pubsub-test-$(random6)" -readonly PUBSUB_SERVICE_ACCOUNT_KEY="$(mktemp)" -readonly PUBSUB_SECRET_NAME="google-cloud-key" +# Constants used for authentication setup for GCP Broker if it's not running on Prow. +readonly APP_ENGINE_REGION="us-central" # Setup Knative GCP. function knative_setup() { - control_plane_setup || return 1 start_knative_gcp || return 1 + export_variable || return 1 + control_plane_setup || return 1 } -# Setup resources common to all eventing tests. -function test_setup() { - pubsub_setup || return 1 - storage_setup || return 1 - echo "Sleep 2 mins to wait for all resources to setup" - sleep 120 +# Tear down tmp files which store the private key. +function test_teardown() { + if (( ! IS_PROW )); then + rm ${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP} + fi +} +function publish_test_images() { # Publish test images. echo ">> Publishing test images" sed -i 's@ko://knative.dev/eventing/test/test_images@ko://github.com/google/knative-gcp/vendor/knative.dev/eventing/test/test_images@g' vendor/knative.dev/eventing/test/test_images/*/*.yaml @@ -65,62 +53,27 @@ function test_setup() { $(dirname $0)/upload-test-images.sh "test/test_images" e2e || fail_test "Error uploading test images from knative-gcp" } -# Tear down Knative GCP. -# Note we only delete the gcloud service account and iam policy bindings here, as if the cluster is deleted, -# other resources created on the cluster will automatically be gone. -function knative_teardown() { - control_plane_teardown +# Create resources required for CloudSchedulerSource. +function create_app_engine() { + echo "Create App Engine with region US-central needed for CloudSchedulerSource" + # Please rememeber the region of App Engine and the location of CloudSchedulerSource defined in e2e tests(./test_scheduler.go) should be consistent. + gcloud app create --region=${APP_ENGINE_REGION} || echo "AppEngine app with region ${APP_ENGINE_REGION} probably already exists, ignoring..." } -# Tear down resources common to all eventing tests. -function test_teardown() { - pubsub_teardown - storage_teardown +function scheduler_setup() { + if (( ! IS_PROW )); then + create_app_engine + fi } -# Create resources required for the Control Plane setup. -function control_plane_setup() { - local service_account_key="${GOOGLE_APPLICATION_CREDENTIALS}" - # When not running on Prow we need to set up a service account for managing resources. +# Create resources required for Storage Admin setup. +function storage_setup() { if (( ! 
IS_PROW )); then - echo "Set up ServiceAccount used by the Control Plane" - gcloud iam service-accounts create ${CONTROL_PLANE_SERVICE_ACCOUNT} - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.admin - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/storage.admin - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.configWriter - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.privateLogViewer - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - gcloud iam service-accounts keys create ${CONTROL_PLANE_SERVICE_ACCOUNT_KEY} \ - --iam-account=${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - service_account_key="${CONTROL_PLANE_SERVICE_ACCOUNT_KEY}" + storage_admin_set_up ${E2E_PROJECT_ID} ${PUBSUB_SERVICE_ACCOUNT_NON_PROW} ${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP} fi - echo "Create the control plane namespace" - kubectl create namespace ${CONTROL_PLANE_NAMESPACE} - echo "Create the control plane secret" - kubectl -n ${CONTROL_PLANE_NAMESPACE} create secret generic ${CONTROL_PLANE_SECRET_NAME} --from-file=key.json=${service_account_key} } -# Create resources required for Pub/Sub Admin setup. -function pubsub_setup() { - # If the tests are run on Prow, clean up the topics and subscriptions before running them. - # See https://github.com/google/knative-gcp/issues/494 - if (( IS_PROW )); then +function delete_topics_and_subscriptions() { subs=$(gcloud pubsub subscriptions list --format="value(name)") while read -r sub_name do @@ -131,103 +84,19 @@ function pubsub_setup() { do gcloud pubsub topics delete "${topic_name}" done <<<"$topics" - fi - - local service_account_key="${GOOGLE_APPLICATION_CREDENTIALS}" - # When not running on Prow we need to set up a service account for PubSub - if (( ! 
IS_PROW )); then - # Enable monitoring - gcloud services enable monitoring - echo "Set up ServiceAccount for Pub/Sub Admin" - gcloud services enable pubsub.googleapis.com - gcloud iam service-accounts create ${PUBSUB_SERVICE_ACCOUNT} - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/monitoring.editor - gcloud iam service-accounts keys create ${PUBSUB_SERVICE_ACCOUNT_KEY} \ - --iam-account=${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - service_account_key="${PUBSUB_SERVICE_ACCOUNT_KEY}" - fi - kubectl -n ${E2E_TEST_NAMESPACE} create secret generic ${PUBSUB_SECRET_NAME} --from-file=key.json=${service_account_key} -} - -# Create resources required for Storage Admin setup. -function storage_setup() { - if (( ! IS_PROW )); then - echo "Update ServiceAccount for Storage Admin" - gcloud services enable storage-component.googleapis.com - gcloud services enable storage-api.googleapis.com - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/storage.admin - export GCS_SERVICE_ACCOUNT=`curl -s -X GET -H "Authorization: Bearer \`GOOGLE_APPLICATION_CREDENTIALS=${PUBSUB_SERVICE_ACCOUNT_KEY} gcloud auth application-default print-access-token\`" "https://www.googleapis.com/storage/v1/projects/${E2E_PROJECT_ID}/serviceAccount" | grep email_address | cut -d '"' -f 4` - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${GCS_SERVICE_ACCOUNT} \ - --role roles/pubsub.publisher - fi } -# Tear down resources required for Pub/Sub Admin setup. -function pubsub_teardown() { - # When not running on Prow we need to delete the service accounts and namespaces created. - if (( ! IS_PROW )); then - echo "Tear down ServiceAccount for Pub/Sub Admin" - gcloud iam service-accounts keys delete -q ${PUBSUB_SERVICE_ACCOUNT_KEY} \ - --iam-account=${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/monitoring.editor - fi -} - -# Tear down resources required for Storage Admin setup. -function storage_teardown() { - if (( ! 
IS_PROW )); then - echo "Tear down ServiceAccount for Storage Admin" - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/storage.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${GCS_SERVICE_ACCOUNT} \ - --role roles/pubsub.publisher - gcloud iam service-accounts delete -q ${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - fi +function enable_monitoring(){ + local project_id=${1} + local pubsub_service_account=${2} + + echo "parameter project_id used when enabling monitoring is'${project_id}'" + echo "parameter control_plane_service_account used when enabling monitoring is'${pubsub_service_account}'" + # Enable monitoring + echo "Enable Monitoring" + gcloud services enable monitoring + gcloud projects add-iam-policy-binding ${project_id} \ + --member=serviceAccount:${pubsub_service_account}@${project_id}.iam.gserviceaccount.com \ + --role roles/monitoring.editor } -# Tear down resources required for Control Plane setup. -function control_plane_teardown() { - # When not running on Prow we need to delete the service accounts and namespaces created - if (( ! IS_PROW )); then - echo "Tear down ServiceAccount for Control Plane" - gcloud iam service-accounts keys delete -q ${CONTROL_PLANE_SERVICE_ACCOUNT_KEY} \ - --iam-account=${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/storage.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.configWriter - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.privateLogViewer - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - gcloud iam service-accounts delete -q ${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - fi -} diff --git a/test/e2e-secret-tests.sh b/test/e2e-secret-tests.sh new file mode 100644 index 0000000000..639bf8ae91 --- /dev/null +++ b/test/e2e-secret-tests.sh @@ -0,0 +1,106 @@ +#!/usr/bin/env bash + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +source $(dirname $0)/../vendor/knative.dev/test-infra/scripts/e2e-tests.sh + +source $(dirname $0)/lib.sh + +source $(dirname $0)/../hack/lib.sh + +source $(dirname $0)/e2e-common.sh + +# Eventing main config. +readonly E2E_TEST_NAMESPACE="default" + +# Constants used for creating ServiceAccount for the Control Plane if it's not running on Prow. +readonly CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP="$(mktemp)" + +# Constants used for creating ServiceAccount for Data Plane(Pub/Sub Admin) if it's not running on Prow. +readonly PUBSUB_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP="$(mktemp)" + +# Constants used for authentication setup for GCP Broker if it's not running on Prow. +readonly GCP_BROKER_SECRET_NAME="google-broker-key" + +function export_variable() { + if (( ! IS_PROW )); then + readonly CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP="${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP}" + readonly PUBSUB_SERVICE_ACCOUNT_KEY_TEMP="${PUBSUB_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP}" + else + readonly CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP="${GOOGLE_APPLICATION_CREDENTIALS}" + readonly PUBSUB_SERVICE_ACCOUNT_KEY_TEMP="${GOOGLE_APPLICATION_CREDENTIALS}" + fi +} + +# Setup resources common to all eventing tests. +function test_setup() { + pubsub_setup || return 1 + gcp_broker_setup || return 1 + storage_setup || return 1 + scheduler_setup || return 1 + echo "Sleep 2 mins to wait for all resources to setup" + sleep 120 + + # Publish test images. + publish_test_images +} + +# Tear down tmp files which store the private key. +function knative_teardown() { + if (( ! IS_PROW )); then + rm ${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP} + fi +} + +# Create resources required for the Control Plane setup. +function control_plane_setup() { + # When not running on Prow we need to set up a service account for managing resources. + if (( ! IS_PROW )); then + echo "Set up ServiceAccount used by the Control Plane" + init_control_plane_service_account ${E2E_PROJECT_ID} ${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW} + gcloud iam service-accounts keys create ${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP} \ + --iam-account=${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW}@${E2E_PROJECT_ID}.iam.gserviceaccount.com + fi + echo "Create the control plane secret" + kubectl -n ${CONTROL_PLANE_NAMESPACE} create secret generic ${CONTROL_PLANE_SECRET_NAME} --from-file=key.json=${CONTROL_PLANE_SERVICE_ACCOUNT_KEY_TEMP} + echo "Delete the controller pod in the namespace '${CONTROL_PLANE_NAMESPACE}' to refresh the created/patched secret" + kubectl delete pod -n ${CONTROL_PLANE_NAMESPACE} --selector role=controller +} + +# Create resources required for Pub/Sub Admin setup. +function pubsub_setup() { + # If the tests are run on Prow, clean up the topics and subscriptions before running them. + # See https://github.com/google/knative-gcp/issues/494 + if (( IS_PROW )); then + delete_topics_and_subscriptions + fi + + # When not running on Prow we need to set up a service account for PubSub + if (( ! 
IS_PROW )); then + echo "Set up ServiceAccount for Pub/Sub Admin" + init_pubsub_service_account ${E2E_PROJECT_ID} ${PUBSUB_SERVICE_ACCOUNT_NON_PROW} + enable_monitoring ${E2E_PROJECT_ID} ${PUBSUB_SERVICE_ACCOUNT_NON_PROW} + gcloud iam service-accounts keys create ${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP} \ + --iam-account=${PUBSUB_SERVICE_ACCOUNT_NON_PROW}@${E2E_PROJECT_ID}.iam.gserviceaccount.com + fi + kubectl -n ${E2E_TEST_NAMESPACE} create secret generic ${PUBSUB_SECRET_NAME} --from-file=key.json=${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP} +} + +# Create resources required for GCP Broker authentication setup. +function gcp_broker_setup() { + echo "Authentication setup for GCP Broker" + kubectl -n ${CONTROL_PLANE_NAMESPACE} create secret generic ${GCP_BROKER_SECRET_NAME} --from-file=key.json=${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP} +} + diff --git a/test/e2e-tests.sh b/test/e2e-tests.sh index f8e0d52571..620a1ac477 100755 --- a/test/e2e-tests.sh +++ b/test/e2e-tests.sh @@ -1,6 +1,6 @@ #!/usr/bin/env bash -# Copyright 2019 Google LLC +# Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -13,13 +13,11 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - -source $(dirname $0)/e2e-common.sh - # Script entry point. +source $(dirname $0)/e2e-secret-tests.sh initialize $@ go_test_e2e -timeout=20m -parallel=12 ./test/e2e -channels=messaging.cloud.google.com/v1alpha1:Channel || fail_test -success +success \ No newline at end of file diff --git a/test/e2e-wi-tests.sh b/test/e2e-wi-tests.sh index adeb6665ec..3182ef54e5 100755 --- a/test/e2e-wi-tests.sh +++ b/test/e2e-wi-tests.sh @@ -13,72 +13,67 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +source $(dirname $0)/../vendor/knative.dev/test-infra/scripts/e2e-tests.sh + +source $(dirname $0)/lib.sh + +source $(dirname $0)/../hack/lib.sh source $(dirname $0)/e2e-common.sh -# Override the setup and teardown functions to install wi-enabled control plane components. -readonly K8S_CONTROLLER_SERVICE_ACCOUNT="controller" -readonly PROW_SERVICE_ACCOUNT=$(gcloud config get-value core/account) +readonly BROKER_SERVICE_ACCOUNT="broker" +readonly PROW_SERVICE_ACCOUNT_EMAIL=$(gcloud config get-value core/account) +# Constants used for creating ServiceAccount for Data Plane(Pub/Sub Admin) if it's not running on Prow. +readonly PUBSUB_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP="$(mktemp)" +function export_variable() { if (( ! 
IS_PROW )); then - readonly AUTHENTICATED_SERVICE_ACCOUNT="${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com" + readonly CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL="${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW}@${E2E_PROJECT_ID}.iam.gserviceaccount.com" readonly MEMBER="serviceAccount:${E2E_PROJECT_ID}.svc.id.goog[${CONTROL_PLANE_NAMESPACE}/${K8S_CONTROLLER_SERVICE_ACCOUNT}]" - readonly DATA_PLANE_SERVICE_ACCOUNT="${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com" + readonly BROKER_MEMBER="serviceAccount:${E2E_PROJECT_ID}.svc.id.goog[${CONTROL_PLANE_NAMESPACE}/${BROKER_SERVICE_ACCOUNT}]" + readonly PUBSUB_SERVICE_ACCOUNT_EMAIL="${PUBSUB_SERVICE_ACCOUNT_NON_PROW}@${E2E_PROJECT_ID}.iam.gserviceaccount.com" + readonly PUBSUB_SERVICE_ACCOUNT_KEY_TEMP="${PUBSUB_SERVICE_ACCOUNT_NON_PROW_KEY_TEMP}" else - readonly AUTHENTICATED_SERVICE_ACCOUNT=${PROW_SERVICE_ACCOUNT} + readonly CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL=${PROW_SERVICE_ACCOUNT_EMAIL} readonly MEMBER="serviceAccount:${PROJECT}.svc.id.goog[${CONTROL_PLANE_NAMESPACE}/${K8S_CONTROLLER_SERVICE_ACCOUNT}]" + readonly BROKER_MEMBER="serviceAccount:${PROJECT}.svc.id.goog[${CONTROL_PLANE_NAMESPACE}/${BROKER_SERVICE_ACCOUNT}]" # Get the PROW service account. - readonly PROW_PROJECT_NAME=$(cut -d'.' -f1 <<< $(cut -d'@' -f2 <<< ${AUTHENTICATED_SERVICE_ACCOUNT})) - readonly DATA_PLANE_SERVICE_ACCOUNT=${AUTHENTICATED_SERVICE_ACCOUNT} + readonly PROW_PROJECT_NAME=$(cut -d'.' -f1 <<< $(cut -d'@' -f2 <<< ${PROW_SERVICE_ACCOUNT_EMAIL})) + readonly PUBSUB_SERVICE_ACCOUNT_EMAIL=${PROW_SERVICE_ACCOUNT_EMAIL} + readonly PUBSUB_SERVICE_ACCOUNT_KEY_TEMP="${GOOGLE_APPLICATION_CREDENTIALS}" fi +} -# Create resources required for the Control Plane setup. -function knative_setup() { - control_plane_setup || return 1 - start_knative_gcp || return 1 - kubectl annotate serviceaccount ${K8S_CONTROLLER_SERVICE_ACCOUNT} iam.gke.io/gcp-service-account=${AUTHENTICATED_SERVICE_ACCOUNT} \ - --namespace ${CONTROL_PLANE_NAMESPACE} +# Setup resources common to all eventing tests. +function test_setup() { + pubsub_setup || return 1 + gcp_broker_setup || return 1 + # Create private key that will be used in storage_setup + create_private_key_for_pubsub_service_account || return 1 + storage_setup || return 1 + scheduler_setup || return 1 + echo "Sleep 2 mins to wait for all resources to setup" + sleep 120 + + # Publish test images. + publish_test_images } function control_plane_setup() { # When not running on Prow we need to set up a service account for managing resources. if (( ! IS_PROW )); then echo "Set up ServiceAccount used by the Control Plane" - # Enable iamcredentials.googleapis.com service for Workload Identity. 
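The nested cut calls derive the Prow project ID from the service-account email; a worked sketch with a placeholder email:

    EMAIL="prow-job@my-prow-project.iam.gserviceaccount.com"    # placeholder
    cut -d'.' -f1 <<< "$(cut -d'@' -f2 <<< "${EMAIL}")"         # -> my-prow-project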
- gcloud services enable iamcredentials.googleapis.com - gcloud iam service-accounts create ${CONTROL_PLANE_SERVICE_ACCOUNT} - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.admin - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/storage.admin - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.configWriter - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.privateLogViewer - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - # Give iam.serviceAccountAdmin role to the Google service account. - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/iam.serviceAccountAdmin + init_control_plane_service_account ${E2E_PROJECT_ID} ${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW} + local cluster_name="$(cut -d'_' -f4 <<<"$(kubectl config current-context)")" + local cluster_location="$(cut -d'_' -f3 <<<"$(kubectl config current-context)")" + enable_workload_identity ${E2E_PROJECT_ID} ${CONTROL_PLANE_SERVICE_ACCOUNT_NON_PROW} ${cluster_name} ${cluster_location} ${REGIONAL_CLUSTER_LOCATION_TYPE} gcloud iam service-accounts add-iam-policy-binding \ --role roles/iam.workloadIdentityUser \ - --member ${MEMBER} ${AUTHENTICATED_SERVICE_ACCOUNT} + --member ${MEMBER} ${CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL} else # If the tests are run on Prow, clean up the member for roles/iam.workloadIdentityUser before running it. members=$(gcloud iam service-accounts get-iam-policy \ - --project=${PROW_PROJECT_NAME} ${AUTHENTICATED_SERVICE_ACCOUNT} \ + --project=${PROW_PROJECT_NAME} ${CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL} \ --format="value(bindings.members)" \ --filter="bindings.role:roles/iam.workloadIdentityUser" \ --flatten="bindings[].members") @@ -89,15 +84,19 @@ function control_plane_setup() { gcloud iam service-accounts remove-iam-policy-binding \ --role roles/iam.workloadIdentityUser \ --member ${member_name} \ - --project ${PROW_PROJECT_NAME} ${AUTHENTICATED_SERVICE_ACCOUNT} + --project ${PROW_PROJECT_NAME} ${CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL} fi done <<< "$members" # Allow the Kubernetes service account to use Google service account. 
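The cluster name and location are recovered from the kubeconfig context, which `gcloud container clusters get-credentials` names as gke_<project>_<location>_<cluster>; a worked sketch with placeholder values:

    CTX="gke_my-project_us-central1_events-e2e"    # placeholder context name
    cut -d'_' -f3 <<< "${CTX}"                     # -> us-central1  (cluster location)
    cut -d'_' -f4 <<< "${CTX}"                     # -> events-e2e   (cluster name)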
gcloud iam service-accounts add-iam-policy-binding \ --role roles/iam.workloadIdentityUser \ --member ${MEMBER} \ - --project ${PROW_PROJECT_NAME} ${AUTHENTICATED_SERVICE_ACCOUNT} + --project ${PROW_PROJECT_NAME} ${CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL} fi + kubectl annotate --overwrite serviceaccount ${K8S_CONTROLLER_SERVICE_ACCOUNT} iam.gke.io/gcp-service-account=${CONTROL_PLANE_SERVICE_ACCOUNT_EMAIL} \ + --namespace ${CONTROL_PLANE_NAMESPACE} + echo "Delete the controller pod in the namespace '${CONTROL_PLANE_NAMESPACE}' to refresh " + kubectl delete pod -n ${CONTROL_PLANE_NAMESPACE} --selector role=controller } # Create resources required for Pub/Sub Admin setup. @@ -105,16 +104,7 @@ function pubsub_setup() { # If the tests are run on Prow, clean up the topics and subscriptions before running them. # See https://github.com/google/knative-gcp/issues/494 if (( IS_PROW )); then - subs=$(gcloud pubsub subscriptions list --format="value(name)") - while read -r sub_name - do - gcloud pubsub subscriptions delete "${sub_name}" - done <<<"$subs" - topics=$(gcloud pubsub topics list --format="value(name)") - while read -r topic_name - do - gcloud pubsub topics delete "${topic_name}" - done <<<"$topics" + delete_topics_and_subscriptions fi # When not running on Prow we need to set up a service account for PubSub. @@ -122,60 +112,32 @@ function pubsub_setup() { # Enable monitoring gcloud services enable monitoring echo "Set up ServiceAccount for Pub/Sub Admin" - gcloud services enable pubsub.googleapis.com - gcloud iam service-accounts create ${PUBSUB_SERVICE_ACCOUNT} - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects add-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/monitoring.editor - gcloud iam service-accounts keys create ${PUBSUB_SERVICE_ACCOUNT_KEY} \ - --iam-account=${PUBSUB_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - service_account_key="${PUBSUB_SERVICE_ACCOUNT_KEY}" + init_pubsub_service_account ${E2E_PROJECT_ID} ${PUBSUB_SERVICE_ACCOUNT_NON_PROW} + enable_monitoring ${E2E_PROJECT_ID} ${PUBSUB_SERVICE_ACCOUNT_NON_PROW} fi } -# Tear down resources required for Control Plane setup. -function control_plane_teardown() { - # When not running on Prow we need to delete the service accounts and namespaces created. +# Create resources required for GCP Broker authentication setup. +function gcp_broker_setup() { + echo "Authentication setup for GCP Broker" if (( ! 
IS_PROW )); then - echo "Tear down ServiceAccount for Control Plane" - gcloud iam service-accounts keys delete -q ${CONTROL_PLANE_SERVICE_ACCOUNT_KEY} \ - --iam-account=${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/pubsub.editor - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/storage.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.configWriter - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/logging.privateLogViewer - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/cloudscheduler.admin - # Remove iam.serviceAccountAdmin role to the Google service account. - gcloud projects remove-iam-policy-binding ${E2E_PROJECT_ID} \ - --member=serviceAccount:${CONTROL_PLANE_SERVICE_ACCOUNT}@${E2E_PROJECT_ID}.iam.gserviceaccount.com \ - --role roles/iam.serviceAccountAdmin - gcloud iam service-accounts remove-iam-policy-binding \ - --role roles/iam.workloadIdentityUser \ - --member ${MEMBER} ${AUTHENTICATED_SERVICE_ACCOUNT} + gcloud iam service-accounts add-iam-policy-binding \ + --role roles/iam.workloadIdentityUser \ + --member ${BROKER_MEMBER} ${PUBSUB_SERVICE_ACCOUNT_EMAIL} else - gcloud iam service-accounts remove-iam-policy-binding \ + gcloud iam service-accounts add-iam-policy-binding \ --role roles/iam.workloadIdentityUser \ - --member ${MEMBER} \ - --project ${PROW_PROJECT_NAME} ${AUTHENTICATED_SERVICE_ACCOUNT} + --member ${BROKER_MEMBER} \ + --project ${PROW_PROJECT_NAME} ${PUBSUB_SERVICE_ACCOUNT_EMAIL} + fi + kubectl annotate --overwrite serviceaccount ${BROKER_SERVICE_ACCOUNT} iam.gke.io/gcp-service-account=${PUBSUB_SERVICE_ACCOUNT_EMAIL} \ + --namespace ${CONTROL_PLANE_NAMESPACE} +} + +function create_private_key_for_pubsub_service_account { + if (( ! IS_PROW )); then + gcloud iam service-accounts keys create ${PUBSUB_SERVICE_ACCOUNT_KEY_TEMP} \ + --iam-account=${PUBSUB_SERVICE_ACCOUNT_EMAIL} fi } @@ -183,6 +145,6 @@ function control_plane_teardown() { initialize $@ --cluster-creation-flag "--workload-pool=\${PROJECT}.svc.id.goog" # Channel related e2e tests we have in Eventing is not running here. 
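The whole suite is driven by the go_test_e2e invocation just below; for iterating on a single case such as the new TestGCPBroker, a hedged sketch (assuming the tests keep the e2e build tag that go_test_e2e passes, with a placeholder service-account email):

    go test -tags=e2e -count=1 -timeout=30m ./test/e2e -run TestGCPBroker \
      -workloadIndentity=true \
      -pubsubServiceAccount=cre-pubsub@my-project.iam.gserviceaccount.com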
-go_test_e2e -timeout=30m -parallel=6 ./test/e2e -workloadIndentity=true -pubsubServiceAccount=${DATA_PLANE_SERVICE_ACCOUNT} || fail_test +go_test_e2e -timeout=30m -parallel=6 ./test/e2e -workloadIndentity=true -pubsubServiceAccount=${PUBSUB_SERVICE_ACCOUNT_EMAIL} || fail_test success diff --git a/test/e2e/e2e_test.go b/test/e2e/e2e_test.go index 867f02c79d..10a0586065 100644 --- a/test/e2e/e2e_test.go +++ b/test/e2e/e2e_test.go @@ -312,3 +312,10 @@ func TestCloudSchedulerSourceWithTargetTestImpl(t *testing.T) { defer cancel() CloudSchedulerSourceWithTargetTestImpl(t, authConfig) } + +// TestGCPBroker tests we can knock a Knative Service from a gcp broker. +func TestGCPBroker(t *testing.T) { + cancel := logstream.Start(t) + defer cancel() + GCPBrokerTestImpl(t, authConfig) +} diff --git a/test/e2e/test_gcp_broker.go b/test/e2e/test_gcp_broker.go new file mode 100644 index 0000000000..86fe4d9f4a --- /dev/null +++ b/test/e2e/test_gcp_broker.go @@ -0,0 +1,133 @@ +/* +Copyright 2020 Google LLC + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package e2e + +import ( + "knative.dev/eventing/test/lib/duck" + "net/url" + "testing" + "time" + + v1 "k8s.io/api/core/v1" + eventingv1alpha1 "knative.dev/eventing/pkg/apis/eventing/v1alpha1" + eventingtestlib "knative.dev/eventing/test/lib" + eventingtestresources "knative.dev/eventing/test/lib/resources" + "knative.dev/pkg/test/helpers" + + // The following line to load the gcp plugin (only required to authenticate against GKE clusters). + _ "k8s.io/client-go/plugin/pkg/client/auth/gcp" + + "github.com/google/knative-gcp/test/e2e/lib" + "github.com/google/knative-gcp/test/e2e/lib/resources" +) + +/* +PubSubWithBrokerTestImpl tests the following scenario: + + 5 4 + ------------------ -------------------- + | | | | + 1 v 2 | v 3 | +(Sender) ---> GCP Broker ---> dummyTrigger -------> Knative Service(Receiver) + | + | 6 7 + |-------> respTrigger -------> Service(Target) + +Note: the number denotes the sequence of the event that flows in this test case. +*/ + +func GCPBrokerTestImpl(t *testing.T, authConfig lib.AuthConfig) { + senderName := helpers.AppendRandomString("sender") + targetName := helpers.AppendRandomString("target") + + client := lib.Setup(t, true, authConfig.WorkloadIdentity) + defer lib.TearDown(client) + + // Create a target Job to receive the events. + makeTargetJobOrDie(client, targetName) + + u := createGCPBroker(t, client, targetName) + + // Just to make sure all resources are ready. + time.Sleep(10 * time.Second) + + // Create a sender Job to sender the event. + senderJob := resources.SenderJob(senderName, []v1.EnvVar{{ + Name: "BROKER_URL", + Value: u.String(), + }}) + client.CreateJobOrFail(senderJob) + + // Check if dummy CloudEvent is sent out. + if done := jobDone(client, senderName, t); !done { + t.Error("dummy event wasn't sent to broker") + t.Failed() + } + // Check if resp CloudEvent hits the target Service. 
+ if done := jobDone(client, targetName, t); !done { + t.Error("resp event didn't hit the target pod") + t.Failed() + } +} + +func createGCPBroker(t *testing.T, client *lib.Client, targetName string) url.URL { + brokerName := helpers.AppendRandomString("gcp") + dummyTriggerName := "dummy-broker-" + brokerName + respTriggerName := "resp-broker-" + brokerName + kserviceName := helpers.AppendRandomString("kservice") + + // Create a new GCP Broker. + client.Core.CreateBrokerV1Beta1OrFail(brokerName, eventingtestresources.WithBrokerClassForBrokerV1Beta1("googlecloud")) + + // Create the Knative Service. + kservice := resources.ReceiverKService( + kserviceName, client.Namespace) + client.CreateUnstructuredObjOrFail(kservice) + + // Create a Trigger with the Knative Service subscriber. + client.Core.CreateTriggerOrFail( + dummyTriggerName, + eventingtestresources.WithBroker(brokerName), + eventingtestresources.WithAttributesTriggerFilter( + eventingv1alpha1.TriggerAnyFilter, eventingv1alpha1.TriggerAnyFilter, + map[string]interface{}{"type": "e2e-testing-dummy"}), + eventingtestresources.WithSubscriberServiceRefForTrigger(kserviceName), + ) + + // Create a Trigger with the target Service subscriber. + client.Core.CreateTriggerOrFail( + respTriggerName, + eventingtestresources.WithBroker(brokerName), + eventingtestresources.WithAttributesTriggerFilter( + eventingv1alpha1.TriggerAnyFilter, eventingv1alpha1.TriggerAnyFilter, + map[string]interface{}{"type": "e2e-testing-resp"}), + eventingtestresources.WithSubscriberServiceRefForTrigger(targetName), + ) + + // Wait for broker, trigger, ksvc ready. + client.Core.WaitForResourceReadyOrFail(brokerName, eventingtestlib.BrokerTypeMeta) + client.Core.WaitForResourcesReadyOrFail(eventingtestlib.TriggerTypeMeta) + client.Core.WaitForResourceReadyOrFail(kserviceName, lib.KsvcTypeMeta) + + // Get broker URL. + metaAddressable := eventingtestresources.NewMetaResource(brokerName, client.Namespace, eventingtestlib.BrokerTypeMeta) + u, err := duck.GetAddressableURI(client.Core.Dynamic, metaAddressable) + if err != nil { + t.Error(err.Error()) + } + return u +} diff --git a/test/lib.sh b/test/lib.sh index bed6c971a0..8faa232218 100644 --- a/test/lib.sh +++ b/test/lib.sh @@ -17,6 +17,7 @@ # Include after test-infra/scripts/library.sh readonly CLOUD_RUN_EVENTS_CONFIG="config/" +readonly CLOUD_RUN_EVENTS_GCP_BROKER_CONFIG="config/broker" readonly CLOUD_RUN_EVENTS_ISTIO_CONFIG="config/istio" # Install all required components for running knative-gcp. 
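The new config/broker directory is applied with the same ko pattern used in the function below; installing it by hand looks roughly like this, assuming KO_DOCKER_REPO is already set (gcr.io/my-project is a placeholder):

    export KO_DOCKER_REPO=gcr.io/my-project
    ko apply --strict -f config/broker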
@@ -34,6 +35,7 @@ function cloud_run_events_setup() { header "Starting Cloud Run Events" subheader "Installing Cloud Run Events" ko apply --strict -f ${CLOUD_RUN_EVENTS_CONFIG} || return 1 + ko apply --strict -f ${CLOUD_RUN_EVENTS_GCP_BROKER_CONFIG} || return 1 ko apply --strict -f ${CLOUD_RUN_EVENTS_ISTIO_CONFIG} || return 1 wait_until_pods_running cloud-run-events || return 1 } From 2ce8014a3dcc5bcd2ff07d1e6466ed6e97a171f3 Mon Sep 17 00:00:00 2001 From: Matt Moore Date: Tue, 5 May 2020 16:39:52 -0700 Subject: [PATCH 07/12] [master] golang format tools (#992) Produced via: `gofmt -s -w $(find -path './vendor' -prune -o -path './third_party' -prune -o -type f -name '*.go' -print)` `goimports -w $(find -name '*.go' | grep -v vendor | grep -v third_party)` /assign grantr nachocano /cc grantr nachocano --- test/e2e/test_broker_pubsub.go | 1 - 1 file changed, 1 deletion(-) diff --git a/test/e2e/test_broker_pubsub.go b/test/e2e/test_broker_pubsub.go index a12828c347..9e7c8393dc 100644 --- a/test/e2e/test_broker_pubsub.go +++ b/test/e2e/test_broker_pubsub.go @@ -390,4 +390,3 @@ func jobDone(client *lib.Client, podName string, t *testing.T) bool { } return true } - From 218ea15de5120660e4a31c25a7c8471132f83354 Mon Sep 17 00:00:00 2001 From: Adam Harwayne Date: Tue, 5 May 2020 17:07:44 -0700 Subject: [PATCH 08/12] Prefix PullSubscription and Topic testing functions with PubSub (#989) * Prefix PullSubscription and Topic functions with PubSub. This is being done because the intevents version will soon need the same functions and will be given the non-prefixed names (as they will continue to exist past the next release). * Listers too. --- .../events/auditlogs/auditlogs_test.go | 228 +++++----- pkg/reconciler/events/build/build_test.go | 28 +- pkg/reconciler/events/pubsub/pubsub_test.go | 30 +- .../events/scheduler/scheduler_test.go | 204 ++++----- pkg/reconciler/events/storage/storage_test.go | 180 ++++---- .../messaging/channel/channel_test.go | 4 +- .../keda/pullsubscription_test.go | 418 +++++++++--------- .../keda/resources/scaled_object_test.go | 8 +- .../static/pullsubscription_test.go | 396 ++++++++--------- pkg/reconciler/pubsub/reconciler_test.go | 348 +++++++-------- pkg/reconciler/pubsub/topic/topic_test.go | 256 +++++------ pkg/reconciler/testing/listers.go | 4 +- ...cription.go => pubsub_pullsubscription.go} | 101 ++--- .../testing/{topic.go => pubsub_topic.go} | 66 +-- test/e2e/test_pullsubscription.go | 12 +- 15 files changed, 1115 insertions(+), 1168 deletions(-) rename pkg/reconciler/testing/{pullsubscription.go => pubsub_pullsubscription.go} (54%) rename pkg/reconciler/testing/{topic.go => pubsub_topic.go} (59%) diff --git a/pkg/reconciler/events/auditlogs/auditlogs_test.go b/pkg/reconciler/events/auditlogs/auditlogs_test.go index 2a5f0e0898..dc086c9d60 100644 --- a/pkg/reconciler/events/auditlogs/auditlogs_test.go +++ b/pkg/reconciler/events/auditlogs/auditlogs_test.go @@ -155,16 +155,16 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceTopicUnknown("TopicNotConfigured", failedToReconcileTopicMsg)), }}, WantCreates: []runtime.Object{ - NewTopic(sourceName, testNS, - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Topic: "cloudauditlogssource-" + sourceUID, PropagationPolicy: "CreateDelete", }), - WithTopicLabels(map[string]string{ + WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": sourceName, }), - 
WithTopicOwnerReferences([]metav1.OwnerReference{sourceOwnerRef(sourceName, sourceUID)}), + WithPubSubTopicOwnerReferences([]metav1.OwnerReference{sourceOwnerRef(sourceName, sourceUID)}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -182,8 +182,8 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceMethodName(testMethodName), WithCloudAuditLogsSourceServiceName(testServiceName), ), - NewTopic(sourceName, testNS, - WithTopicTopicID(testTopicID), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicTopicID(testTopicID), ), }, Key: testNS + "/" + sourceName, @@ -209,9 +209,9 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceMethodName(testMethodName), WithCloudAuditLogsSourceServiceName(testServiceName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), ), }, Key: testNS + "/" + sourceName, @@ -239,10 +239,10 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceMethodName(testMethodName), WithCloudAuditLogsSourceServiceName(testServiceName), ), - NewTopic(sourceName, testNS, - WithTopicReady(""), - WithTopicProjectID(testProject), - WithTopicAddress(testTopicURI), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(""), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicAddress(testTopicURI), ), }, Key: testNS + "/" + sourceName, @@ -270,10 +270,10 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceMethodName(testMethodName), WithCloudAuditLogsSourceServiceName(testServiceName), ), - NewTopic(sourceName, testNS, - WithTopicReady("garbaaaaage"), - WithTopicProjectID(testProject), - WithTopicAddress(testTopicURI), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady("garbaaaaage"), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicAddress(testTopicURI), ), }, Key: testNS + "/" + sourceName, @@ -301,9 +301,9 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceMethodName(testMethodName), WithCloudAuditLogsSourceServiceName(testServiceName), ), - NewTopic(sourceName, testNS, - WithTopicFailed(), - WithTopicTopicID(testTopicID), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicFailed(), + WithPubSubTopicTopicID(testTopicID), ), }, Key: testNS + "/" + sourceName, @@ -330,9 +330,9 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceMethodName(testMethodName), WithCloudAuditLogsSourceServiceName(testServiceName), ), - NewTopic(sourceName, testNS, - WithTopicUnknown(), - WithTopicTopicID(testTopicID), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicUnknown(), + WithPubSubTopicTopicID(testTopicID), ), }, Key: testNS + "/" + sourceName, @@ -359,10 +359,10 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), }, Key: testNS + "/" + sourceName, @@ -378,23 +378,23 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + 
WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, }, AdapterType: converters.CloudAuditLogsConverter, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionLabels(map[string]string{ + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": sourceName, }), - WithPullSubscriptionAnnotations(map[string]string{ + WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{sourceOwnerRef(sourceName, sourceUID)}), + WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{sourceOwnerRef(sourceName, sourceUID)}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -412,12 +412,12 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS), }, Key: testNS + "/" + sourceName, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ @@ -446,12 +446,12 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, WithPullSubscriptionFailed()), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, WithPubSubPullSubscriptionFailed()), }, Key: testNS + "/" + sourceName, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ @@ -480,12 +480,12 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, WithPullSubscriptionUnknown()), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, WithPubSubPullSubscriptionUnknown()), }, Key: testNS + "/" + sourceName, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ @@ -514,13 +514,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, 
testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -557,13 +557,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -600,13 +600,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -643,13 +643,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -687,13 +687,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceSink(sinkGVK, sinkName), WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceMethodName(testMethodName)), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -741,13 +741,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceSink(sinkGVK, sinkName), WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceMethodName(testMethodName)), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - 
WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -795,13 +795,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceSink(sinkGVK, sinkName), WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceMethodName(testMethodName)), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -844,13 +844,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceServiceName(testServiceName), WithCloudAuditLogsSourceSink(sinkGVK, sinkName), ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -904,13 +904,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceSinkID(testSinkID), WithCloudAuditLogsSourceDeletionTimestamp, ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -950,13 +950,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceSinkID(testSinkID), WithCloudAuditLogsSourceDeletionTimestamp, ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, @@ -1008,13 +1008,13 @@ func TestAllCases(t *testing.T) { WithCloudAuditLogsSourceSinkID(testSinkID), WithCloudAuditLogsSourceDeletionTimestamp, ), - NewTopic(sourceName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(sourceName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(sourceName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(sourceName, testNS, + 
WithPubSubPullSubscriptionReady(sinkURI), ), }, Key: testNS + "/" + sourceName, diff --git a/pkg/reconciler/events/build/build_test.go b/pkg/reconciler/events/build/build_test.go index 1ff488d2be..7d36e23255 100644 --- a/pkg/reconciler/events/build/build_test.go +++ b/pkg/reconciler/events/build/build_test.go @@ -166,22 +166,22 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewPullSubscriptionWithNoDefaults(buildName, testNS, - WithPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscriptionWithNoDefaults(buildName, testNS, + WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, }, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionLabels(map[string]string{ + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": buildName, }), - WithPullSubscriptionAnnotations(map[string]string{ + WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -199,8 +199,8 @@ func TestAllCases(t *testing.T) { WithCloudBuildSourceTopic(testTopicID), WithCloudBuildSourceSink(sinkGVK, sinkName), ), - NewPullSubscriptionWithNoDefaults(buildName, testNS, - WithPullSubscriptionReadyStatus(corev1.ConditionFalse, "PullSubscriptionFalse", "status false test message")), + NewPubSubPullSubscriptionWithNoDefaults(buildName, testNS, + WithPubSubPullSubscriptionReadyStatus(corev1.ConditionFalse, "PullSubscriptionFalse", "status false test message")), newSink(), }, Key: testNS + "/" + buildName, @@ -230,8 +230,8 @@ func TestAllCases(t *testing.T) { WithCloudBuildSourceTopic(testTopicID), WithCloudBuildSourceSink(sinkGVK, sinkName), ), - NewPullSubscriptionWithNoDefaults(buildName, testNS, - WithPullSubscriptionReadyStatus(corev1.ConditionUnknown, "PullSubscriptionUnknown", "status unknown test message")), + NewPubSubPullSubscriptionWithNoDefaults(buildName, testNS, + WithPubSubPullSubscriptionReadyStatus(corev1.ConditionUnknown, "PullSubscriptionUnknown", "status unknown test message")), newSink(), }, Key: testNS + "/" + buildName, @@ -261,9 +261,9 @@ func TestAllCases(t *testing.T) { WithCloudBuildSourceTopic(testTopicID), WithCloudBuildSourceSink(sinkGVK, sinkName), ), - NewPullSubscriptionWithNoDefaults(buildName, testNS, - WithPullSubscriptionReady(sinkURI), - WithPullSubscriptionReadyStatus(corev1.ConditionTrue, "PullSubscriptionNoReady", ""), + NewPubSubPullSubscriptionWithNoDefaults(buildName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), + WithPubSubPullSubscriptionReadyStatus(corev1.ConditionTrue, "PullSubscriptionNoReady", ""), ), newSink(), }, @@ -342,7 +342,7 @@ func TestAllCases(t *testing.T) { PubSubBase: pubsub.NewPubSubBase(ctx, controllerAgentName, receiveAdapterName, cmw), Identity: identity.NewIdentity(ctx, NoopIAMPolicyManager), buildLister: listers.GetCloudBuildSourceLister(), - pullsubscriptionLister: listers.GetPullSubscriptionLister(), + pullsubscriptionLister: listers.GetPubSubPullSubscriptionLister(), serviceAccountLister: listers.GetServiceAccountLister(), } return cloudbuildsource.NewReconciler(ctx, r.Logger, 
r.RunClientSet, listers.GetCloudBuildSourceLister(), r.Recorder, r) diff --git a/pkg/reconciler/events/pubsub/pubsub_test.go b/pkg/reconciler/events/pubsub/pubsub_test.go index f8fbf6c2e1..061e4b08f0 100644 --- a/pkg/reconciler/events/pubsub/pubsub_test.go +++ b/pkg/reconciler/events/pubsub/pubsub_test.go @@ -164,23 +164,23 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewPullSubscriptionWithNoDefaults(pubsubName, testNS, - WithPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscriptionWithNoDefaults(pubsubName, testNS, + WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, }, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMode(pubsubv1alpha1.ModePushCompatible), - WithPullSubscriptionLabels(map[string]string{ + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMode(pubsubv1alpha1.ModePushCompatible), + WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": pubsubName, }), - WithPullSubscriptionAnnotations(map[string]string{ + WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -198,8 +198,8 @@ func TestAllCases(t *testing.T) { WithCloudPubSubSourceTopic(testTopicID), WithCloudPubSubSourceSink(sinkGVK, sinkName), ), - NewPullSubscriptionWithNoDefaults(pubsubName, testNS, - WithPullSubscriptionReadyStatus(corev1.ConditionFalse, "PullSubscriptionFalse", "status false test message")), + NewPubSubPullSubscriptionWithNoDefaults(pubsubName, testNS, + WithPubSubPullSubscriptionReadyStatus(corev1.ConditionFalse, "PullSubscriptionFalse", "status false test message")), newSink(), }, Key: testNS + "/" + pubsubName, @@ -229,8 +229,8 @@ func TestAllCases(t *testing.T) { WithCloudPubSubSourceTopic(testTopicID), WithCloudPubSubSourceSink(sinkGVK, sinkName), ), - NewPullSubscriptionWithNoDefaults(pubsubName, testNS, - WithPullSubscriptionReadyStatus(corev1.ConditionUnknown, "PullSubscriptionUnknown", "status unknown test message")), + NewPubSubPullSubscriptionWithNoDefaults(pubsubName, testNS, + WithPubSubPullSubscriptionReadyStatus(corev1.ConditionUnknown, "PullSubscriptionUnknown", "status unknown test message")), newSink(), }, Key: testNS + "/" + pubsubName, @@ -260,9 +260,9 @@ func TestAllCases(t *testing.T) { WithCloudPubSubSourceTopic(testTopicID), WithCloudPubSubSourceSink(sinkGVK, sinkName), ), - NewPullSubscriptionWithNoDefaults(pubsubName, testNS, - WithPullSubscriptionReady(sinkURI), - WithPullSubscriptionReadyStatus(corev1.ConditionTrue, "PullSubscriptionNoReady", ""), + NewPubSubPullSubscriptionWithNoDefaults(pubsubName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), + WithPubSubPullSubscriptionReadyStatus(corev1.ConditionTrue, "PullSubscriptionNoReady", ""), ), newSink(), }, @@ -341,7 +341,7 @@ func TestAllCases(t *testing.T) { PubSubBase: pubsub.NewPubSubBase(ctx, controllerAgentName, receiveAdapterName, cmw), Identity: identity.NewIdentity(ctx, NoopIAMPolicyManager), pubsubLister: listers.GetCloudPubSubSourceLister(), - pullsubscriptionLister: listers.GetPullSubscriptionLister(), + pullsubscriptionLister: 
listers.GetPubSubPullSubscriptionLister(), serviceAccountLister: listers.GetServiceAccountLister(), } return cloudpubsubsource.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetCloudPubSubSourceLister(), r.Recorder, r) diff --git a/pkg/reconciler/events/scheduler/scheduler_test.go b/pkg/reconciler/events/scheduler/scheduler_test.go index 08e3f61777..81392aca0f 100644 --- a/pkg/reconciler/events/scheduler/scheduler_test.go +++ b/pkg/reconciler/events/scheduler/scheduler_test.go @@ -178,16 +178,16 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewTopic(schedulerName, testNS, - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - WithTopicLabels(map[string]string{ + WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": schedulerName, }), - WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -206,8 +206,8 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicTopicID(testTopicID), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicTopicID(testTopicID), ), newSink(), }, @@ -237,9 +237,9 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), ), newSink(), }, @@ -270,10 +270,10 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(""), - WithTopicProjectID(testProject), - WithTopicAddress(testTopicURI), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(""), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicAddress(testTopicURI), ), newSink(), }, @@ -304,10 +304,10 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady("garbaaaaage"), - WithTopicProjectID(testProject), - WithTopicAddress(testTopicURI), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady("garbaaaaage"), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicAddress(testTopicURI), ), newSink(), }, @@ -338,9 +338,9 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicFailed(), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicFailed(), + WithPubSubTopicProjectID(testProject), ), newSink(), }, @@ -371,9 +371,9 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicUnknown(), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicUnknown(), + WithPubSubTopicProjectID(testProject), ), newSink(), }, @@ -405,10 +405,10 @@ func 
TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), newSink(), }, @@ -425,21 +425,21 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, }, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionLabels(map[string]string{ + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": schedulerName}), - WithPullSubscriptionAnnotations(map[string]string{ + WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -458,12 +458,12 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS), newSink(), }, Key: testNS + "/" + schedulerName, @@ -494,12 +494,12 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, WithPullSubscriptionFailed()), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, WithPubSubPullSubscriptionFailed()), newSink(), }, Key: testNS + "/" + schedulerName, @@ -530,12 +530,12 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, WithPullSubscriptionUnknown()), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, WithPubSubPullSubscriptionUnknown()), newSink(), }, Key: testNS + "/" + 
schedulerName, @@ -567,13 +567,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -613,13 +613,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -659,13 +659,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -705,13 +705,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -752,13 +752,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -798,13 +798,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceData(testData), WithCloudSchedulerSourceSchedule(onceAMinuteSchedule), ), - NewTopic(schedulerName, 
testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -845,13 +845,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceSinkURI(schedulerSinkURL), WithCloudSchedulerSourceDeletionTimestamp, ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -881,13 +881,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceSinkURI(schedulerSinkURL), WithCloudSchedulerSourceDeletionTimestamp, ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -917,13 +917,13 @@ func TestAllCases(t *testing.T) { WithCloudSchedulerSourceSinkURI(schedulerSinkURL), WithCloudSchedulerSourceDeletionTimestamp, ), - NewTopic(schedulerName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(schedulerName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(schedulerName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(schedulerName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, diff --git a/pkg/reconciler/events/storage/storage_test.go b/pkg/reconciler/events/storage/storage_test.go index 6b1b995ce2..e5639040ce 100644 --- a/pkg/reconciler/events/storage/storage_test.go +++ b/pkg/reconciler/events/storage/storage_test.go @@ -176,16 +176,16 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewTopic(storageName, testNS, - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(storageName, testNS, + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - WithTopicLabels(map[string]string{ + WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": storageName, }), - WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -203,8 +203,8 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), 
WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicTopicID(testTopicID), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicTopicID(testTopicID), ), newSink(), }, @@ -233,9 +233,9 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), ), newSink(), }, @@ -265,10 +265,10 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady(""), - WithTopicProjectID(testProject), - WithTopicAddress(testTopicURI), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(""), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicAddress(testTopicURI), ), newSink(), }, @@ -298,10 +298,10 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady("garbaaaaage"), - WithTopicProjectID(testProject), - WithTopicAddress(testTopicURI), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady("garbaaaaage"), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicAddress(testTopicURI), ), newSink(), }, @@ -331,9 +331,9 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicFailed(), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicFailed(), + WithPubSubTopicProjectID(testProject), ), newSink(), }, @@ -363,9 +363,9 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicUnknown(), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicUnknown(), + WithPubSubTopicProjectID(testProject), ), newSink(), }, @@ -395,10 +395,10 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), newSink(), }, @@ -416,22 +416,22 @@ func TestAllCases(t *testing.T) { ), }}, WantCreates: []runtime.Object{ - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubv1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, }, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionLabels(map[string]string{ + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": storageName, }), - WithPullSubscriptionAnnotations(map[string]string{ + WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - 
WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, WantPatches: []clientgotesting.PatchActionImpl{ @@ -450,12 +450,12 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS), newSink(), }, Key: testNS + "/" + storageName, @@ -486,12 +486,12 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, WithPullSubscriptionFailed()), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, WithPubSubPullSubscriptionFailed()), }, Key: testNS + "/" + storageName, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ @@ -521,12 +521,12 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceBucket(bucket), WithCloudStorageSourceSink(sinkGVK, sinkName), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, WithPullSubscriptionUnknown()), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, WithPubSubPullSubscriptionUnknown()), }, Key: testNS + "/" + storageName, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ @@ -559,13 +559,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceSink(sinkGVK, sinkName), WithCloudStorageSourceEventTypes([]string{storagev1alpha1.CloudStorageSourceFinalize}), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -609,13 +609,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceSink(sinkGVK, sinkName), WithCloudStorageSourceEventTypes([]string{storagev1alpha1.CloudStorageSourceFinalize}), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + 
NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -661,13 +661,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceSink(sinkGVK, sinkName), WithCloudStorageSourceEventTypes([]string{storagev1alpha1.CloudStorageSourceFinalize}), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -713,13 +713,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceSink(sinkGVK, sinkName), WithCloudStorageSourceEventTypes([]string{storagev1alpha1.CloudStorageSourceFinalize}), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -768,13 +768,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceTopicReady(testTopicID), WithDeletionTimestamp(), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -809,13 +809,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceTopicReady(testTopicID), WithDeletionTimestamp(), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, @@ -881,13 +881,13 @@ func TestAllCases(t *testing.T) { WithCloudStorageSourceTopicReady(testTopicID), WithDeletionTimestamp(), ), - NewTopic(storageName, testNS, - WithTopicReady(testTopicID), - WithTopicAddress(testTopicURI), - WithTopicProjectID(testProject), + NewPubSubTopic(storageName, testNS, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicAddress(testTopicURI), + WithPubSubTopicProjectID(testProject), ), - NewPullSubscriptionWithNoDefaults(storageName, testNS, - WithPullSubscriptionReady(sinkURI), + NewPubSubPullSubscriptionWithNoDefaults(storageName, testNS, + WithPubSubPullSubscriptionReady(sinkURI), ), newSink(), }, diff --git a/pkg/reconciler/messaging/channel/channel_test.go 
b/pkg/reconciler/messaging/channel/channel_test.go index 702e09b290..6af85adc81 100644 --- a/pkg/reconciler/messaging/channel/channel_test.go +++ b/pkg/reconciler/messaging/channel/channel_test.go @@ -534,8 +534,8 @@ func TestAllCases(t *testing.T) { Base: reconciler.NewBase(ctx, controllerAgentName, cmw), Identity: identity.NewIdentity(ctx, NoopIAMPolicyManager), channelLister: listers.GetChannelLister(), - topicLister: listers.GetTopicLister(), - pullSubscriptionLister: listers.GetPullSubscriptionLister(), + topicLister: listers.GetPubSubTopicLister(), + pullSubscriptionLister: listers.GetPubSubPullSubscriptionLister(), serviceAccountLister: listers.GetServiceAccountLister(), } return channel.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetChannelLister(), r.Recorder, r) diff --git a/pkg/reconciler/pubsub/pullsubscription/keda/pullsubscription_test.go b/pkg/reconciler/pubsub/pullsubscription/keda/pullsubscription_test.go index f2e0a2f271..3679b89c38 100644 --- a/pkg/reconciler/pubsub/pullsubscription/keda/pullsubscription_test.go +++ b/pkg/reconciler/pubsub/pullsubscription/keda/pullsubscription_test.go @@ -126,21 +126,21 @@ func newSecret() *corev1.Secret { } func newPullSubscription(subscriptionId string) *pubsubv1alpha1.PullSubscription { - return NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + return NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, }), - WithPullSubscriptionSubscriptionID(subscriptionId), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionSubscriptionID(subscriptionId), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ) } @@ -203,17 +203,17 @@ func TestAllCases(t *testing.T) { }, { Name: "cannot get sink", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), ), newSecret(), }, @@ -227,40 +227,40 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + 
WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), // Updates - WithPullSubscriptionStatusObservedGeneration(generation), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSinkNotFound(), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSinkNotFound(), ), }}, }, { Name: "create client fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -276,25 +276,25 @@ func TestAllCases(t *testing.T) { }, }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "client-create-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "client-create-induced-error"))), }}, 
WantPatches: []clientgotesting.PatchActionImpl{ patchFinalizers(testNS, sourceName, resourceGroup), @@ -302,20 +302,20 @@ func TestAllCases(t *testing.T) { }, { Name: "topic exists fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -333,25 +333,25 @@ func TestAllCases(t *testing.T) { }, }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "topic-exists-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "topic-exists-induced-error"))), }}, WantPatches: []clientgotesting.PatchActionImpl{ patchFinalizers(testNS, sourceName, resourceGroup), @@ -359,20 +359,20 @@ func TestAllCases(t *testing.T) { }, { Name: "topic does not exist", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + 
WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -390,25 +390,25 @@ func TestAllCases(t *testing.T) { }, }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: Topic %q does not exist", failedToReconcileSubscriptionMsg, testTopicID))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: Topic %q does not exist", failedToReconcileSubscriptionMsg, testTopicID))), }}, WantPatches: []clientgotesting.PatchActionImpl{ patchFinalizers(testNS, sourceName, resourceGroup), @@ -416,20 +416,20 @@ func TestAllCases(t *testing.T) { }, { Name: "subscription exists fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + 
WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -447,25 +447,25 @@ func TestAllCases(t *testing.T) { }, }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-exists-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-exists-induced-error"))), }}, WantPatches: []clientgotesting.PatchActionImpl{ patchFinalizers(testNS, sourceName, resourceGroup), @@ -473,20 +473,20 @@ func TestAllCases(t *testing.T) { }, { Name: "create subscription fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -505,25 +505,25 @@ func TestAllCases(t *testing.T) { }, }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - 
WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-create-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-create-induced-error"))), }}, WantPatches: []clientgotesting.PatchActionImpl{ patchFinalizers(testNS, sourceName, resourceGroup), @@ -552,27 +552,27 @@ func TestAllCases(t *testing.T) { newReceiveAdapter(context.Background(), testImage, nil), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), // Updates - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, ), }}, WantPatches: []clientgotesting.PatchActionImpl{ @@ -581,18 +581,18 @@ func TestAllCases(t *testing.T) { }, { Name: "successful create - reuse existing receive adapter 
- match", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), ), newSink(), newSecret(), @@ -614,26 +614,26 @@ func TestAllCases(t *testing.T) { Eventf(corev1.EventTypeNormal, "PullSubscriptionReconciled", `PullSubscription reconciled: "%s/%s"`, testNS, sourceName), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionStatusObservedGeneration(generation), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), ), }}, WantPatches: []clientgotesting.PatchActionImpl{ @@ -642,19 +642,19 @@ func TestAllCases(t *testing.T) { }, { Name: "successful create - reuse existing receive adapter - mismatch", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - 
WithPullSubscriptionTransformer(transformerGVK, transformerName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionTransformer(transformerGVK, transformerName), ), newSink(), newTransformer(), @@ -685,26 +685,26 @@ func TestAllCases(t *testing.T) { Object: newReceiveAdapter(context.Background(), testImage, transformerURI), }}, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionTransformer(transformerGVK, transformerName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkTransformer(transformerURI), - WithPullSubscriptionStatusObservedGeneration(generation), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionTransformer(transformerGVK, transformerName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkTransformer(transformerURI), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), ), }}, WantPatches: []clientgotesting.PatchActionImpl{ @@ -713,22 +713,22 @@ func TestAllCases(t *testing.T) { }, { Name: "deleting - failed to delete subscription", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionDeleted, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionDeleted, ), newSecret(), }, @@ -751,23 +751,23 @@ func TestAllCases(t *testing.T) { }, { Name: "successfully deleted subscription", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - 
WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionSubscriptionID(""), - WithPullSubscriptionDeleted, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionSubscriptionID(""), + WithPubSubPullSubscriptionDeleted, ), newSecret(), }, @@ -797,7 +797,7 @@ func TestAllCases(t *testing.T) { Base: &psreconciler.Base{ PubSubBase: pubsubBase, DeploymentLister: listers.GetDeploymentLister(), - PullSubscriptionLister: listers.GetPullSubscriptionLister(), + PullSubscriptionLister: listers.GetPubSubPullSubscriptionLister(), UriResolver: resolver.NewURIResolver(ctx, func(types.NamespacedName) {}), ReceiveAdapterImage: testImage, CreateClientFn: gpubsub.TestClientCreator(testData["ps"]), @@ -808,7 +808,7 @@ func TestAllCases(t *testing.T) { r.ReconcileDataPlaneFn = r.ReconcileScaledObject r.scaledObjectTracker = duck.NewListableTracker(ctx, resource.Get, func(types.NamespacedName) {}, 0) r.discoveryFn = mockDiscoveryFunc - return pullsubscription.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetPullSubscriptionLister(), r.Recorder, r) + return pullsubscription.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetPubSubPullSubscriptionLister(), r.Recorder, r) })) } @@ -817,9 +817,9 @@ func mockDiscoveryFunc(_ discovery.DiscoveryInterface, _ schema.GroupVersion) er } func newReceiveAdapter(ctx context.Context, image string, transformer *apis.URL) runtime.Object { - source := NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionAnnotations(map[string]string{ + source := NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionAnnotations(map[string]string{ v1alpha1.AutoscalingClassAnnotation: v1alpha1.KEDA, v1alpha1.AutoscalingMinScaleAnnotation: "0", v1alpha1.AutoscalingMaxScaleAnnotation: "3", @@ -827,7 +827,7 @@ func newReceiveAdapter(ctx context.Context, image string, transformer *apis.URL) v1alpha1.KedaAutoscalingCooldownPeriodAnnotation: "60", v1alpha1.KedaAutoscalingPollingIntervalAnnotation: "30", }), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, diff --git a/pkg/reconciler/pubsub/pullsubscription/keda/resources/scaled_object_test.go b/pkg/reconciler/pubsub/pullsubscription/keda/resources/scaled_object_test.go index dad29a76b7..890100179d 100644 --- a/pkg/reconciler/pubsub/pullsubscription/keda/resources/scaled_object_test.go +++ b/pkg/reconciler/pubsub/pullsubscription/keda/resources/scaled_object_test.go @@ -43,10 +43,10 @@ func newAnnotations() 
map[string]string { } func newPullSubscription() *v1alpha1.PullSubscription { - return NewPullSubscription("psname", "psnamespace", - WithPullSubscriptionUID("psuid"), - WithPullSubscriptionAnnotations(newAnnotations()), - WithPullSubscriptionSubscriptionID("subscriptionId"), + return NewPubSubPullSubscription("psname", "psnamespace", + WithPubSubPullSubscriptionUID("psuid"), + WithPubSubPullSubscriptionAnnotations(newAnnotations()), + WithPubSubPullSubscriptionSubscriptionID("subscriptionId"), ) } diff --git a/pkg/reconciler/pubsub/pullsubscription/static/pullsubscription_test.go b/pkg/reconciler/pubsub/pullsubscription/static/pullsubscription_test.go index b8263e89a7..39caa90db4 100644 --- a/pkg/reconciler/pubsub/pullsubscription/static/pullsubscription_test.go +++ b/pkg/reconciler/pubsub/pullsubscription/static/pullsubscription_test.go @@ -167,16 +167,16 @@ func TestAllCases(t *testing.T) { }, { Name: "cannot get sink", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), ), newSecret(), }, @@ -190,38 +190,38 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), // updates - WithInitPullSubscriptionConditions, - WithPullSubscriptionSinkNotFound(), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSinkNotFound(), ), }}, }, { Name: "create client fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -237,24 +237,24 @@ func TestAllCases(t *testing.T) { }, }, WantStatusUpdates: 
[]clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "client-create-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "client-create-induced-error"))), }}, WantPatches: []clientgotesting.PatchActionImpl{ patchFinalizers(testNS, sourceName, resourceGroup), @@ -262,19 +262,19 @@ func TestAllCases(t *testing.T) { }, { Name: "topic exists fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -295,41 +295,41 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - 
WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "topic-exists-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "topic-exists-induced-error"))), }}, }, { Name: "topic does not exist", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -350,41 +350,41 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: Topic %q does not exist", failedToReconcileSubscriptionMsg, testTopicID))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", 
fmt.Sprintf("%s: Topic %q does not exist", failedToReconcileSubscriptionMsg, testTopicID))), }}, }, { Name: "subscription exists fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -405,41 +405,41 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-exists-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-exists-induced-error"))), }}, }, { Name: "create subscription fails", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + 
WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -461,41 +461,41 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-create-induced-error"))), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionMarkNoSubscription("SubscriptionReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileSubscriptionMsg, "subscription-create-induced-error"))), }}, }, { Name: "successfully created subscription", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), ), newSink(), newSecret(), @@ -516,26 +516,26 @@ func TestAllCases(t *testing.T) { newReceiveAdapter(context.Background(), testImage, nil), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: 
testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), // Updates - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, ), }}, WantPatches: []clientgotesting.PatchActionImpl{ @@ -544,17 +544,17 @@ func TestAllCases(t *testing.T) { }, { Name: "successful create - reuse existing receive adapter - match", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), ), newSink(), newSecret(), @@ -576,42 +576,42 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), - WithPullSubscriptionTransformerURI(nil), - WithPullSubscriptionStatusObservedGeneration(generation), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkNoTransformer("TransformerNil", "Transformer is nil"), + WithPubSubPullSubscriptionTransformerURI(nil), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), ), }}, }, { Name: 
"successful create - reuse existing receive adapter - mismatch", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionTransformer(transformerGVK, transformerName), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionTransformer(transformerGVK, transformerName), ), newSink(), newTransformer(), @@ -642,46 +642,46 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, sourceName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), //WithPullSubscriptionFinalizers(resourceGroup), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithInitPullSubscriptionConditions, - WithPullSubscriptionProjectID(testProject), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionTransformer(transformerGVK, transformerName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionMarkTransformer(transformerURI), - WithPullSubscriptionStatusObservedGeneration(generation), + WithPubSubInitPullSubscriptionConditions, + WithPubSubPullSubscriptionProjectID(testProject), + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionTransformer(transformerGVK, transformerName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionMarkTransformer(transformerURI), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), ), }}, }, { Name: "deleting - failed to delete subscription", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionDeleted, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + 
WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionDeleted, ), newSecret(), }, @@ -704,22 +704,22 @@ func TestAllCases(t *testing.T) { }, { Name: "successfully deleted subscription", Objects: []runtime.Object{ - NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionDeleted, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionDeleted, ), newSecret(), }, @@ -736,23 +736,23 @@ func TestAllCases(t *testing.T) { Key: testNS + "/" + sourceName, WantEvents: nil, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionObjectMetaGeneration(generation), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + Object: NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionObjectMetaGeneration(generation), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, }, Topic: testTopicID, }), - WithPullSubscriptionSink(sinkGVK, sinkName), - WithPullSubscriptionMarkSubscribed(testSubscriptionID), - WithPullSubscriptionSubscriptionID(""), - WithPullSubscriptionMarkDeployed, - WithPullSubscriptionMarkSink(sinkURI), - WithPullSubscriptionStatusObservedGeneration(generation), - WithPullSubscriptionDeleted, + WithPubSubPullSubscriptionSink(sinkGVK, sinkName), + WithPubSubPullSubscriptionMarkSubscribed(testSubscriptionID), + WithPubSubPullSubscriptionSubscriptionID(""), + WithPubSubPullSubscriptionMarkDeployed, + WithPubSubPullSubscriptionMarkSink(sinkURI), + WithPubSubPullSubscriptionStatusObservedGeneration(generation), + WithPubSubPullSubscriptionDeleted, ), }}, }} @@ -767,7 +767,7 @@ func TestAllCases(t *testing.T) { Base: &psreconciler.Base{ PubSubBase: pubsubBase, DeploymentLister: listers.GetDeploymentLister(), - PullSubscriptionLister: listers.GetPullSubscriptionLister(), + PullSubscriptionLister: listers.GetPubSubPullSubscriptionLister(), UriResolver: resolver.NewURIResolver(ctx, func(types.NamespacedName) {}), ReceiveAdapterImage: testImage, CreateClientFn: gpubsub.TestClientCreator(testData["ps"]), @@ -776,14 +776,14 @@ func TestAllCases(t *testing.T) { }, } r.ReconcileDataPlaneFn = r.ReconcileDeployment - return pullsubscription.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetPullSubscriptionLister(), r.Recorder, r) + return pullsubscription.NewReconciler(ctx, r.Logger, r.RunClientSet, 
listers.GetPubSubPullSubscriptionLister(), r.Recorder, r) })) } func newReceiveAdapter(ctx context.Context, image string, transformer *apis.URL) runtime.Object { - source := NewPullSubscription(sourceName, testNS, - WithPullSubscriptionUID(sourceUID), - WithPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ + source := NewPubSubPullSubscription(sourceName, testNS, + WithPubSubPullSubscriptionUID(sourceUID), + WithPubSubPullSubscriptionSpec(pubsubv1alpha1.PullSubscriptionSpec{ PubSubSpec: duckv1alpha1.PubSubSpec{ Secret: &secret, Project: testProject, diff --git a/pkg/reconciler/pubsub/reconciler_test.go b/pkg/reconciler/pubsub/reconciler_test.go index ea31fcdd30..6db6c1c426 100644 --- a/pkg/reconciler/pubsub/reconciler_test.go +++ b/pkg/reconciler/pubsub/reconciler_test.go @@ -90,472 +90,472 @@ func TestCreates(t *testing.T) { wantCreates []runtime.Object }{{ name: "topic does not exist, created, not yet been reconciled", - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedPS: nil, expectedErr: fmt.Sprintf("Topic %q has not yet been reconciled", name), wantCreates: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, }, { name: "topic exists but is not yet been reconciled", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - 
rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedPS: nil, expectedErr: fmt.Sprintf("Topic %q has not yet been reconciled", name), }, { name: "topic exists and is ready but no projectid", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicAddress(testTopicURI), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicAddress(testTopicURI), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedPS: nil, expectedErr: fmt.Sprintf("Topic %q did not expose projectid", name), }, { name: "topic exists and the status of topic is false", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicFailed(), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicFailed(), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, 
"events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicFailed(), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicFailed(), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedPS: nil, expectedErr: fmt.Sprintf("the status of Topic %q is False", name), }, { name: "topic exists and the status of topic is unknown", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicUnknown(), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicUnknown(), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicUnknown(), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicUnknown(), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedPS: nil, expectedErr: fmt.Sprintf("the status of Topic %q is Unknown", name), }, { name: "topic exists and is ready but no topicid", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicReady(""), - rectesting.WithTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicReady(""), + rectesting.WithPubSubTopicAddress(testTopicURI), ), }, - 
expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(""), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicAddress(testTopicURI), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(""), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedPS: nil, expectedErr: fmt.Sprintf("Topic %q did not expose topicid", name), }, { name: "topic exists and is ready, pullsubscription created, not yet been reconciled", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicAddress(testTopicURI), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicAddress(testTopicURI), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), - expectedPS: rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + expectedPS: rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + 
rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedErr: fmt.Sprintf("%s: PullSubscription %q has not yet been reconciled", failedToPropagatePullSubscriptionStatusMsg, name), wantCreates: []runtime.Object{ - rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, }, { name: "topic exists and is ready, pullsubscription exists, not yet been reconciled", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicAddress(testTopicURI), ), - rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - 
rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicAddress(testTopicURI), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), - expectedPS: rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + expectedPS: rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), expectedErr: fmt.Sprintf("%s: PullSubscription %q has not yet been reconciled", failedToPropagatePullSubscriptionStatusMsg, name), }, { name: "topic exists and is ready, pullsubscription exists and the status is false", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + 
rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicAddress(testTopicURI), ), - rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithPullSubscriptionFailed(), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionFailed(), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicAddress(testTopicURI), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), - expectedPS: rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + expectedPS: rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithPullSubscriptionFailed(), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionFailed(), ), expectedErr: fmt.Sprintf("%s: the status of PullSubscription %q is False", failedToPropagatePullSubscriptionStatusMsg, name), }, { name: "topic exists and is ready, pullsubscription exists 
and the status is unknown", objects: []runtime.Object{ - rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicAddress(testTopicURI), ), - rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, }), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithPullSubscriptionUnknown(), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionUnknown(), ), }, - expectedTopic: rectesting.NewTopic(name, testNS, - rectesting.WithTopicSpec(pubsubsourcev1alpha1.TopicSpec{ + expectedTopic: rectesting.NewPubSubTopic(name, testNS, + rectesting.WithPubSubTopicSpec(pubsubsourcev1alpha1.TopicSpec{ Secret: &secret, Topic: testTopicID, PropagationPolicy: "CreateDelete", }), - rectesting.WithTopicLabels(map[string]string{ + rectesting.WithPubSubTopicLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithTopicReady(testTopicID), - rectesting.WithTopicProjectID(testProjectID), - rectesting.WithTopicAddress(testTopicURI), - rectesting.WithTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubTopicReady(testTopicID), + rectesting.WithPubSubTopicProjectID(testProjectID), + rectesting.WithPubSubTopicAddress(testTopicURI), + rectesting.WithPubSubTopicOwnerReferences([]metav1.OwnerReference{ownerRef()}), ), - expectedPS: rectesting.NewPullSubscriptionWithNoDefaults(name, testNS, - rectesting.WithPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ + expectedPS: rectesting.NewPubSubPullSubscriptionWithNoDefaults(name, testNS, + rectesting.WithPubSubPullSubscriptionSpecWithNoDefaults(pubsubsourcev1alpha1.PullSubscriptionSpec{ Topic: testTopicID, PubSubSpec: v1alpha1.PubSubSpec{ Secret: &secret, }, 
}), - rectesting.WithPullSubscriptionLabels(map[string]string{ + rectesting.WithPubSubPullSubscriptionLabels(map[string]string{ "receive-adapter": receiveAdapterName, "events.cloud.google.com/source-name": name, }), - rectesting.WithPullSubscriptionAnnotations(map[string]string{ + rectesting.WithPubSubPullSubscriptionAnnotations(map[string]string{ "metrics-resource-group": resourceGroup, }), - rectesting.WithPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), - rectesting.WithPullSubscriptionUnknown(), + rectesting.WithPubSubPullSubscriptionOwnerReferences([]metav1.OwnerReference{ownerRef()}), + rectesting.WithPubSubPullSubscriptionUnknown(), ), expectedErr: fmt.Sprintf("%s: the status of PullSubscription %q is Unknown", failedToPropagatePullSubscriptionStatusMsg, name), }} diff --git a/pkg/reconciler/pubsub/topic/topic_test.go b/pkg/reconciler/pubsub/topic/topic_test.go index 7516f94f46..8a82542070 100644 --- a/pkg/reconciler/pubsub/topic/topic_test.go +++ b/pkg/reconciler/pubsub/topic/topic_test.go @@ -132,14 +132,14 @@ func TestAllCases(t *testing.T) { }, { Name: "create client fails", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("NoCreateNoDelete"), + WithPubSubTopicPropagationPolicy("NoCreateNoDelete"), ), newSink(), newSecret(), @@ -158,30 +158,30 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, topicName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("NoCreateNoDelete"), + WithPubSubTopicPropagationPolicy("NoCreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileTopicMsg, "create-client-induced-error"))), + WithPubSubInitTopicConditions, + WithPubSubTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileTopicMsg, "create-client-induced-error"))), }}, }, { Name: "verify topic exists fails", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("NoCreateNoDelete"), + WithPubSubTopicPropagationPolicy("NoCreateNoDelete"), ), newSink(), newSecret(), @@ -202,30 +202,30 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, topicName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - 
WithTopicPropagationPolicy("NoCreateNoDelete"), + WithPubSubTopicPropagationPolicy("NoCreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileTopicMsg, "topic-exists-induced-error"))), + WithPubSubInitTopicConditions, + WithPubSubTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileTopicMsg, "topic-exists-induced-error"))), }}, }, { Name: "topic does not exist and propagation policy is NoCreateNoDelete", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("NoCreateNoDelete"), + WithPubSubTopicPropagationPolicy("NoCreateNoDelete"), ), newSink(), newSecret(), @@ -239,30 +239,30 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, topicName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("NoCreateNoDelete"), + WithPubSubTopicPropagationPolicy("NoCreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: Topic %q does not exist and the topic policy doesn't allow creation", failedToReconcileTopicMsg, testTopicID))), + WithPubSubInitTopicConditions, + WithPubSubTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: Topic %q does not exist and the topic policy doesn't allow creation", failedToReconcileTopicMsg, testTopicID))), }}, }, { Name: "create topic fails", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), ), newSink(), newSecret(), @@ -281,30 +281,30 @@ func TestAllCases(t *testing.T) { patchFinalizers(testNS, topicName, resourceGroup), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileTopicMsg, "create-topic-induced-error"))), + WithPubSubInitTopicConditions, + WithPubSubTopicNoTopic("TopicReconcileFailed", fmt.Sprintf("%s: %s", failedToReconcileTopicMsg, "create-topic-induced-error"))), }}, }, { Name: "publisher has not yet been reconciled", Objects: []runtime.Object{ - NewTopic(topicName, 
testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), ), newSink(), newSecret(), @@ -321,32 +321,32 @@ func TestAllCases(t *testing.T) { newPublisher(), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicReady(testTopicID), - WithTopicPublisherNotConfigured()), + WithPubSubInitTopicConditions, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicPublisherNotConfigured()), }}, }, { Name: "the status of publisher is false", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), ), newSink(), newSecret(), @@ -366,31 +366,31 @@ func TestAllCases(t *testing.T) { newPublisher(), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicReady(testTopicID), - WithTopicPublisherNotDeployed("PublisherNotDeployed", "PublisherNotDeployed")), + WithPubSubInitTopicConditions, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicPublisherNotDeployed("PublisherNotDeployed", "PublisherNotDeployed")), }}, }, { Name: "the status of publisher is unknown", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), ), newSink(), newSecret(), @@ -410,31 +410,31 @@ func TestAllCases(t *testing.T) { newPublisher(), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: 
testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicReady(testTopicID), - WithTopicPublisherUnknown("PublisherUnknown", "PublisherUnknown")), + WithPubSubInitTopicConditions, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicPublisherUnknown("PublisherUnknown", "PublisherUnknown")), }}, }, { Name: "topic successfully reconciles and is ready", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), ), newSink(), newSecret(), @@ -454,32 +454,32 @@ func TestAllCases(t *testing.T) { newPublisher(), }, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicReady(testTopicID), - WithTopicPublisherDeployed, - WithTopicAddress(testTopicURI)), + WithPubSubInitTopicConditions, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicPublisherDeployed, + WithPubSubTopicAddress(testTopicURI)), }}, }, { Name: "topic successfully reconciles and reuses existing publisher", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), ), newSink(), newSecret(), @@ -499,33 +499,33 @@ func TestAllCases(t *testing.T) { }, WithReactors: []clientgotesting.ReactionFunc{}, WantStatusUpdates: []clientgotesting.UpdateActionImpl{{ - Object: NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicProjectID(testProject), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + Object: NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicProjectID(testProject), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicPropagationPolicy("CreateNoDelete"), // Updates - WithInitTopicConditions, - WithTopicReady(testTopicID), - WithTopicPublisherDeployed, - WithTopicAddress(testTopicURI)), + WithPubSubInitTopicConditions, + WithPubSubTopicReady(testTopicID), + WithPubSubTopicPublisherDeployed, + WithPubSubTopicAddress(testTopicURI)), }}, }, { Name: "delete topic - policy CreateNoDelete", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: 
testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateNoDelete"), - WithTopicDeleted, + WithPubSubTopicPropagationPolicy("CreateNoDelete"), + WithPubSubTopicDeleted, ), newSink(), newSecret(), @@ -536,16 +536,16 @@ func TestAllCases(t *testing.T) { }, { Name: "delete topic - policy CreateDelete", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateDelete"), - WithTopicTopicID(topicName), - WithTopicDeleted, + WithPubSubTopicPropagationPolicy("CreateDelete"), + WithPubSubTopicTopicID(topicName), + WithPubSubTopicDeleted, ), newSink(), newSecret(), @@ -556,16 +556,16 @@ func TestAllCases(t *testing.T) { }, { Name: "fail to delete - policy CreateDelete", Objects: []runtime.Object{ - NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, }), - WithTopicPropagationPolicy("CreateDelete"), - WithTopicTopicID(topicName), - WithTopicDeleted, + WithPubSubTopicPropagationPolicy("CreateDelete"), + WithPubSubTopicTopicID(topicName), + WithPubSubTopicDeleted, ), newSink(), newSecret(), @@ -592,12 +592,12 @@ func TestAllCases(t *testing.T) { } r := &Reconciler{ PubSubBase: pubsubBase, - topicLister: listers.GetTopicLister(), + topicLister: listers.GetPubSubTopicLister(), serviceLister: listers.GetV1ServiceLister(), publisherImage: testImage, createClientFn: gpubsub.TestClientCreator(testData["topic"]), } - return topic.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetTopicLister(), r.Recorder, r) + return topic.NewReconciler(ctx, r.Logger, r.RunClientSet, listers.GetPubSubTopicLister(), r.Recorder, r) })) } @@ -689,9 +689,9 @@ func makeFalseStatusPublisher(reason, message string) *servingv1.Service { } func newPublisher() *servingv1.Service { - topic := NewTopic(topicName, testNS, - WithTopicUID(topicUID), - WithTopicSpec(pubsubv1alpha1.TopicSpec{ + topic := NewPubSubTopic(topicName, testNS, + WithPubSubTopicUID(topicUID), + WithPubSubTopicSpec(pubsubv1alpha1.TopicSpec{ Project: testProject, Topic: testTopicID, Secret: &secret, diff --git a/pkg/reconciler/testing/listers.go b/pkg/reconciler/testing/listers.go index 29324df038..ae36a25d28 100644 --- a/pkg/reconciler/testing/listers.go +++ b/pkg/reconciler/testing/listers.go @@ -125,11 +125,11 @@ func (l *Listers) GetIstioObjects() []runtime.Object { return l.sorter.ObjectsForSchemeFunc(fakeistioclientset.AddToScheme) } -func (l *Listers) GetPullSubscriptionLister() pubsublisters.PullSubscriptionLister { +func (l *Listers) GetPubSubPullSubscriptionLister() pubsublisters.PullSubscriptionLister { return pubsublisters.NewPullSubscriptionLister(l.indexerFor(&pubsubv1alpha1.PullSubscription{})) } -func (l *Listers) GetTopicLister() pubsublisters.TopicLister { +func (l *Listers) GetPubSubTopicLister() pubsublisters.TopicLister { return pubsublisters.NewTopicLister(l.indexerFor(&pubsubv1alpha1.Topic{})) } diff --git a/pkg/reconciler/testing/pullsubscription.go b/pkg/reconciler/testing/pubsub_pullsubscription.go similarity index 54% rename from pkg/reconciler/testing/pullsubscription.go rename to pkg/reconciler/testing/pubsub_pullsubscription.go 
index 0ac7ef7f48..3772f4878a 100644 --- a/pkg/reconciler/testing/pullsubscription.go +++ b/pkg/reconciler/testing/pubsub_pullsubscription.go @@ -30,11 +30,11 @@ import ( "github.com/google/knative-gcp/pkg/apis/pubsub/v1alpha1" ) -// PullSubscriptionOption enables further configuration of a PullSubscription. -type PullSubscriptionOption func(*v1alpha1.PullSubscription) +// PubSubPullSubscriptionOption enables further configuration of a PullSubscription. +type PubSubPullSubscriptionOption func(*v1alpha1.PullSubscription) -// NewPullSubscription creates a PullSubscription with PullSubscriptionOptions -func NewPullSubscription(name, namespace string, so ...PullSubscriptionOption) *v1alpha1.PullSubscription { +// NewPubSubPullSubscription creates a PullSubscription with PullSubscriptionOptions +func NewPubSubPullSubscription(name, namespace string, so ...PubSubPullSubscriptionOption) *v1alpha1.PullSubscription { s := &v1alpha1.PullSubscription{ ObjectMeta: metav1.ObjectMeta{ Name: name, @@ -48,9 +48,9 @@ func NewPullSubscription(name, namespace string, so ...PullSubscriptionOption) * return s } -// NewPullSubscriptionWithNoDefaults creates a PullSubscription with +// NewPubSubPullSubscriptionWithNoDefaults creates a PullSubscription with // PullSubscriptionOptions but does not set defaults. -func NewPullSubscriptionWithNoDefaults(name, namespace string, so ...PullSubscriptionOption) *v1alpha1.PullSubscription { +func NewPubSubPullSubscriptionWithNoDefaults(name, namespace string, so ...PubSubPullSubscriptionOption) *v1alpha1.PullSubscription { s := &v1alpha1.PullSubscription{ ObjectMeta: metav1.ObjectMeta{ Name: name, @@ -63,38 +63,18 @@ func NewPullSubscriptionWithNoDefaults(name, namespace string, so ...PullSubscri return s } -// NewPullSubscriptionWithoutNamespace creates a PullSubscription with PullSubscriptionOptions but without a specific namespace -func NewPullSubscriptionWithoutNamespace(name string, so ...PullSubscriptionOption) *v1alpha1.PullSubscription { - s := &v1alpha1.PullSubscription{ - ObjectMeta: metav1.ObjectMeta{ - Name: name, - }, - } - for _, opt := range so { - opt(s) - } - s.SetDefaults(context.Background()) - return s -} - -func WithPullSubscriptionUID(uid types.UID) PullSubscriptionOption { +func WithPubSubPullSubscriptionUID(uid types.UID) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.UID = uid } } -func WithPullSubscriptionGenerateName(generateName string) PullSubscriptionOption { - return func(c *v1alpha1.PullSubscription) { - c.ObjectMeta.GenerateName = generateName - } -} - -// WithInitPullSubscriptionConditions initializes the PullSubscriptions's conditions. -func WithInitPullSubscriptionConditions(s *v1alpha1.PullSubscription) { +// WithPubSubInitPullSubscriptionConditions initializes the PullSubscriptions's conditions. 
+func WithPubSubInitPullSubscriptionConditions(s *v1alpha1.PullSubscription) { s.Status.InitializeConditions() } -func WithPullSubscriptionSink(gvk metav1.GroupVersionKind, name string) PullSubscriptionOption { +func WithPubSubPullSubscriptionSink(gvk metav1.GroupVersionKind, name string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Spec.Sink = duckv1.Destination{ Ref: &duckv1.KReference{ @@ -106,7 +86,7 @@ func WithPullSubscriptionSink(gvk metav1.GroupVersionKind, name string) PullSubs } } -func WithPullSubscriptionTransformer(gvk metav1.GroupVersionKind, name string) PullSubscriptionOption { +func WithPubSubPullSubscriptionTransformer(gvk metav1.GroupVersionKind, name string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Spec.Transformer = &duckv1.Destination{ Ref: &duckv1.KReference{ @@ -118,59 +98,59 @@ func WithPullSubscriptionTransformer(gvk metav1.GroupVersionKind, name string) P } } -func WithPullSubscriptionMarkSink(uri *apis.URL) PullSubscriptionOption { +func WithPubSubPullSubscriptionMarkSink(uri *apis.URL) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.MarkSink(uri) } } -func WithPullSubscriptionMarkTransformer(uri *apis.URL) PullSubscriptionOption { +func WithPubSubPullSubscriptionMarkTransformer(uri *apis.URL) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.MarkTransformer(uri) } } -func WithPullSubscriptionMarkNoTransformer(reason, message string) PullSubscriptionOption { +func WithPubSubPullSubscriptionMarkNoTransformer(reason, message string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.MarkNoTransformer(reason, message) } } -func WithPullSubscriptionMarkSubscribed(subscriptionID string) PullSubscriptionOption { +func WithPubSubPullSubscriptionMarkSubscribed(subscriptionID string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.MarkSubscribed(subscriptionID) } } -func WithPullSubscriptionSubscriptionID(subscriptionID string) PullSubscriptionOption { +func WithPubSubPullSubscriptionSubscriptionID(subscriptionID string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.SubscriptionID = subscriptionID } } -func WithPullSubscriptionProjectID(projectID string) PullSubscriptionOption { +func WithPubSubPullSubscriptionProjectID(projectID string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.ProjectID = projectID } } -func WithPullSubscriptionTransformerURI(uri *apis.URL) PullSubscriptionOption { +func WithPubSubPullSubscriptionTransformerURI(uri *apis.URL) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.TransformerURI = uri } } -func WithPullSubscriptionMarkNoSubscription(reason, message string) PullSubscriptionOption { +func WithPubSubPullSubscriptionMarkNoSubscription(reason, message string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.MarkNoSubscription(reason, message) } } -func WithPullSubscriptionMarkDeployed(ps *v1alpha1.PullSubscription) { +func WithPubSubPullSubscriptionMarkDeployed(ps *v1alpha1.PullSubscription) { ps.Status.MarkDeployed() } -func WithPullSubscriptionSpec(spec v1alpha1.PullSubscriptionSpec) PullSubscriptionOption { +func WithPubSubPullSubscriptionSpec(spec v1alpha1.PullSubscriptionSpec) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Spec = spec 
s.Spec.SetDefaults(context.Background()) @@ -178,13 +158,13 @@ func WithPullSubscriptionSpec(spec v1alpha1.PullSubscriptionSpec) PullSubscripti } // Same as withPullSubscriptionSpec but does not set defaults -func WithPullSubscriptionSpecWithNoDefaults(spec v1alpha1.PullSubscriptionSpec) PullSubscriptionOption { +func WithPubSubPullSubscriptionSpecWithNoDefaults(spec v1alpha1.PullSubscriptionSpec) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Spec = spec } } -func WithPullSubscriptionReady(sink *apis.URL) PullSubscriptionOption { +func WithPubSubPullSubscriptionReady(sink *apis.URL) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.InitializeConditions() s.Status.MarkSink(sink) @@ -193,7 +173,7 @@ func WithPullSubscriptionReady(sink *apis.URL) PullSubscriptionOption { } } -func WithPullSubscriptionFailed() PullSubscriptionOption { +func WithPubSubPullSubscriptionFailed() PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.InitializeConditions() s.Status.MarkNoSink("InvalidSink", @@ -202,68 +182,55 @@ func WithPullSubscriptionFailed() PullSubscriptionOption { } } -func WithPullSubscriptionUnknown() PullSubscriptionOption { +func WithPubSubPullSubscriptionUnknown() PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.InitializeConditions() } } -func WithPullSubscriptionJobFailure(subscriptionID, reason, message string) PullSubscriptionOption { - return func(s *v1alpha1.PullSubscription) { - s.Status.SubscriptionID = subscriptionID - s.Status.MarkNoSubscription(reason, message) - } -} - -func WithPullSubscriptionSinkNotFound() PullSubscriptionOption { +func WithPubSubPullSubscriptionSinkNotFound() PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.MarkNoSink("InvalidSink", `failed to get ref &ObjectReference{Kind:Sink,Namespace:testnamespace,Name:sink,UID:,APIVersion:testing.cloud.google.com/v1alpha1,ResourceVersion:,FieldPath:,}: sinks.testing.cloud.google.com "sink" not found`) } } -func WithPullSubscriptionDeleted(s *v1alpha1.PullSubscription) { +func WithPubSubPullSubscriptionDeleted(s *v1alpha1.PullSubscription) { t := metav1.NewTime(time.Unix(1e9, 0)) s.ObjectMeta.SetDeletionTimestamp(&t) } -func WithPullSubscriptionOwnerReferences(ownerReferences []metav1.OwnerReference) PullSubscriptionOption { +func WithPubSubPullSubscriptionOwnerReferences(ownerReferences []metav1.OwnerReference) PubSubPullSubscriptionOption { return func(c *v1alpha1.PullSubscription) { c.ObjectMeta.OwnerReferences = ownerReferences } } -func WithPullSubscriptionLabels(labels map[string]string) PullSubscriptionOption { +func WithPubSubPullSubscriptionLabels(labels map[string]string) PubSubPullSubscriptionOption { return func(c *v1alpha1.PullSubscription) { c.ObjectMeta.Labels = labels } } -func WithPullSubscriptionAnnotations(annotations map[string]string) PullSubscriptionOption { +func WithPubSubPullSubscriptionAnnotations(annotations map[string]string) PubSubPullSubscriptionOption { return func(c *v1alpha1.PullSubscription) { c.ObjectMeta.Annotations = annotations } } -func WithPullSubscriptionFinalizers(finalizers ...string) PullSubscriptionOption { - return func(s *v1alpha1.PullSubscription) { - s.Finalizers = finalizers - } -} - -func WithPullSubscriptionStatusObservedGeneration(generation int64) PullSubscriptionOption { +func WithPubSubPullSubscriptionStatusObservedGeneration(generation int64) PubSubPullSubscriptionOption { return func(s 
*v1alpha1.PullSubscription) { s.Status.Status.ObservedGeneration = generation } } -func WithPullSubscriptionObjectMetaGeneration(generation int64) PullSubscriptionOption { +func WithPubSubPullSubscriptionObjectMetaGeneration(generation int64) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.ObjectMeta.Generation = generation } } -func WithPullSubscriptionReadyStatus(status corev1.ConditionStatus, reason, message string) PullSubscriptionOption { +func WithPubSubPullSubscriptionReadyStatus(status corev1.ConditionStatus, reason, message string) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Status.Conditions = []apis.Condition{{ Type: apis.ConditionReady, @@ -274,7 +241,7 @@ func WithPullSubscriptionReadyStatus(status corev1.ConditionStatus, reason, mess } } -func WithPullSubscriptionMode(mode v1alpha1.ModeType) PullSubscriptionOption { +func WithPubSubPullSubscriptionMode(mode v1alpha1.ModeType) PubSubPullSubscriptionOption { return func(s *v1alpha1.PullSubscription) { s.Spec.Mode = mode } diff --git a/pkg/reconciler/testing/topic.go b/pkg/reconciler/testing/pubsub_topic.go similarity index 59% rename from pkg/reconciler/testing/topic.go rename to pkg/reconciler/testing/pubsub_topic.go index d38bc549a8..36abf1d57b 100644 --- a/pkg/reconciler/testing/topic.go +++ b/pkg/reconciler/testing/pubsub_topic.go @@ -28,11 +28,11 @@ import ( "github.com/google/knative-gcp/pkg/apis/pubsub/v1alpha1" ) -// TopicOption enables further configuration of a Topic. -type TopicOption func(*v1alpha1.Topic) +// PubSubTopicOption enables further configuration of a Topic. +type PubSubTopicOption func(*v1alpha1.Topic) -// NewTopic creates a Topic with TopicOptions -func NewTopic(name, namespace string, so ...TopicOption) *v1alpha1.Topic { +// NewPubSubTopic creates a Topic with TopicOptions +func NewPubSubTopic(name, namespace string, so ...PubSubTopicOption) *v1alpha1.Topic { s := &v1alpha1.Topic{ ObjectMeta: metav1.ObjectMeta{ Name: name, @@ -46,45 +46,31 @@ func NewTopic(name, namespace string, so ...TopicOption) *v1alpha1.Topic { return s } -func WithTopicUID(uid types.UID) TopicOption { +func WithPubSubTopicUID(uid types.UID) PubSubTopicOption { return func(s *v1alpha1.Topic) { s.UID = uid } } -// WithInitTopicConditions initializes the Topics's conditions. -func WithInitTopicConditions(s *v1alpha1.Topic) { +// WithPubSubInitTopicConditions initializes the Topics's conditions. 
+func WithPubSubInitTopicConditions(s *v1alpha1.Topic) { s.Status.InitializeConditions() } -func WithTopicTopicID(topicID string) TopicOption { +func WithPubSubTopicTopicID(topicID string) PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Status.MarkTopicReady() s.Status.TopicID = topicID } } -func WithTopicPropagationPolicy(policy string) TopicOption { +func WithPubSubTopicPropagationPolicy(policy string) PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Spec.PropagationPolicy = v1alpha1.PropagationPolicyType(policy) } } -func WithTopicTopicDeleted(topicID string) TopicOption { - return func(s *v1alpha1.Topic) { - s.Status.MarkNoTopic("Deleted", "Successfully deleted topic %q.", topicID) - s.Status.TopicID = "" - } -} - -func WithTopicJobFailure(topicID, reason, message string) TopicOption { - return func(s *v1alpha1.Topic) { - s.Status.TopicID = topicID - s.Status.MarkNoTopic(reason, message) - } -} - -func WithTopicAddress(uri string) TopicOption { +func WithPubSubTopicAddress(uri string) PubSubTopicOption { return func(s *v1alpha1.Topic) { if uri != "" { u, _ := apis.ParseURL(uri) @@ -95,41 +81,41 @@ func WithTopicAddress(uri string) TopicOption { } } -func WithTopicSpec(spec v1alpha1.TopicSpec) TopicOption { +func WithPubSubTopicSpec(spec v1alpha1.TopicSpec) PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Spec = spec } } -func WithTopicPublisherDeployed(s *v1alpha1.Topic) { +func WithPubSubTopicPublisherDeployed(s *v1alpha1.Topic) { s.Status.MarkPublisherDeployed() } -func WithTopicPublisherNotDeployed(reason, message string) TopicOption { +func WithPubSubTopicPublisherNotDeployed(reason, message string) PubSubTopicOption { return func(t *v1alpha1.Topic) { t.Status.MarkPublisherNotDeployed(reason, message) } } -func WithTopicPublisherUnknown(reason, message string) TopicOption { +func WithPubSubTopicPublisherUnknown(reason, message string) PubSubTopicOption { return func(t *v1alpha1.Topic) { t.Status.MarkPublisherUnknown(reason, message) } } -func WithTopicPublisherNotConfigured() TopicOption { +func WithPubSubTopicPublisherNotConfigured() PubSubTopicOption { return func(t *v1alpha1.Topic) { t.Status.MarkPublisherNotConfigured() } } -func WithTopicProjectID(projectID string) TopicOption { +func WithPubSubTopicProjectID(projectID string) PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Status.ProjectID = projectID } } -func WithTopicReady(topicID string) TopicOption { +func WithPubSubTopicReady(topicID string) PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Status.InitializeConditions() s.Status.MarkPublisherDeployed() @@ -138,44 +124,38 @@ func WithTopicReady(topicID string) TopicOption { } } -func WithTopicFailed() TopicOption { +func WithPubSubTopicFailed() PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Status.InitializeConditions() s.Status.MarkPublisherNotDeployed("PublisherStatus", "Publisher has no Ready type status") } } -func WithTopicUnknown() TopicOption { +func WithPubSubTopicUnknown() PubSubTopicOption { return func(s *v1alpha1.Topic) { s.Status.InitializeConditions() } } -func WithTopicDeleted(t *v1alpha1.Topic) { +func WithPubSubTopicDeleted(t *v1alpha1.Topic) { tt := metav1.NewTime(time.Unix(1e9, 0)) t.ObjectMeta.SetDeletionTimestamp(&tt) } -func WithTopicOwnerReferences(ownerReferences []metav1.OwnerReference) TopicOption { +func WithPubSubTopicOwnerReferences(ownerReferences []metav1.OwnerReference) PubSubTopicOption { return func(c *v1alpha1.Topic) { c.ObjectMeta.OwnerReferences = ownerReferences } } -func WithTopicLabels(labels 
map[string]string) TopicOption { +func WithPubSubTopicLabels(labels map[string]string) PubSubTopicOption { return func(c *v1alpha1.Topic) { c.ObjectMeta.Labels = labels } } -func WithTopicNoTopic(reason, message string) TopicOption { +func WithPubSubTopicNoTopic(reason, message string) PubSubTopicOption { return func(t *v1alpha1.Topic) { t.Status.MarkNoTopic(reason, message) } } - -func WithTopicFinalizers(finalizers ...string) TopicOption { - return func(s *v1alpha1.Topic) { - s.Finalizers = finalizers - } -} diff --git a/test/e2e/test_pullsubscription.go b/test/e2e/test_pullsubscription.go index 24c83310d5..cc41a31946 100644 --- a/test/e2e/test_pullsubscription.go +++ b/test/e2e/test_pullsubscription.go @@ -46,8 +46,8 @@ func SmokePullSubscriptionTestImpl(t *testing.T, authConfig lib.AuthConfig) { defer lib.TearDown(client) // Create PullSubscription. - pullsubscription := kngcptesting.NewPullSubscription(psName, client.Namespace, - kngcptesting.WithPullSubscriptionSpec(v1alpha1.PullSubscriptionSpec{ + pullsubscription := kngcptesting.NewPubSubPullSubscription(psName, client.Namespace, + kngcptesting.WithPubSubPullSubscriptionSpec(v1alpha1.PullSubscriptionSpec{ Topic: topic, PubSubSpec: duckv1alpha1.PubSubSpec{ IdentitySpec: duckv1alpha1.IdentitySpec{ @@ -55,7 +55,7 @@ func SmokePullSubscriptionTestImpl(t *testing.T, authConfig lib.AuthConfig) { }, }, }), - kngcptesting.WithPullSubscriptionSink(lib.ServiceGVK, svcName)) + kngcptesting.WithPubSubPullSubscriptionSink(lib.ServiceGVK, svcName)) client.CreatePullSubscriptionOrFail(pullsubscription) client.Core.WaitForResourceReadyOrFail(psName, lib.PullSubscriptionTypeMeta) @@ -80,15 +80,15 @@ func PullSubscriptionWithTargetTestImpl(t *testing.T, authConfig lib.AuthConfig) client.CreateJobOrFail(job, lib.WithServiceForJob(targetName)) // Create PullSubscription. 
- pullsubscription := kngcptesting.NewPullSubscription(psName, client.Namespace, - kngcptesting.WithPullSubscriptionSpec(v1alpha1.PullSubscriptionSpec{ + pullsubscription := kngcptesting.NewPubSubPullSubscription(psName, client.Namespace, + kngcptesting.WithPubSubPullSubscriptionSpec(v1alpha1.PullSubscriptionSpec{ Topic: topicName, PubSubSpec: duckv1alpha1.PubSubSpec{ IdentitySpec: duckv1alpha1.IdentitySpec{ authConfig.PubsubServiceAccount, }, }, - }), kngcptesting.WithPullSubscriptionSink(lib.ServiceGVK, targetName)) + }), kngcptesting.WithPubSubPullSubscriptionSink(lib.ServiceGVK, targetName)) client.CreatePullSubscriptionOrFail(pullsubscription) client.Core.WaitForResourceReadyOrFail(psName, lib.PullSubscriptionTypeMeta) From c06fdff4be04cf734f3dfb2af6766b39d1aa0f10 Mon Sep 17 00:00:00 2001 From: capri-xiyue <52932582+capri-xiyue@users.noreply.github.com> Date: Tue, 5 May 2020 18:23:44 -0700 Subject: [PATCH 09/12] update the docs of how to run e2e tests (#892) * update the docs of how to run e2e tests with existing cluster * modified the e2e test docs with how to run tests with new cluster * added how to run e2e tests with workload identity in existing cluster * modified the code based on comments * fixed format * fixed typo * fixed format --- docs/examples/cloudschedulersource/README.md | 14 +- ...-config-tracing-configmap-with-zipkin.yaml | 18 ++ test/e2e/README.md | 178 +++++++++++++++--- 3 files changed, 178 insertions(+), 32 deletions(-) create mode 100644 docs/install/patch-config-tracing-configmap-with-zipkin.yaml mode change 100644 => 100755 test/e2e/README.md diff --git a/docs/examples/cloudschedulersource/README.md b/docs/examples/cloudschedulersource/README.md index 0ca469309e..bb38db63dc 100644 --- a/docs/examples/cloudschedulersource/README.md +++ b/docs/examples/cloudschedulersource/README.md @@ -8,11 +8,17 @@ scheduled events from ## Prerequisites -1. [Install Knative-GCP](../../install/install-knative-gcp.md). Note that your - project needs to be created with an App Engine application. Refer to this - [guide](https://cloud.google.com/scheduler/docs/quickstart#create_a_project_with_an_app_engine_app) - for more details. +1. [Install Knative-GCP](../../install/install-knative-gcp.md). +1. Create an App Engine application in your project. Refer to this + [guide](https://cloud.google.com/scheduler/docs/quickstart#create_a_project_with_an_app_engine_app) + for more details. You can change the APP_ENGINE_LOCATION, + but please make sure you also update the spec.location in [`CloudSchedulerSource`](cloudschedulersource.yaml). + + ```shell + export APP_ENGINE_LOCATION=us-central1 + gcloud app create --region=$APP_ENGINE_LOCATION + ``` 1. [Create a Pub/Sub enabled Service Account](../../install/pubsub-service-account.md) 1. Enable the `Cloud Scheduler API` on your project: diff --git a/docs/install/patch-config-tracing-configmap-with-zipkin.yaml b/docs/install/patch-config-tracing-configmap-with-zipkin.yaml new file mode 100644 index 0000000000..75213c7563 --- /dev/null +++ b/docs/install/patch-config-tracing-configmap-with-zipkin.yaml @@ -0,0 +1,18 @@ +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +data: + # Configuration for tracing to use the Zipkin backend + backend: "zipkin" + zipkin-endpoint: "http://zipkin.istio-system.svc.cluster.local:9411/api/v2/spans" diff --git a/test/e2e/README.md b/test/e2e/README.md old mode 100644 new mode 100755 index ac565e0a51..ea3e1b00fa --- a/test/e2e/README.md +++ b/test/e2e/README.md @@ -1,6 +1,8 @@ # E2E Tests -Prow will run `./e2e-tests.sh`. +Prow will run `./e2e-tests.sh` with authentication mechanism using Kubernetes +Secrets and `./e2e-wi-tests.sh` with authentication mechanism using Workload +Identity. ## Adding E2E Tests @@ -24,53 +26,171 @@ knative-gcp should be added under [knative-gcp e2e test lib](lib). ## Running E2E Tests on an existing cluster -To run [the e2e tests](../e2e) with `go test` command, you need to have a -running environment that meets -[the e2e test environment requirements](#environment-requirements), and you need -to specify the build tag `e2e`. +### Prerequisites + +There are two ways to set up the authentication mechanism. + +- If you want to run E2E tests with authentication mechanism using + **[Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)**, + please follow the instructions below to configure the authentication mechanism + with **[Workload + Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)**. + **[Workload + Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)** + is GKE-specific. +- If you want to run E2E tests with authentication mechanism using **Kubernetes + Secrets**, please follow the instructions below to configure the authentication + mechanism with **Kubernetes Secrets**. + +1. A running Kubernetes cluster with + [knative-gcp](../../docs/install/install-knative-gcp.md) installed and + configured. +1. [Pub/Sub Enabled Service Account](../../docs/install/pubsub-service-account.md) + installed. +1. [GCP Broker Deployment](../../docs/install/install-gcp-broker.md#deployment) + and + [GCP Broker Authentication Setup](../../docs/install/install-gcp-broker.md#authentication-setup-for-gcp-broker). +1. [Broker with Pub/Sub Channel](../../docs/install/install-broker-with-pubsub-channel.md) + installed. +1. [CloudSchedulerSource Prerequisites](../../docs/examples/cloudschedulersource/README.md#prerequisites). + Note that you only need to: + 1. Create an App Engine application in your project. +1. [CloudStorageSource Prerequisites](../../docs/examples/cloudstoragesource/README.md#prerequisites). + Note that you only need to: + 1. Enable the Cloud Storage API on your project. + 1. Give Google Cloud Storage permissions to publish to GCP Pub/Sub. +1. A docker repo containing [the test images](#test-images). Remember to + specify the build tag `e2e`. +1.
(Optional) Note that if you plan on running metrics-related E2E tests using + the StackDriver backend, you need to give your + [Service Account](../../docs/install/pubsub-service-account.md) the + `Monitoring Editor` role on your Google Cloud project: + + ```shell + gcloud projects add-iam-policy-binding $PROJECT_ID \ + --member=serviceAccount:cloudrunevents-pullsub@$PROJECT_ID.iam.gserviceaccount.com \ + --role roles/monitoring.editor + ``` + +1. (Optional) Note that if you plan on running tracing-related E2E tests using the + Zipkin backend, you need to install + [zipkin-in-mem](https://github.com/knative/serving/tree/master/config/monitoring/tracing/zipkin-in-mem) + and patch the configmap `config-tracing` in the `knative-eventing` namespace + to use the Zipkin backend with + [patch-config-tracing-configmap-with-zipkin.yaml](../../docs/install/patch-config-tracing-configmap-with-zipkin.yaml). + + ```shell + kubectl patch configmap config-tracing -n knative-eventing --patch "\$(cat patch-config-tracing-configmap-with-zipkin.yaml)" + ``` + +### Running E2E tests + +### Running E2E tests with authentication mechanism using Kubernetes Secrets ```shell -go test --tags=e2e ./test/e2e/... +E2E_PROJECT_ID= \ + go test --tags=e2e ./test/e2e/... ``` And count is supported too: ```shell -go test --tags=e2e ./test/e2e/... --count=3 +E2E_PROJECT_ID= \ + go test --tags=e2e ./test/e2e/... --count=3 ``` If you want to run a specific test: ```shell -go test --tags=e2e ./test/e2e/... -run NameOfTest +E2E_PROJECT_ID= \ + go test --tags=e2e ./test/e2e/... -run NameOfTest ``` For example, to run TestPullSubscription: ```shell -GOOGLE_APPLICATION_CREDENTIALS= \ E2E_PROJECT_ID= \ go test --tags=e2e ./test/e2e/... -run TestPullSubscription ``` -Note that if you plan on running metrics-related E2E tests using the StackDriver -backend, you need to give your -[Service Account](../../docs/install/pubsub-service-account.md) the -`Monitoring Editor` role on your Google Cloud project: +### Running E2E tests with authentication mechanism using Workload Identity + +Pass `-pubsubServiceAccount=$PUBSUB_SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com`, +where `$PUBSUB_SERVICE_ACCOUNT@$PROJECT_ID.iam.gserviceaccount.com` is the +Pub/Sub enabled Google Cloud Service Account. + +```shell +E2E_PROJECT_ID= go test --tags=e2e \ + -workloadIndentity=true \ + -pubsubServiceAccount=cre-pubsub@$PROJECT_ID.iam.gserviceaccount.com \ + ./test/e2e/... +``` + +And count is supported too: ```shell -gcloud projects add-iam-policy-binding $PROJECT_ID \ - --member=serviceAccount:cloudrunevents-pullsub@$PROJECT_ID.iam.gserviceaccount.com \ - --role roles/monitoring.editor +E2E_PROJECT_ID= go test --tags=e2e \ + -workloadIndentity=true \ + -pubsubServiceAccount=cre-pubsub@$PROJECT_ID.iam.gserviceaccount.com \ + ./test/e2e/... --count=3 ``` -## Environment requirements +If you want to run a specific test: + +```shell +E2E_PROJECT_ID= go test --tags=e2e \ + -workloadIndentity=true \ + -pubsubServiceAccount=cre-pubsub@$PROJECT_ID.iam.gserviceaccount.com \ + ./test/e2e/... -run NameOfTest +``` + +For example, to run TestPullSubscription: + +```shell +E2E_PROJECT_ID= go test --tags=e2e \ + -workloadIndentity=true \ + -pubsubServiceAccount=cre-pubsub@$PROJECT_ID.iam.gserviceaccount.com \ + ./test/e2e/... -run TestPullSubscription +``` + +## Running E2E Tests on a new cluster + +### Prerequisites + +1. Enable necessary APIs: + + ```shell + gcloud services enable compute.googleapis.com + gcloud services enable container.googleapis.com + ``` + +1.
Install + [kubetest](https://github.com/kubernetes/test-infra/issues/15700#issuecomment-571114504). + (Note this is just a workaround because of + [kubernetes issue](https://github.com/kubernetes/test-infra/issues/15700).) -There's couple of things you need to install before running e2e tests locally. +1. Set the project in which you want to run E2E tests to be the default one with: -1. A running Kubernetes cluster with [knative-gcp](../../docs/install) installed - and configured -1. A docker repo containing [the test images](#test-images) + ```shell + export PROJECT= + gcloud config set core/project $PROJECT + ``` + +### Running E2E tests + +If you want to run E2E tests with authentication mechanism using **Kubernetes +Secrets**: + +```shell +./test/e2e-tests.sh +``` + +If you want to run E2E tests with authentication mechanism using **Workload +Identity**: + +```shell +./test/e2e-wi-tests.sh +``` ## Test images @@ -89,21 +209,23 @@ build and push the test images used by the e2e tests. It requires: [authenticated with your `KO_DOCKER_REPO`](https://github.com/knative/serving/blob/master/DEVELOPMENT.md#environment-setup) - [`docker`](https://docs.docker.com/install/) to be installed -To run the script for all end to end test images: - -```bash -./test/upload-test-images.sh ./test/test_images -./test/upload-test-images.sh ./vendor/knative.dev/eventing/test/test_images/ -``` - For images deployed in GCR, a docker tag is mandatory to avoid issues with using `latest` tag: ```bash ./test/upload-test-images.sh ./test/test_images e2e +sed -i 's@ko://knative.dev/eventing/test/test_images@ko://github.com/google/knative-gcp/vendor/knative.dev/eventing/test/test_images@g' vendor/knative.dev/eventing/test/test_images/*/*.yaml ./test/upload-test-images.sh ./vendor/knative.dev/eventing/test/test_images/ e2e ``` +To run the script for all end to end test images: + +```bash +./test/upload-test-images.sh ./test/test_images +sed -i 's@ko://knative.dev/eventing/test/test_images@ko://github.com/google/knative-gcp/vendor/knative.dev/eventing/test/test_images@g' vendor/knative.dev/eventing/test/test_images/*/*.yaml +./test/upload-test-images.sh ./vendor/knative.dev/eventing/test/test_images/ +``` + ### Adding new test images New test images should be placed in `./test/test_images`. For each image create From 284d8487f266fb4ddebfdf250ae65b3a5a0ac33d Mon Sep 17 00:00:00 2001 From: Ian Milligan <ianmllgn@gmail.com> Date: Tue, 5 May 2020 20:04:44 -0700 Subject: [PATCH 10/12] Unpin prometheus lib versions (#1002) Upgrades github.com/prometheus/client_model to v0.2.0, github.com/prometheus/common to v0.9.1, and github.com/prometheus/procfs to v0.0.11.
--- go.mod | 6 - go.sum | 30 +- .../prometheus/client_model/go/metrics.pb.go | 268 ++++-- .../prometheus/common/expfmt/encode.go | 124 ++- .../prometheus/common/expfmt/expfmt.go | 11 +- .../common/expfmt/openmetrics_create.go | 527 +++++++++++ .../prometheus/common/expfmt/text_create.go | 3 +- .../prometheus/procfs/.golangci.yml | 6 +- .../prometheus/procfs/CONTRIBUTING.md | 109 ++- .../prometheus/procfs/Makefile.common | 32 +- vendor/github.com/prometheus/procfs/README.md | 16 +- .../github.com/prometheus/procfs/cpuinfo.go | 5 +- vendor/github.com/prometheus/procfs/crypto.go | 160 ++-- .../prometheus/procfs/fixtures.ttar | 837 +++++++++++++++++- vendor/github.com/prometheus/procfs/go.mod | 7 +- vendor/github.com/prometheus/procfs/go.sum | 10 +- .../prometheus/procfs/internal/fs/fs.go | 2 +- .../procfs/internal/util/readfile.go | 38 + .../procfs/internal/util/sysreadfile.go | 5 +- .../procfs/internal/util/valueparser.go | 18 +- vendor/github.com/prometheus/procfs/ipvs.go | 12 +- .../github.com/prometheus/procfs/loadavg.go | 62 ++ .../github.com/prometheus/procfs/meminfo.go | 277 ++++++ .../github.com/prometheus/procfs/mountinfo.go | 122 +-- .../prometheus/procfs/net_conntrackstat.go | 153 ++++ .../github.com/prometheus/procfs/net_dev.go | 1 - .../prometheus/procfs/net_sockstat.go | 163 ++++ .../prometheus/procfs/net_softnet.go | 107 ++- .../github.com/prometheus/procfs/net_udp.go | 229 +++++ .../github.com/prometheus/procfs/net_unix.go | 224 +++-- vendor/github.com/prometheus/procfs/proc.go | 23 +- .../prometheus/procfs/proc_environ.go | 12 +- .../prometheus/procfs/proc_fdinfo.go | 49 +- .../github.com/prometheus/procfs/proc_io.go | 12 +- .../github.com/prometheus/procfs/proc_maps.go | 208 +++++ .../github.com/prometheus/procfs/proc_psi.go | 21 +- .../github.com/prometheus/procfs/proc_stat.go | 10 +- .../prometheus/procfs/proc_status.go | 49 +- vendor/github.com/prometheus/procfs/stat.go | 12 +- vendor/github.com/prometheus/procfs/swaps.go | 89 ++ vendor/modules.txt | 6 +- 41 files changed, 3475 insertions(+), 580 deletions(-) create mode 100644 vendor/github.com/prometheus/common/expfmt/openmetrics_create.go create mode 100644 vendor/github.com/prometheus/procfs/internal/util/readfile.go create mode 100644 vendor/github.com/prometheus/procfs/loadavg.go create mode 100644 vendor/github.com/prometheus/procfs/meminfo.go create mode 100644 vendor/github.com/prometheus/procfs/net_conntrackstat.go create mode 100644 vendor/github.com/prometheus/procfs/net_sockstat.go create mode 100644 vendor/github.com/prometheus/procfs/net_udp.go create mode 100644 vendor/github.com/prometheus/procfs/proc_maps.go create mode 100644 vendor/github.com/prometheus/procfs/swaps.go diff --git a/go.mod b/go.mod index 4ffc6112d8..bf791616e2 100644 --- a/go.mod +++ b/go.mod @@ -71,12 +71,6 @@ replace github.com/modern-go/reflect2 => github.com/modern-go/reflect2 v0.0.0-20 replace github.com/pkg/errors => github.com/pkg/errors v0.8.1 -replace github.com/prometheus/client_model => github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 - -replace github.com/prometheus/common => github.com/prometheus/common v0.7.0 - -replace github.com/prometheus/procfs => github.com/prometheus/procfs v0.0.5 - replace github.com/robfig/cron/v3 => github.com/robfig/cron/v3 v3.0.0 replace go.uber.org/zap => go.uber.org/zap v1.9.2-0.20180814183419-67bc79d13d15 diff --git a/go.sum b/go.sum index 57a04e6910..3082526c06 100644 --- a/go.sum +++ b/go.sum @@ -92,7 +92,9 @@ github.com/PuerkitoBio/urlesc 
v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo= github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI= +github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= +github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/andygrunwald/go-gerrit v0.0.0-20190120104749-174420ebee6c/go.mod h1:0iuRQp6WJ44ts+iihy5E/WlPqfg5RNeQxOmzRkxCdtk= github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6/go.mod h1:V8iCPQYkqmusNa815XgQio277wI47sdRh1dUOLdyC6Q= @@ -197,7 +199,9 @@ github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0 github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= +github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= +github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logr/logr v0.1.0 h1:M1Tv3VzNlEHg6uyACnRdtrploV2P7wZqH8BoQMtz0cg= github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas= @@ -256,6 +260,7 @@ github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+ github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA= github.com/go-sql-driver/mysql v0.0.0-20160411075031-7ebe0a500653/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= +github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-test/deep v1.0.4/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0= github.com/gobuffalo/envy v1.6.5/go.mod h1:N+GkhhZ/93bGZc6ZKhJLP6+m+tCNPKwgSpH9kaifseQ= @@ -508,16 +513,33 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA= github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= +github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= 
github.com/prometheus/client_golang v1.1.0 h1:BQ53HtBmfOitExawJ6LokA4x8ov/z0SYYb0+HxJfRI8= github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g= +github.com/prometheus/client_model v0.0.0-20170216185247-6f3806018612/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= +github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= +github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY= -github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA= -github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8= -github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= +github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M= +github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= +github.com/prometheus/common v0.0.0-20180518154759-7600349dcfe1/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= +github.com/prometheus/common v0.0.0-20181020173914-7e9e6cabbd39/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= +github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= +github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= +github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc= +github.com/prometheus/common v0.9.1 h1:KOMtN28tlbam3/7ZKEYKHhKoJZYYj3gMH4uc62x7X7U= +github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4= +github.com/prometheus/procfs v0.0.0-20180612222113-7d6f385de8be/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= +github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= +github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= +github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= +github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= +github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A= +github.com/prometheus/procfs v0.0.11 h1:DhHlBtkHWPYi8O2y31JkK0TF+DGM+51OopZjH/Ia5qI= +github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4= github.com/remyoudompheng/bigfft v0.0.0-20170806203942-52369c62f446/go.mod h1:uYEyJGbgTkfkS4+E/PavXkNJcbFIpEtjt2B0KDQ5+9M= github.com/robfig/cron/v3 v3.0.0 h1:kQ6Cb7aHOHTSzNVNEhmp8EcWKLb4CbiMW9h9VyIhO4E= diff --git a/vendor/github.com/prometheus/client_model/go/metrics.pb.go b/vendor/github.com/prometheus/client_model/go/metrics.pb.go index 9805432c2a..2f4930d9dd 100644 --- a/vendor/github.com/prometheus/client_model/go/metrics.pb.go +++ b/vendor/github.com/prometheus/client_model/go/metrics.pb.go @@ -1,11 +1,14 @@ // Code generated by 
protoc-gen-go. DO NOT EDIT. // source: metrics.proto -package io_prometheus_client // import "github.com/prometheus/client_model/go" +package io_prometheus_client -import proto "github.com/golang/protobuf/proto" -import fmt "fmt" -import math "math" +import ( + fmt "fmt" + proto "github.com/golang/protobuf/proto" + timestamp "github.com/golang/protobuf/ptypes/timestamp" + math "math" +) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal @@ -16,7 +19,7 @@ var _ = math.Inf // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. -const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package +const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package type MetricType int32 @@ -35,6 +38,7 @@ var MetricType_name = map[int32]string{ 3: "UNTYPED", 4: "HISTOGRAM", } + var MetricType_value = map[string]int32{ "COUNTER": 0, "GAUGE": 1, @@ -48,9 +52,11 @@ func (x MetricType) Enum() *MetricType { *p = x return p } + func (x MetricType) String() string { return proto.EnumName(MetricType_name, int32(x)) } + func (x *MetricType) UnmarshalJSON(data []byte) error { value, err := proto.UnmarshalJSONEnum(MetricType_value, data, "MetricType") if err != nil { @@ -59,8 +65,9 @@ func (x *MetricType) UnmarshalJSON(data []byte) error { *x = MetricType(value) return nil } + func (MetricType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{0} + return fileDescriptor_6039342a2ba47b72, []int{0} } type LabelPair struct { @@ -75,16 +82,17 @@ func (m *LabelPair) Reset() { *m = LabelPair{} } func (m *LabelPair) String() string { return proto.CompactTextString(m) } func (*LabelPair) ProtoMessage() {} func (*LabelPair) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{0} + return fileDescriptor_6039342a2ba47b72, []int{0} } + func (m *LabelPair) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_LabelPair.Unmarshal(m, b) } func (m *LabelPair) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_LabelPair.Marshal(b, m, deterministic) } -func (dst *LabelPair) XXX_Merge(src proto.Message) { - xxx_messageInfo_LabelPair.Merge(dst, src) +func (m *LabelPair) XXX_Merge(src proto.Message) { + xxx_messageInfo_LabelPair.Merge(m, src) } func (m *LabelPair) XXX_Size() int { return xxx_messageInfo_LabelPair.Size(m) @@ -120,16 +128,17 @@ func (m *Gauge) Reset() { *m = Gauge{} } func (m *Gauge) String() string { return proto.CompactTextString(m) } func (*Gauge) ProtoMessage() {} func (*Gauge) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{1} + return fileDescriptor_6039342a2ba47b72, []int{1} } + func (m *Gauge) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Gauge.Unmarshal(m, b) } func (m *Gauge) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Gauge.Marshal(b, m, deterministic) } -func (dst *Gauge) XXX_Merge(src proto.Message) { - xxx_messageInfo_Gauge.Merge(dst, src) +func (m *Gauge) XXX_Merge(src proto.Message) { + xxx_messageInfo_Gauge.Merge(m, src) } func (m *Gauge) XXX_Size() int { return xxx_messageInfo_Gauge.Size(m) @@ -148,26 +157,28 @@ func (m *Gauge) GetValue() float64 { } type Counter struct { - Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - 
XXX_sizecache int32 `json:"-"` + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` + Exemplar *Exemplar `protobuf:"bytes,2,opt,name=exemplar" json:"exemplar,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *Counter) Reset() { *m = Counter{} } func (m *Counter) String() string { return proto.CompactTextString(m) } func (*Counter) ProtoMessage() {} func (*Counter) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{2} + return fileDescriptor_6039342a2ba47b72, []int{2} } + func (m *Counter) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Counter.Unmarshal(m, b) } func (m *Counter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Counter.Marshal(b, m, deterministic) } -func (dst *Counter) XXX_Merge(src proto.Message) { - xxx_messageInfo_Counter.Merge(dst, src) +func (m *Counter) XXX_Merge(src proto.Message) { + xxx_messageInfo_Counter.Merge(m, src) } func (m *Counter) XXX_Size() int { return xxx_messageInfo_Counter.Size(m) @@ -185,6 +196,13 @@ func (m *Counter) GetValue() float64 { return 0 } +func (m *Counter) GetExemplar() *Exemplar { + if m != nil { + return m.Exemplar + } + return nil +} + type Quantile struct { Quantile *float64 `protobuf:"fixed64,1,opt,name=quantile" json:"quantile,omitempty"` Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"` @@ -197,16 +215,17 @@ func (m *Quantile) Reset() { *m = Quantile{} } func (m *Quantile) String() string { return proto.CompactTextString(m) } func (*Quantile) ProtoMessage() {} func (*Quantile) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{3} + return fileDescriptor_6039342a2ba47b72, []int{3} } + func (m *Quantile) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Quantile.Unmarshal(m, b) } func (m *Quantile) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Quantile.Marshal(b, m, deterministic) } -func (dst *Quantile) XXX_Merge(src proto.Message) { - xxx_messageInfo_Quantile.Merge(dst, src) +func (m *Quantile) XXX_Merge(src proto.Message) { + xxx_messageInfo_Quantile.Merge(m, src) } func (m *Quantile) XXX_Size() int { return xxx_messageInfo_Quantile.Size(m) @@ -244,16 +263,17 @@ func (m *Summary) Reset() { *m = Summary{} } func (m *Summary) String() string { return proto.CompactTextString(m) } func (*Summary) ProtoMessage() {} func (*Summary) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{4} + return fileDescriptor_6039342a2ba47b72, []int{4} } + func (m *Summary) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Summary.Unmarshal(m, b) } func (m *Summary) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Summary.Marshal(b, m, deterministic) } -func (dst *Summary) XXX_Merge(src proto.Message) { - xxx_messageInfo_Summary.Merge(dst, src) +func (m *Summary) XXX_Merge(src proto.Message) { + xxx_messageInfo_Summary.Merge(m, src) } func (m *Summary) XXX_Size() int { return xxx_messageInfo_Summary.Size(m) @@ -296,16 +316,17 @@ func (m *Untyped) Reset() { *m = Untyped{} } func (m *Untyped) String() string { return proto.CompactTextString(m) } func (*Untyped) ProtoMessage() {} func (*Untyped) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{5} + return fileDescriptor_6039342a2ba47b72, []int{5} } + func (m *Untyped) XXX_Unmarshal(b []byte) error { return 
xxx_messageInfo_Untyped.Unmarshal(m, b) } func (m *Untyped) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Untyped.Marshal(b, m, deterministic) } -func (dst *Untyped) XXX_Merge(src proto.Message) { - xxx_messageInfo_Untyped.Merge(dst, src) +func (m *Untyped) XXX_Merge(src proto.Message) { + xxx_messageInfo_Untyped.Merge(m, src) } func (m *Untyped) XXX_Size() int { return xxx_messageInfo_Untyped.Size(m) @@ -336,16 +357,17 @@ func (m *Histogram) Reset() { *m = Histogram{} } func (m *Histogram) String() string { return proto.CompactTextString(m) } func (*Histogram) ProtoMessage() {} func (*Histogram) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{6} + return fileDescriptor_6039342a2ba47b72, []int{6} } + func (m *Histogram) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Histogram.Unmarshal(m, b) } func (m *Histogram) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Histogram.Marshal(b, m, deterministic) } -func (dst *Histogram) XXX_Merge(src proto.Message) { - xxx_messageInfo_Histogram.Merge(dst, src) +func (m *Histogram) XXX_Merge(src proto.Message) { + xxx_messageInfo_Histogram.Merge(m, src) } func (m *Histogram) XXX_Size() int { return xxx_messageInfo_Histogram.Size(m) @@ -378,27 +400,29 @@ func (m *Histogram) GetBucket() []*Bucket { } type Bucket struct { - CumulativeCount *uint64 `protobuf:"varint,1,opt,name=cumulative_count,json=cumulativeCount" json:"cumulative_count,omitempty"` - UpperBound *float64 `protobuf:"fixed64,2,opt,name=upper_bound,json=upperBound" json:"upper_bound,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + CumulativeCount *uint64 `protobuf:"varint,1,opt,name=cumulative_count,json=cumulativeCount" json:"cumulative_count,omitempty"` + UpperBound *float64 `protobuf:"fixed64,2,opt,name=upper_bound,json=upperBound" json:"upper_bound,omitempty"` + Exemplar *Exemplar `protobuf:"bytes,3,opt,name=exemplar" json:"exemplar,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *Bucket) Reset() { *m = Bucket{} } func (m *Bucket) String() string { return proto.CompactTextString(m) } func (*Bucket) ProtoMessage() {} func (*Bucket) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{7} + return fileDescriptor_6039342a2ba47b72, []int{7} } + func (m *Bucket) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Bucket.Unmarshal(m, b) } func (m *Bucket) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Bucket.Marshal(b, m, deterministic) } -func (dst *Bucket) XXX_Merge(src proto.Message) { - xxx_messageInfo_Bucket.Merge(dst, src) +func (m *Bucket) XXX_Merge(src proto.Message) { + xxx_messageInfo_Bucket.Merge(m, src) } func (m *Bucket) XXX_Size() int { return xxx_messageInfo_Bucket.Size(m) @@ -423,6 +447,68 @@ func (m *Bucket) GetUpperBound() float64 { return 0 } +func (m *Bucket) GetExemplar() *Exemplar { + if m != nil { + return m.Exemplar + } + return nil +} + +type Exemplar struct { + Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"` + Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"` + Timestamp *timestamp.Timestamp `protobuf:"bytes,3,opt,name=timestamp" json:"timestamp,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 
`json:"-"` +} + +func (m *Exemplar) Reset() { *m = Exemplar{} } +func (m *Exemplar) String() string { return proto.CompactTextString(m) } +func (*Exemplar) ProtoMessage() {} +func (*Exemplar) Descriptor() ([]byte, []int) { + return fileDescriptor_6039342a2ba47b72, []int{8} +} + +func (m *Exemplar) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_Exemplar.Unmarshal(m, b) +} +func (m *Exemplar) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_Exemplar.Marshal(b, m, deterministic) +} +func (m *Exemplar) XXX_Merge(src proto.Message) { + xxx_messageInfo_Exemplar.Merge(m, src) +} +func (m *Exemplar) XXX_Size() int { + return xxx_messageInfo_Exemplar.Size(m) +} +func (m *Exemplar) XXX_DiscardUnknown() { + xxx_messageInfo_Exemplar.DiscardUnknown(m) +} + +var xxx_messageInfo_Exemplar proto.InternalMessageInfo + +func (m *Exemplar) GetLabel() []*LabelPair { + if m != nil { + return m.Label + } + return nil +} + +func (m *Exemplar) GetValue() float64 { + if m != nil && m.Value != nil { + return *m.Value + } + return 0 +} + +func (m *Exemplar) GetTimestamp() *timestamp.Timestamp { + if m != nil { + return m.Timestamp + } + return nil +} + type Metric struct { Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"` Gauge *Gauge `protobuf:"bytes,2,opt,name=gauge" json:"gauge,omitempty"` @@ -440,16 +526,17 @@ func (m *Metric) Reset() { *m = Metric{} } func (m *Metric) String() string { return proto.CompactTextString(m) } func (*Metric) ProtoMessage() {} func (*Metric) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{8} + return fileDescriptor_6039342a2ba47b72, []int{9} } + func (m *Metric) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Metric.Unmarshal(m, b) } func (m *Metric) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Metric.Marshal(b, m, deterministic) } -func (dst *Metric) XXX_Merge(src proto.Message) { - xxx_messageInfo_Metric.Merge(dst, src) +func (m *Metric) XXX_Merge(src proto.Message) { + xxx_messageInfo_Metric.Merge(m, src) } func (m *Metric) XXX_Size() int { return xxx_messageInfo_Metric.Size(m) @@ -523,16 +610,17 @@ func (m *MetricFamily) Reset() { *m = MetricFamily{} } func (m *MetricFamily) String() string { return proto.CompactTextString(m) } func (*MetricFamily) ProtoMessage() {} func (*MetricFamily) Descriptor() ([]byte, []int) { - return fileDescriptor_metrics_c97c9a2b9560cb8f, []int{9} + return fileDescriptor_6039342a2ba47b72, []int{10} } + func (m *MetricFamily) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MetricFamily.Unmarshal(m, b) } func (m *MetricFamily) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MetricFamily.Marshal(b, m, deterministic) } -func (dst *MetricFamily) XXX_Merge(src proto.Message) { - xxx_messageInfo_MetricFamily.Merge(dst, src) +func (m *MetricFamily) XXX_Merge(src proto.Message) { + xxx_messageInfo_MetricFamily.Merge(m, src) } func (m *MetricFamily) XXX_Size() int { return xxx_messageInfo_MetricFamily.Size(m) @@ -572,6 +660,7 @@ func (m *MetricFamily) GetMetric() []*Metric { } func init() { + proto.RegisterEnum("io.prometheus.client.MetricType", MetricType_name, MetricType_value) proto.RegisterType((*LabelPair)(nil), "io.prometheus.client.LabelPair") proto.RegisterType((*Gauge)(nil), "io.prometheus.client.Gauge") proto.RegisterType((*Counter)(nil), "io.prometheus.client.Counter") @@ -580,50 +669,55 @@ func init() { proto.RegisterType((*Untyped)(nil), 
"io.prometheus.client.Untyped") proto.RegisterType((*Histogram)(nil), "io.prometheus.client.Histogram") proto.RegisterType((*Bucket)(nil), "io.prometheus.client.Bucket") + proto.RegisterType((*Exemplar)(nil), "io.prometheus.client.Exemplar") proto.RegisterType((*Metric)(nil), "io.prometheus.client.Metric") proto.RegisterType((*MetricFamily)(nil), "io.prometheus.client.MetricFamily") - proto.RegisterEnum("io.prometheus.client.MetricType", MetricType_name, MetricType_value) } -func init() { proto.RegisterFile("metrics.proto", fileDescriptor_metrics_c97c9a2b9560cb8f) } - -var fileDescriptor_metrics_c97c9a2b9560cb8f = []byte{ - // 591 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x54, 0x4f, 0x4f, 0xdb, 0x4e, - 0x14, 0xfc, 0x99, 0xd8, 0x09, 0x7e, 0x86, 0x5f, 0xad, 0x15, 0x07, 0xab, 0x2d, 0x25, 0xcd, 0x89, - 0xf6, 0x10, 0x54, 0x04, 0xaa, 0x44, 0xdb, 0x03, 0x50, 0x1a, 0x2a, 0xd5, 0x40, 0x37, 0xc9, 0x81, - 0x5e, 0xac, 0x8d, 0x59, 0x25, 0x56, 0xbd, 0xb6, 0x6b, 0xef, 0x22, 0xe5, 0xdc, 0x43, 0xbf, 0x47, - 0xbf, 0x68, 0xab, 0xfd, 0xe3, 0x18, 0x24, 0xc3, 0xa9, 0xb7, 0xb7, 0xf3, 0x66, 0xde, 0x8e, 0x77, - 0xc7, 0x0b, 0x9b, 0x8c, 0xf2, 0x32, 0x89, 0xab, 0x61, 0x51, 0xe6, 0x3c, 0x47, 0x5b, 0x49, 0x2e, - 0x2b, 0x46, 0xf9, 0x82, 0x8a, 0x6a, 0x18, 0xa7, 0x09, 0xcd, 0xf8, 0xe0, 0x10, 0xdc, 0x2f, 0x64, - 0x46, 0xd3, 0x2b, 0x92, 0x94, 0x08, 0x81, 0x9d, 0x11, 0x46, 0x03, 0xab, 0x6f, 0xed, 0xba, 0x58, - 0xd5, 0x68, 0x0b, 0x9c, 0x5b, 0x92, 0x0a, 0x1a, 0xac, 0x29, 0x50, 0x2f, 0x06, 0xdb, 0xe0, 0x8c, - 0x88, 0x98, 0xdf, 0x69, 0x4b, 0x8d, 0x55, 0xb7, 0x77, 0xa0, 0x77, 0x9a, 0x8b, 0x8c, 0xd3, 0xf2, - 0x01, 0xc2, 0x7b, 0x58, 0xff, 0x2a, 0x48, 0xc6, 0x93, 0x94, 0xa2, 0xa7, 0xb0, 0xfe, 0xc3, 0xd4, - 0x86, 0xb4, 0x5a, 0xdf, 0xdf, 0x7d, 0xa5, 0xfe, 0x65, 0x41, 0x6f, 0x2c, 0x18, 0x23, 0xe5, 0x12, - 0xbd, 0x84, 0x8d, 0x8a, 0xb0, 0x22, 0xa5, 0x51, 0x2c, 0x77, 0x54, 0x13, 0x6c, 0xec, 0x69, 0x4c, - 0x99, 0x40, 0xdb, 0x00, 0x86, 0x52, 0x09, 0x66, 0x26, 0xb9, 0x1a, 0x19, 0x0b, 0x86, 0x8e, 0xee, - 0xec, 0xdf, 0xe9, 0x77, 0x76, 0xbd, 0xfd, 0x17, 0xc3, 0xb6, 0xb3, 0x1a, 0xd6, 0x8e, 0x1b, 0x7f, - 0xf2, 0x43, 0xa7, 0x19, 0x5f, 0x16, 0xf4, 0xe6, 0x81, 0x0f, 0xfd, 0x69, 0x81, 0x7b, 0x9e, 0x54, - 0x3c, 0x9f, 0x97, 0x84, 0xfd, 0x03, 0xb3, 0x07, 0xd0, 0x9d, 0x89, 0xf8, 0x3b, 0xe5, 0xc6, 0xea, - 0xf3, 0x76, 0xab, 0x27, 0x8a, 0x83, 0x0d, 0x77, 0x30, 0x81, 0xae, 0x46, 0xd0, 0x2b, 0xf0, 0x63, - 0xc1, 0x44, 0x4a, 0x78, 0x72, 0x7b, 0xdf, 0xc5, 0x93, 0x06, 0xd7, 0x4e, 0x76, 0xc0, 0x13, 0x45, - 0x41, 0xcb, 0x68, 0x96, 0x8b, 0xec, 0xc6, 0x58, 0x01, 0x05, 0x9d, 0x48, 0x64, 0xf0, 0x67, 0x0d, - 0xba, 0xa1, 0xca, 0x18, 0x3a, 0x04, 0x27, 0x95, 0x31, 0x0a, 0x2c, 0xe5, 0x6a, 0xa7, 0xdd, 0xd5, - 0x2a, 0x69, 0x58, 0xb3, 0xd1, 0x1b, 0x70, 0xe6, 0x32, 0x46, 0x6a, 0xb8, 0xb7, 0xff, 0xac, 0x5d, - 0xa6, 0x92, 0x86, 0x35, 0x13, 0xbd, 0x85, 0x5e, 0xac, 0xa3, 0x15, 0x74, 0x94, 0x68, 0xbb, 0x5d, - 0x64, 0xf2, 0x87, 0x6b, 0xb6, 0x14, 0x56, 0x3a, 0x33, 0x81, 0xfd, 0x98, 0xd0, 0x04, 0x0b, 0xd7, - 0x6c, 0x29, 0x14, 0xfa, 0x8e, 0x03, 0xe7, 0x31, 0xa1, 0x09, 0x02, 0xae, 0xd9, 0xe8, 0x03, 0xb8, - 0x8b, 0xfa, 0xea, 0x83, 0x9e, 0x92, 0x3e, 0x70, 0x30, 0xab, 0x84, 0xe0, 0x46, 0x21, 0xc3, 0xc2, - 0x13, 0x46, 0x2b, 0x4e, 0x58, 0x11, 0xb1, 0x2a, 0xe8, 0xf6, 0xad, 0xdd, 0x0e, 0xf6, 0x56, 0x58, - 0x58, 0x0d, 0x7e, 0x5b, 0xb0, 0xa1, 0x6f, 0xe0, 0x13, 0x61, 0x49, 0xba, 0x6c, 0xfd, 0x83, 0x11, - 0xd8, 0x0b, 0x9a, 0x16, 0xe6, 0x07, 0x56, 0x35, 0x3a, 0x00, 0x5b, 0x7a, 0x54, 0x47, 0xf8, 0xff, - 0x7e, 0xbf, 0xdd, 0x95, 0x9e, 
0x3c, 0x59, 0x16, 0x14, 0x2b, 0xb6, 0x0c, 0x9f, 0x7e, 0x53, 0x02, - 0xfb, 0xb1, 0xf0, 0x69, 0x1d, 0x36, 0xdc, 0xd7, 0x21, 0x40, 0x33, 0x09, 0x79, 0xd0, 0x3b, 0xbd, - 0x9c, 0x5e, 0x4c, 0xce, 0xb0, 0xff, 0x1f, 0x72, 0xc1, 0x19, 0x1d, 0x4f, 0x47, 0x67, 0xbe, 0x25, - 0xf1, 0xf1, 0x34, 0x0c, 0x8f, 0xf1, 0xb5, 0xbf, 0x26, 0x17, 0xd3, 0x8b, 0xc9, 0xf5, 0xd5, 0xd9, - 0x47, 0xbf, 0x83, 0x36, 0xc1, 0x3d, 0xff, 0x3c, 0x9e, 0x5c, 0x8e, 0xf0, 0x71, 0xe8, 0xdb, 0x27, - 0x18, 0x5a, 0x5f, 0xb2, 0x6f, 0x47, 0xf3, 0x84, 0x2f, 0xc4, 0x6c, 0x18, 0xe7, 0x6c, 0xaf, 0xe9, - 0xee, 0xe9, 0x6e, 0xc4, 0xf2, 0x1b, 0x9a, 0xee, 0xcd, 0xf3, 0x77, 0x49, 0x1e, 0x35, 0xdd, 0x48, - 0x77, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0x45, 0x21, 0x7f, 0x64, 0x2b, 0x05, 0x00, 0x00, +func init() { proto.RegisterFile("metrics.proto", fileDescriptor_6039342a2ba47b72) } + +var fileDescriptor_6039342a2ba47b72 = []byte{ + // 665 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x54, 0xcd, 0x6e, 0xd3, 0x4c, + 0x14, 0xfd, 0xdc, 0x38, 0x3f, 0xbe, 0x69, 0x3f, 0xa2, 0x51, 0x17, 0x56, 0xa1, 0x24, 0x78, 0x55, + 0x58, 0x38, 0xa2, 0x6a, 0x05, 0x2a, 0xb0, 0x68, 0x4b, 0x48, 0x91, 0x48, 0x5b, 0x26, 0xc9, 0xa2, + 0xb0, 0x88, 0x1c, 0x77, 0x70, 0x2c, 0x3c, 0xb1, 0xb1, 0x67, 0x2a, 0xb2, 0x66, 0xc1, 0x16, 0x5e, + 0x81, 0x17, 0x05, 0xcd, 0x8f, 0x6d, 0x2a, 0xb9, 0x95, 0x40, 0xec, 0x66, 0xee, 0x3d, 0xe7, 0xfa, + 0xcc, 0xf8, 0x9c, 0x81, 0x0d, 0x4a, 0x58, 0x1a, 0xfa, 0x99, 0x9b, 0xa4, 0x31, 0x8b, 0xd1, 0x66, + 0x18, 0x8b, 0x15, 0x25, 0x6c, 0x41, 0x78, 0xe6, 0xfa, 0x51, 0x48, 0x96, 0x6c, 0xab, 0x1b, 0xc4, + 0x71, 0x10, 0x91, 0xbe, 0xc4, 0xcc, 0xf9, 0x87, 0x3e, 0x0b, 0x29, 0xc9, 0x98, 0x47, 0x13, 0x45, + 0x73, 0xf6, 0xc1, 0x7a, 0xe3, 0xcd, 0x49, 0x74, 0xee, 0x85, 0x29, 0x42, 0x60, 0x2e, 0x3d, 0x4a, + 0x6c, 0xa3, 0x67, 0xec, 0x58, 0x58, 0xae, 0xd1, 0x26, 0xd4, 0xaf, 0xbc, 0x88, 0x13, 0x7b, 0x4d, + 0x16, 0xd5, 0xc6, 0xd9, 0x86, 0xfa, 0xd0, 0xe3, 0xc1, 0x6f, 0x6d, 0xc1, 0x31, 0xf2, 0xf6, 0x7b, + 0x68, 0x1e, 0xc7, 0x7c, 0xc9, 0x48, 0x5a, 0x0d, 0x40, 0x07, 0xd0, 0x22, 0x9f, 0x09, 0x4d, 0x22, + 0x2f, 0x95, 0x83, 0xdb, 0xbb, 0xf7, 0xdd, 0xaa, 0x03, 0xb8, 0x03, 0x8d, 0xc2, 0x05, 0xde, 0x79, + 0x0e, 0xad, 0xb7, 0xdc, 0x5b, 0xb2, 0x30, 0x22, 0x68, 0x0b, 0x5a, 0x9f, 0xf4, 0x5a, 0x7f, 0xa0, + 0xd8, 0x5f, 0x57, 0x5e, 0x48, 0xfb, 0x6a, 0x40, 0x73, 0xcc, 0x29, 0xf5, 0xd2, 0x15, 0x7a, 0x00, + 0xeb, 0x99, 0x47, 0x93, 0x88, 0xcc, 0x7c, 0xa1, 0x56, 0x4e, 0x30, 0x71, 0x5b, 0xd5, 0xe4, 0x01, + 0xd0, 0x36, 0x80, 0x86, 0x64, 0x9c, 0xea, 0x49, 0x96, 0xaa, 0x8c, 0x39, 0x15, 0xe7, 0x28, 0xbe, + 0x5f, 0xeb, 0xd5, 0x6e, 0x3e, 0x47, 0xae, 0xb8, 0xd4, 0xe7, 0x74, 0xa1, 0x39, 0x5d, 0xb2, 0x55, + 0x42, 0x2e, 0x6f, 0xb8, 0xc5, 0x2f, 0x06, 0x58, 0x27, 0x61, 0xc6, 0xe2, 0x20, 0xf5, 0xe8, 0x3f, + 0x10, 0xbb, 0x07, 0x8d, 0x39, 0xf7, 0x3f, 0x12, 0xa6, 0xa5, 0xde, 0xab, 0x96, 0x7a, 0x24, 0x31, + 0x58, 0x63, 0x9d, 0x6f, 0x06, 0x34, 0x54, 0x09, 0x3d, 0x84, 0x8e, 0xcf, 0x29, 0x8f, 0x3c, 0x16, + 0x5e, 0x5d, 0x97, 0x71, 0xa7, 0xac, 0x2b, 0x29, 0x5d, 0x68, 0xf3, 0x24, 0x21, 0xe9, 0x6c, 0x1e, + 0xf3, 0xe5, 0xa5, 0xd6, 0x02, 0xb2, 0x74, 0x24, 0x2a, 0xd7, 0x1c, 0x50, 0xfb, 0x43, 0x07, 0x7c, + 0x37, 0xa0, 0x95, 0x97, 0xd1, 0x3e, 0xd4, 0x23, 0xe1, 0x60, 0xdb, 0x90, 0x87, 0xea, 0x56, 0x4f, + 0x29, 0x4c, 0x8e, 0x15, 0xba, 0xda, 0x1d, 0xe8, 0x29, 0x58, 0x45, 0x42, 0xb4, 0xac, 0x2d, 0x57, + 0x65, 0xc8, 0xcd, 0x33, 0xe4, 0x4e, 0x72, 0x04, 0x2e, 0xc1, 0xce, 0xcf, 0x35, 0x68, 0x8c, 0x64, + 0x22, 0xff, 0x56, 0xd1, 0x63, 0xa8, 0x07, 0x22, 0x53, 0x3a, 0x10, 0x77, 
0xab, 0x69, 0x32, 0x76, + 0x58, 0x21, 0xd1, 0x13, 0x68, 0xfa, 0x2a, 0x67, 0x5a, 0xec, 0x76, 0x35, 0x49, 0x87, 0x11, 0xe7, + 0x68, 0x41, 0xcc, 0x54, 0x08, 0x6c, 0xf3, 0x36, 0xa2, 0x4e, 0x0a, 0xce, 0xd1, 0x82, 0xc8, 0x95, + 0x69, 0xed, 0xfa, 0x6d, 0x44, 0xed, 0x6c, 0x9c, 0xa3, 0xd1, 0x0b, 0xb0, 0x16, 0xb9, 0x97, 0xed, + 0xa6, 0xa4, 0xde, 0x70, 0x31, 0x85, 0xe5, 0x71, 0xc9, 0x10, 0xee, 0x2f, 0xee, 0x7a, 0x46, 0x33, + 0xbb, 0xd1, 0x33, 0x76, 0x6a, 0xb8, 0x5d, 0xd4, 0x46, 0x99, 0xf3, 0xc3, 0x80, 0x75, 0xf5, 0x07, + 0x5e, 0x79, 0x34, 0x8c, 0x56, 0x95, 0xcf, 0x19, 0x02, 0x73, 0x41, 0xa2, 0x44, 0xbf, 0x66, 0x72, + 0x8d, 0xf6, 0xc0, 0x14, 0x1a, 0xe5, 0x15, 0xfe, 0xbf, 0xdb, 0xab, 0x56, 0xa5, 0x26, 0x4f, 0x56, + 0x09, 0xc1, 0x12, 0x2d, 0xd2, 0xa4, 0x5e, 0x60, 0xdb, 0xbc, 0x2d, 0x4d, 0x8a, 0x87, 0x35, 0xf6, + 0xd1, 0x08, 0xa0, 0x9c, 0x84, 0xda, 0xd0, 0x3c, 0x3e, 0x9b, 0x9e, 0x4e, 0x06, 0xb8, 0xf3, 0x1f, + 0xb2, 0xa0, 0x3e, 0x3c, 0x9c, 0x0e, 0x07, 0x1d, 0x43, 0xd4, 0xc7, 0xd3, 0xd1, 0xe8, 0x10, 0x5f, + 0x74, 0xd6, 0xc4, 0x66, 0x7a, 0x3a, 0xb9, 0x38, 0x1f, 0xbc, 0xec, 0xd4, 0xd0, 0x06, 0x58, 0x27, + 0xaf, 0xc7, 0x93, 0xb3, 0x21, 0x3e, 0x1c, 0x75, 0xcc, 0x23, 0x0c, 0x95, 0xef, 0xfe, 0xbb, 0x83, + 0x20, 0x64, 0x0b, 0x3e, 0x77, 0xfd, 0x98, 0xf6, 0xcb, 0x6e, 0x5f, 0x75, 0x67, 0x34, 0xbe, 0x24, + 0x51, 0x3f, 0x88, 0x9f, 0x85, 0xf1, 0xac, 0xec, 0xce, 0x54, 0xf7, 0x57, 0x00, 0x00, 0x00, 0xff, + 0xff, 0xd0, 0x84, 0x91, 0x73, 0x59, 0x06, 0x00, 0x00, } diff --git a/vendor/github.com/prometheus/common/expfmt/encode.go b/vendor/github.com/prometheus/common/expfmt/encode.go index 11839ed65c..bd4e347454 100644 --- a/vendor/github.com/prometheus/common/expfmt/encode.go +++ b/vendor/github.com/prometheus/common/expfmt/encode.go @@ -30,17 +30,38 @@ type Encoder interface { Encode(*dto.MetricFamily) error } -type encoder func(*dto.MetricFamily) error +// Closer is implemented by Encoders that need to be closed to finalize +// encoding. (For example, OpenMetrics needs a final `# EOF` line.) +// +// Note that all Encoder implementations returned from this package implement +// Closer, too, even if the Close call is a no-op. This happens in preparation +// for adding a Close method to the Encoder interface directly in a (mildly +// breaking) release in the future. +type Closer interface { + Close() error +} + +type encoderCloser struct { + encode func(*dto.MetricFamily) error + close func() error +} -func (e encoder) Encode(v *dto.MetricFamily) error { - return e(v) +func (ec encoderCloser) Encode(v *dto.MetricFamily) error { + return ec.encode(v) } -// Negotiate returns the Content-Type based on the given Accept header. -// If no appropriate accepted type is found, FmtText is returned. +func (ec encoderCloser) Close() error { + return ec.close() +} + +// Negotiate returns the Content-Type based on the given Accept header. If no +// appropriate accepted type is found, FmtText is returned (which is the +// Prometheus text format). This function will never negotiate FmtOpenMetrics, +// as the support is still experimental. To include the option to negotiate +// FmtOpenMetrics, use NegotiateOpenMetrics. func Negotiate(h http.Header) Format { for _, ac := range goautoneg.ParseAccept(h.Get(hdrAccept)) { - // Check for protocol buffer + ver := ac.Params["version"] if ac.Type+"/"+ac.SubType == ProtoType && ac.Params["proto"] == ProtoProtocol { switch ac.Params["encoding"] { case "delimited": @@ -51,38 +72,91 @@ func Negotiate(h http.Header) Format { return FmtProtoCompact } } - // Check for text format. 
+ if ac.Type == "text" && ac.SubType == "plain" && (ver == TextVersion || ver == "") { + return FmtText + } + } + return FmtText +} + +// NegotiateIncludingOpenMetrics works like Negotiate but includes +// FmtOpenMetrics as an option for the result. Note that this function is +// temporary and will disappear once FmtOpenMetrics is fully supported and as +// such may be negotiated by the normal Negotiate function. +func NegotiateIncludingOpenMetrics(h http.Header) Format { + for _, ac := range goautoneg.ParseAccept(h.Get(hdrAccept)) { ver := ac.Params["version"] + if ac.Type+"/"+ac.SubType == ProtoType && ac.Params["proto"] == ProtoProtocol { + switch ac.Params["encoding"] { + case "delimited": + return FmtProtoDelim + case "text": + return FmtProtoText + case "compact-text": + return FmtProtoCompact + } + } if ac.Type == "text" && ac.SubType == "plain" && (ver == TextVersion || ver == "") { return FmtText } + if ac.Type+"/"+ac.SubType == OpenMetricsType && (ver == OpenMetricsVersion || ver == "") { + return FmtOpenMetrics + } } return FmtText } -// NewEncoder returns a new encoder based on content type negotiation. +// NewEncoder returns a new encoder based on content type negotiation. All +// Encoder implementations returned by NewEncoder also implement Closer, and +// callers should always call the Close method. It is currently only required +// for FmtOpenMetrics, but a future (breaking) release will add the Close method +// to the Encoder interface directly. The current version of the Encoder +// interface is kept for backwards compatibility. func NewEncoder(w io.Writer, format Format) Encoder { switch format { case FmtProtoDelim: - return encoder(func(v *dto.MetricFamily) error { - _, err := pbutil.WriteDelimited(w, v) - return err - }) + return encoderCloser{ + encode: func(v *dto.MetricFamily) error { + _, err := pbutil.WriteDelimited(w, v) + return err + }, + close: func() error { return nil }, + } case FmtProtoCompact: - return encoder(func(v *dto.MetricFamily) error { - _, err := fmt.Fprintln(w, v.String()) - return err - }) + return encoderCloser{ + encode: func(v *dto.MetricFamily) error { + _, err := fmt.Fprintln(w, v.String()) + return err + }, + close: func() error { return nil }, + } case FmtProtoText: - return encoder(func(v *dto.MetricFamily) error { - _, err := fmt.Fprintln(w, proto.MarshalTextString(v)) - return err - }) + return encoderCloser{ + encode: func(v *dto.MetricFamily) error { + _, err := fmt.Fprintln(w, proto.MarshalTextString(v)) + return err + }, + close: func() error { return nil }, + } case FmtText: - return encoder(func(v *dto.MetricFamily) error { - _, err := MetricFamilyToText(w, v) - return err - }) + return encoderCloser{ + encode: func(v *dto.MetricFamily) error { + _, err := MetricFamilyToText(w, v) + return err + }, + close: func() error { return nil }, + } + case FmtOpenMetrics: + return encoderCloser{ + encode: func(v *dto.MetricFamily) error { + _, err := MetricFamilyToOpenMetrics(w, v) + return err + }, + close: func() error { + _, err := FinalizeOpenMetrics(w) + return err + }, + } } - panic("expfmt.NewEncoder: unknown format") + panic(fmt.Errorf("expfmt.NewEncoder: unknown format %q", format)) } diff --git a/vendor/github.com/prometheus/common/expfmt/expfmt.go b/vendor/github.com/prometheus/common/expfmt/expfmt.go index c71bcb9816..0f176fa64f 100644 --- a/vendor/github.com/prometheus/common/expfmt/expfmt.go +++ b/vendor/github.com/prometheus/common/expfmt/expfmt.go @@ -19,10 +19,12 @@ type Format string // Constants to assemble the 
Content-Type values for the different wire protocols. const ( - TextVersion = "0.0.4" - ProtoType = `application/vnd.google.protobuf` - ProtoProtocol = `io.prometheus.client.MetricFamily` - ProtoFmt = ProtoType + "; proto=" + ProtoProtocol + ";" + TextVersion = "0.0.4" + ProtoType = `application/vnd.google.protobuf` + ProtoProtocol = `io.prometheus.client.MetricFamily` + ProtoFmt = ProtoType + "; proto=" + ProtoProtocol + ";" + OpenMetricsType = `application/openmetrics-text` + OpenMetricsVersion = "0.0.1" // The Content-Type values for the different wire protocols. FmtUnknown Format = `` @@ -30,6 +32,7 @@ const ( FmtProtoDelim Format = ProtoFmt + ` encoding=delimited` FmtProtoText Format = ProtoFmt + ` encoding=text` FmtProtoCompact Format = ProtoFmt + ` encoding=compact-text` + FmtOpenMetrics Format = OpenMetricsType + `; version=` + OpenMetricsVersion + `; charset=utf-8` ) const ( diff --git a/vendor/github.com/prometheus/common/expfmt/openmetrics_create.go b/vendor/github.com/prometheus/common/expfmt/openmetrics_create.go new file mode 100644 index 0000000000..8a9313a3be --- /dev/null +++ b/vendor/github.com/prometheus/common/expfmt/openmetrics_create.go @@ -0,0 +1,527 @@ +// Copyright 2020 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package expfmt + +import ( + "bufio" + "bytes" + "fmt" + "io" + "math" + "strconv" + "strings" + + "github.com/golang/protobuf/ptypes" + "github.com/prometheus/common/model" + + dto "github.com/prometheus/client_model/go" +) + +// MetricFamilyToOpenMetrics converts a MetricFamily proto message into the +// OpenMetrics text format and writes the resulting lines to 'out'. It returns +// the number of bytes written and any error encountered. The output will have +// the same order as the input, no further sorting is performed. Furthermore, +// this function assumes the input is already sanitized and does not perform any +// sanity checks. If the input contains duplicate metrics or invalid metric or +// label names, the conversion will result in invalid text format output. +// +// This function fulfills the type 'expfmt.encoder'. +// +// Note that OpenMetrics requires a final `# EOF` line. Since this function acts +// on individual metric families, it is the responsibility of the caller to +// append this line to 'out' once all metric families have been written. +// Conveniently, this can be done by calling FinalizeOpenMetrics. +// +// The output should be fully OpenMetrics compliant. However, there are a few +// missing features and peculiarities to avoid complications when switching from +// Prometheus to OpenMetrics or vice versa: +// +// - Counters are expected to have the `_total` suffix in their metric name. In +// the output, the suffix will be truncated from the `# TYPE` and `# HELP` +// line. A counter with a missing `_total` suffix is not an error. However, +// its type will be set to `unknown` in that case to avoid invalid OpenMetrics +// output. 
+// +// - No support for the following (optional) features: `# UNIT` line, `_created` +// line, info type, stateset type, gaugehistogram type. +// +// - The size of exemplar labels is not checked (i.e. it's possible to create +// exemplars that are larger than allowed by the OpenMetrics specification). +// +// - The value of Counters is not checked. (OpenMetrics doesn't allow counters +// with a `NaN` value.) +func MetricFamilyToOpenMetrics(out io.Writer, in *dto.MetricFamily) (written int, err error) { + name := in.GetName() + if name == "" { + return 0, fmt.Errorf("MetricFamily has no name: %s", in) + } + + // Try the interface upgrade. If it doesn't work, we'll use a + // bufio.Writer from the sync.Pool. + w, ok := out.(enhancedWriter) + if !ok { + b := bufPool.Get().(*bufio.Writer) + b.Reset(out) + w = b + defer func() { + bErr := b.Flush() + if err == nil { + err = bErr + } + bufPool.Put(b) + }() + } + + var ( + n int + metricType = in.GetType() + shortName = name + ) + if metricType == dto.MetricType_COUNTER && strings.HasSuffix(shortName, "_total") { + shortName = name[:len(name)-6] + } + + // Comments, first HELP, then TYPE. + if in.Help != nil { + n, err = w.WriteString("# HELP ") + written += n + if err != nil { + return + } + n, err = w.WriteString(shortName) + written += n + if err != nil { + return + } + err = w.WriteByte(' ') + written++ + if err != nil { + return + } + n, err = writeEscapedString(w, *in.Help, true) + written += n + if err != nil { + return + } + err = w.WriteByte('\n') + written++ + if err != nil { + return + } + } + n, err = w.WriteString("# TYPE ") + written += n + if err != nil { + return + } + n, err = w.WriteString(shortName) + written += n + if err != nil { + return + } + switch metricType { + case dto.MetricType_COUNTER: + if strings.HasSuffix(name, "_total") { + n, err = w.WriteString(" counter\n") + } else { + n, err = w.WriteString(" unknown\n") + } + case dto.MetricType_GAUGE: + n, err = w.WriteString(" gauge\n") + case dto.MetricType_SUMMARY: + n, err = w.WriteString(" summary\n") + case dto.MetricType_UNTYPED: + n, err = w.WriteString(" unknown\n") + case dto.MetricType_HISTOGRAM: + n, err = w.WriteString(" histogram\n") + default: + return written, fmt.Errorf("unknown metric type %s", metricType.String()) + } + written += n + if err != nil { + return + } + + // Finally the samples, one line for each. + for _, metric := range in.Metric { + switch metricType { + case dto.MetricType_COUNTER: + if metric.Counter == nil { + return written, fmt.Errorf( + "expected counter in metric %s %s", name, metric, + ) + } + // Note that we have ensured above that either the name + // ends on `_total` or that the rendered type is + // `unknown`. Therefore, no `_total` must be added here. 
+ n, err = writeOpenMetricsSample( + w, name, "", metric, "", 0, + metric.Counter.GetValue(), 0, false, + metric.Counter.Exemplar, + ) + case dto.MetricType_GAUGE: + if metric.Gauge == nil { + return written, fmt.Errorf( + "expected gauge in metric %s %s", name, metric, + ) + } + n, err = writeOpenMetricsSample( + w, name, "", metric, "", 0, + metric.Gauge.GetValue(), 0, false, + nil, + ) + case dto.MetricType_UNTYPED: + if metric.Untyped == nil { + return written, fmt.Errorf( + "expected untyped in metric %s %s", name, metric, + ) + } + n, err = writeOpenMetricsSample( + w, name, "", metric, "", 0, + metric.Untyped.GetValue(), 0, false, + nil, + ) + case dto.MetricType_SUMMARY: + if metric.Summary == nil { + return written, fmt.Errorf( + "expected summary in metric %s %s", name, metric, + ) + } + for _, q := range metric.Summary.Quantile { + n, err = writeOpenMetricsSample( + w, name, "", metric, + model.QuantileLabel, q.GetQuantile(), + q.GetValue(), 0, false, + nil, + ) + written += n + if err != nil { + return + } + } + n, err = writeOpenMetricsSample( + w, name, "_sum", metric, "", 0, + metric.Summary.GetSampleSum(), 0, false, + nil, + ) + written += n + if err != nil { + return + } + n, err = writeOpenMetricsSample( + w, name, "_count", metric, "", 0, + 0, metric.Summary.GetSampleCount(), true, + nil, + ) + case dto.MetricType_HISTOGRAM: + if metric.Histogram == nil { + return written, fmt.Errorf( + "expected histogram in metric %s %s", name, metric, + ) + } + infSeen := false + for _, b := range metric.Histogram.Bucket { + n, err = writeOpenMetricsSample( + w, name, "_bucket", metric, + model.BucketLabel, b.GetUpperBound(), + 0, b.GetCumulativeCount(), true, + b.Exemplar, + ) + written += n + if err != nil { + return + } + if math.IsInf(b.GetUpperBound(), +1) { + infSeen = true + } + } + if !infSeen { + n, err = writeOpenMetricsSample( + w, name, "_bucket", metric, + model.BucketLabel, math.Inf(+1), + 0, metric.Histogram.GetSampleCount(), true, + nil, + ) + written += n + if err != nil { + return + } + } + n, err = writeOpenMetricsSample( + w, name, "_sum", metric, "", 0, + metric.Histogram.GetSampleSum(), 0, false, + nil, + ) + written += n + if err != nil { + return + } + n, err = writeOpenMetricsSample( + w, name, "_count", metric, "", 0, + 0, metric.Histogram.GetSampleCount(), true, + nil, + ) + default: + return written, fmt.Errorf( + "unexpected type in metric %s %s", name, metric, + ) + } + written += n + if err != nil { + return + } + } + return +} + +// FinalizeOpenMetrics writes the final `# EOF\n` line required by OpenMetrics. +func FinalizeOpenMetrics(w io.Writer) (written int, err error) { + return w.Write([]byte("# EOF\n")) +} + +// writeOpenMetricsSample writes a single sample in OpenMetrics text format to +// w, given the metric name, the metric proto message itself, optionally an +// additional label name with a float64 value (use empty string as label name if +// not required), the value (optionally as float64 or uint64, determined by +// useIntValue), and optionally an exemplar (use nil if not required). The +// function returns the number of bytes written and any error encountered. 
+func writeOpenMetricsSample( + w enhancedWriter, + name, suffix string, + metric *dto.Metric, + additionalLabelName string, additionalLabelValue float64, + floatValue float64, intValue uint64, useIntValue bool, + exemplar *dto.Exemplar, +) (int, error) { + var written int + n, err := w.WriteString(name) + written += n + if err != nil { + return written, err + } + if suffix != "" { + n, err = w.WriteString(suffix) + written += n + if err != nil { + return written, err + } + } + n, err = writeOpenMetricsLabelPairs( + w, metric.Label, additionalLabelName, additionalLabelValue, + ) + written += n + if err != nil { + return written, err + } + err = w.WriteByte(' ') + written++ + if err != nil { + return written, err + } + if useIntValue { + n, err = writeUint(w, intValue) + } else { + n, err = writeOpenMetricsFloat(w, floatValue) + } + written += n + if err != nil { + return written, err + } + if metric.TimestampMs != nil { + err = w.WriteByte(' ') + written++ + if err != nil { + return written, err + } + // TODO(beorn7): Format this directly without converting to a float first. + n, err = writeOpenMetricsFloat(w, float64(*metric.TimestampMs)/1000) + written += n + if err != nil { + return written, err + } + } + if exemplar != nil { + n, err = writeExemplar(w, exemplar) + written += n + if err != nil { + return written, err + } + } + err = w.WriteByte('\n') + written++ + if err != nil { + return written, err + } + return written, nil +} + +// writeOpenMetricsLabelPairs works like writeOpenMetrics but formats the float +// in OpenMetrics style. +func writeOpenMetricsLabelPairs( + w enhancedWriter, + in []*dto.LabelPair, + additionalLabelName string, additionalLabelValue float64, +) (int, error) { + if len(in) == 0 && additionalLabelName == "" { + return 0, nil + } + var ( + written int + separator byte = '{' + ) + for _, lp := range in { + err := w.WriteByte(separator) + written++ + if err != nil { + return written, err + } + n, err := w.WriteString(lp.GetName()) + written += n + if err != nil { + return written, err + } + n, err = w.WriteString(`="`) + written += n + if err != nil { + return written, err + } + n, err = writeEscapedString(w, lp.GetValue(), true) + written += n + if err != nil { + return written, err + } + err = w.WriteByte('"') + written++ + if err != nil { + return written, err + } + separator = ',' + } + if additionalLabelName != "" { + err := w.WriteByte(separator) + written++ + if err != nil { + return written, err + } + n, err := w.WriteString(additionalLabelName) + written += n + if err != nil { + return written, err + } + n, err = w.WriteString(`="`) + written += n + if err != nil { + return written, err + } + n, err = writeOpenMetricsFloat(w, additionalLabelValue) + written += n + if err != nil { + return written, err + } + err = w.WriteByte('"') + written++ + if err != nil { + return written, err + } + } + err := w.WriteByte('}') + written++ + if err != nil { + return written, err + } + return written, nil +} + +// writeExemplar writes the provided exemplar in OpenMetrics format to w. The +// function returns the number of bytes written and any error encountered. 
+func writeExemplar(w enhancedWriter, e *dto.Exemplar) (int, error) { + written := 0 + n, err := w.WriteString(" # ") + written += n + if err != nil { + return written, err + } + n, err = writeOpenMetricsLabelPairs(w, e.Label, "", 0) + written += n + if err != nil { + return written, err + } + err = w.WriteByte(' ') + written++ + if err != nil { + return written, err + } + n, err = writeOpenMetricsFloat(w, e.GetValue()) + written += n + if err != nil { + return written, err + } + if e.Timestamp != nil { + err = w.WriteByte(' ') + written++ + if err != nil { + return written, err + } + ts, err := ptypes.Timestamp((*e).Timestamp) + if err != nil { + return written, err + } + // TODO(beorn7): Format this directly from components of ts to + // avoid overflow/underflow and precision issues of the float + // conversion. + n, err = writeOpenMetricsFloat(w, float64(ts.UnixNano())/1e9) + written += n + if err != nil { + return written, err + } + } + return written, nil +} + +// writeOpenMetricsFloat works like writeFloat but appends ".0" if the resulting +// number would otherwise contain neither a "." nor an "e". +func writeOpenMetricsFloat(w enhancedWriter, f float64) (int, error) { + switch { + case f == 1: + return w.WriteString("1.0") + case f == 0: + return w.WriteString("0.0") + case f == -1: + return w.WriteString("-1.0") + case math.IsNaN(f): + return w.WriteString("NaN") + case math.IsInf(f, +1): + return w.WriteString("+Inf") + case math.IsInf(f, -1): + return w.WriteString("-Inf") + default: + bp := numBufPool.Get().(*[]byte) + *bp = strconv.AppendFloat((*bp)[:0], f, 'g', -1, 64) + if !bytes.ContainsAny(*bp, "e.") { + *bp = append(*bp, '.', '0') + } + written, err := w.Write(*bp) + numBufPool.Put(bp) + return written, err + } +} + +// writeUint is like writeInt just for uint64. +func writeUint(w enhancedWriter, u uint64) (int, error) { + bp := numBufPool.Get().(*[]byte) + *bp = strconv.AppendUint((*bp)[:0], u, 10) + written, err := w.Write(*bp) + numBufPool.Put(bp) + return written, err +} diff --git a/vendor/github.com/prometheus/common/expfmt/text_create.go b/vendor/github.com/prometheus/common/expfmt/text_create.go index 0327865eee..5ba503b065 100644 --- a/vendor/github.com/prometheus/common/expfmt/text_create.go +++ b/vendor/github.com/prometheus/common/expfmt/text_create.go @@ -423,9 +423,8 @@ var ( func writeEscapedString(w enhancedWriter, v string, includeDoubleQuote bool) (int, error) { if includeDoubleQuote { return quotedEscaper.WriteString(w, v) - } else { - return escaper.WriteString(w, v) } + return escaper.WriteString(w, v) } // writeFloat is equivalent to fmt.Fprint with a float64 argument but hardcodes diff --git a/vendor/github.com/prometheus/procfs/.golangci.yml b/vendor/github.com/prometheus/procfs/.golangci.yml index 438ca92eca..0aa09edacb 100644 --- a/vendor/github.com/prometheus/procfs/.golangci.yml +++ b/vendor/github.com/prometheus/procfs/.golangci.yml @@ -1,6 +1,4 @@ -# Run only staticcheck for now. Additional linters will be enabled one-by-one. +--- linters: enable: - - staticcheck - - govet - disable-all: true + - golint diff --git a/vendor/github.com/prometheus/procfs/CONTRIBUTING.md b/vendor/github.com/prometheus/procfs/CONTRIBUTING.md index 40503edbf1..943de7615e 100644 --- a/vendor/github.com/prometheus/procfs/CONTRIBUTING.md +++ b/vendor/github.com/prometheus/procfs/CONTRIBUTING.md @@ -2,17 +2,120 @@ Prometheus uses GitHub to manage reviews of pull requests. 
+* If you are a new contributor see: [Steps to Contribute](#steps-to-contribute) + * If you have a trivial fix or improvement, go ahead and create a pull request, - addressing (with `@...`) the maintainer of this repository (see + addressing (with `@...`) a suitable maintainer of this repository (see [MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request. * If you plan to do something more involved, first discuss your ideas on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers). This will avoid unnecessary work and surely give you and us a good deal - of inspiration. + of inspiration. Also please see our [non-goals issue](https://github.com/prometheus/docs/issues/149) on areas that the Prometheus community doesn't plan to work on. * Relevant coding style guidelines are the [Go Code Review Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) and the _Formatting and style_ section of Peter Bourgon's [Go: Best Practices for Production - Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style). + Environments](https://peter.bourgon.org/go-in-production/#formatting-and-style). + +* Be sure to sign off on the [DCO](https://github.com/probot/dco#how-it-works) + +## Steps to Contribute + +Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue. + +Please check the [`help-wanted`](https://github.com/prometheus/procfs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) label to find issues that are good for getting started. If you have questions about one of the issues, with or without the tag, please comment on them and one of the maintainers will clarify it. For a quicker response, contact us over [IRC](https://prometheus.io/community). + +For quickly compiling and testing your changes do: +``` +make test # Make sure all the tests pass before you commit and push :) +``` + +We use [`golangci-lint`](https://github.com/golangci/golangci-lint) for linting the code. If it reports an issue and you think that the warning needs to be disregarded or is a false-positive, you can add a special comment `//nolint:linter1[,linter2,...]` before the offending line. Use this sparingly though, fixing the code to comply with the linter's recommendation is in general the preferred course of action. + +## Pull Request Checklist + +* Branch from the master branch and, if needed, rebase to the current master branch before submitting your pull request. If it doesn't merge cleanly with master you may be asked to rebase your changes. + +* Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests). + +* If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review on IRC channel [#prometheus](https://webchat.freenode.net/?channels=#prometheus) on irc.freenode.net (for the easiest start, [join via Riot](https://riot.im/app/#/room/#prometheus:matrix.org)). + +* Add tests relevant to the fixed bug or new feature. + +## Dependency management + +The Prometheus project uses [Go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) to manage dependencies on external packages. This requires a working Go environment with version 1.12 or greater installed. 
+ +All dependencies are vendored in the `vendor/` directory. + +To add or update a new dependency, use the `go get` command: + +```bash +# Pick the latest tagged release. +go get example.com/some/module/pkg + +# Pick a specific version. +go get example.com/some/module/pkg@vX.Y.Z +``` + +Tidy up the `go.mod` and `go.sum` files and copy the new/updated dependency to the `vendor/` directory: + + +```bash +# The GO111MODULE variable can be omitted when the code isn't located in GOPATH. +GO111MODULE=on go mod tidy + +GO111MODULE=on go mod vendor +``` + +You have to commit the changes to `go.mod`, `go.sum` and the `vendor/` directory before submitting the pull request. + + +## API Implementation Guidelines + +### Naming and Documentation + +Public functions and structs should normally be named according to the file(s) being read and parsed. For example, +the `fs.BuddyInfo()` function reads the file `/proc/buddyinfo`. In addition, the godoc for each public function +should contain the path to the file(s) being read and a URL of the linux kernel documentation describing the file(s). + +### Reading vs. Parsing + +Most functionality in this library consists of reading files and then parsing the text into structured data. In most +cases reading and parsing should be separated into different functions/methods with a public `fs.Thing()` method and +a private `parseThing(r Reader)` function. This provides a logical separation and allows parsing to be tested +directly without the need to read from the filesystem. Using a `Reader` argument is preferred over other data types +such as `string` or `*File` because it provides the most flexibility regarding the data source. When a set of files +in a directory needs to be parsed, then a `path` string parameter to the parse function can be used instead. + +### /proc and /sys filesystem I/O + +The `proc` and `sys` filesystems are pseudo file systems and work a bit differently from standard disk I/O. +Many of the files are changing continuously and the data being read can in some cases change between subsequent +reads in the same file. Also, most of the files are relatively small (less than a few KBs), and system calls +to the `stat` function will often return the wrong size. Therefore, for most files it's recommended to read the +full file in a single operation using an internal utility function called `util.ReadFileNoStat`. +This function is similar to `ioutil.ReadFile`, but it avoids the system call to `stat` to get the current size of +the file. + +Note that parsing the file's contents can still be performed one line at a time. This is done by first reading +the full file, and then using a scanner on the `[]byte` or `string` containing the data. + +``` + data, err := util.ReadFileNoStat("/proc/cpuinfo") + if err != nil { + return err + } + reader := bytes.NewReader(data) + scanner := bufio.NewScanner(reader) +``` + +The `/sys` filesystem contains many very small files which contain only a single numeric or text value. These files +can be read using an internal function called `util.SysReadFile` which is similar to `ioutil.ReadFile` but does +not bother to check the size of the file before reading. 
+``` + data, err := util.SysReadFile("/sys/class/power_supply/BAT0/capacity") +``` + diff --git a/vendor/github.com/prometheus/procfs/Makefile.common b/vendor/github.com/prometheus/procfs/Makefile.common index d7aea1b86f..b978dfc50d 100644 --- a/vendor/github.com/prometheus/procfs/Makefile.common +++ b/vendor/github.com/prometheus/procfs/Makefile.common @@ -69,12 +69,21 @@ else GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH) endif -PROMU_VERSION ?= 0.4.0 +GOTEST := $(GO) test +GOTEST_DIR := +ifneq ($(CIRCLE_JOB),) +ifneq ($(shell which gotestsum),) + GOTEST_DIR := test-results + GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml -- +endif +endif + +PROMU_VERSION ?= 0.5.0 PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz GOLANGCI_LINT := GOLANGCI_LINT_OPTS ?= -GOLANGCI_LINT_VERSION ?= v1.16.0 +GOLANGCI_LINT_VERSION ?= v1.18.0 # golangci-lint only supports linux, darwin and windows platforms on i386/amd64. # windows isn't included here because of the path separator being different. ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin)) @@ -86,7 +95,8 @@ endif PREFIX ?= $(shell pwd) BIN_DIR ?= $(shell pwd) DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD)) -DOCKERFILE_PATH ?= ./ +DOCKERFILE_PATH ?= ./Dockerfile +DOCKERBUILD_CONTEXT ?= ./ DOCKER_REPO ?= prom DOCKER_ARCHS ?= amd64 @@ -141,14 +151,17 @@ else endif .PHONY: common-test-short -common-test-short: +common-test-short: $(GOTEST_DIR) @echo ">> running short tests" - GO111MODULE=$(GO111MODULE) $(GO) test -short $(GOOPTS) $(pkgs) + GO111MODULE=$(GO111MODULE) $(GOTEST) -short $(GOOPTS) $(pkgs) .PHONY: common-test -common-test: +common-test: $(GOTEST_DIR) @echo ">> running all tests" - GO111MODULE=$(GO111MODULE) $(GO) test $(test-flags) $(GOOPTS) $(pkgs) + GO111MODULE=$(GO111MODULE) $(GOTEST) $(test-flags) $(GOOPTS) $(pkgs) + +$(GOTEST_DIR): + @mkdir -p $@ .PHONY: common-format common-format: @@ -200,7 +213,7 @@ endif .PHONY: common-build common-build: promu @echo ">> building binaries" - GO111MODULE=$(GO111MODULE) $(PROMU) build --prefix $(PREFIX) + GO111MODULE=$(GO111MODULE) $(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES) .PHONY: common-tarball common-tarball: promu @@ -211,9 +224,10 @@ common-tarball: promu common-docker: $(BUILD_DOCKER_ARCHS) $(BUILD_DOCKER_ARCHS): common-docker-%: docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \ + -f $(DOCKERFILE_PATH) \ --build-arg ARCH="$*" \ --build-arg OS="linux" \ - $(DOCKERFILE_PATH) + $(DOCKERBUILD_CONTEXT) .PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS) common-docker-publish: $(PUBLISH_DOCKER_ARCHS) diff --git a/vendor/github.com/prometheus/procfs/README.md b/vendor/github.com/prometheus/procfs/README.md index 6f8850feb6..55d1e3261c 100644 --- a/vendor/github.com/prometheus/procfs/README.md +++ b/vendor/github.com/prometheus/procfs/README.md @@ -1,6 +1,6 @@ # procfs -This procfs package provides functions to retrieve system, kernel and process +This package provides functions to retrieve system, kernel, and process metrics from the pseudo-filesystems /proc and /sys. *WARNING*: This package is a work in progress. Its API may still break in @@ -13,7 +13,8 @@ backwards-incompatible ways without warnings. Use it at your own risk. ## Usage The procfs library is organized by packages based on whether the gathered data is coming from -/proc, /sys, or both. 
Each package contains an `FS` type which represents the path to either /proc, /sys, or both. For example, current cpu statistics are gathered from +/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc, +/sys, or both. For example, cpu statistics are gathered from `/proc/stat` and are available via the root procfs package. First, the proc filesystem mount point is initialized, and then the stat information is read. @@ -29,10 +30,17 @@ Some sub-packages such as `blockdevice`, require access to both the proc and sys stats, err := fs.ProcDiskstats() ``` +## Package Organization + +The packages in this project are organized according to (1) whether the data comes from the `/proc` or +`/sys` filesystem and (2) the type of information being retrieved. For example, most process information +can be gathered from the functions in the root `procfs` package. Information about block devices such as disk drives +is available in the `blockdevices` sub-package. + ## Building and Testing -The procfs library is normally built as part of another application. However, when making -changes to the library, the `make test` command can be used to run the API test suite. +The procfs library is intended to be built as part of another application, so there are no distributable binaries. +However, most of the API includes unit tests which can be run with `make test`. ### Updating Test Fixtures diff --git a/vendor/github.com/prometheus/procfs/cpuinfo.go b/vendor/github.com/prometheus/procfs/cpuinfo.go index 16491d6abb..2e02215528 100644 --- a/vendor/github.com/prometheus/procfs/cpuinfo.go +++ b/vendor/github.com/prometheus/procfs/cpuinfo.go @@ -16,9 +16,10 @@ package procfs import ( "bufio" "bytes" - "io/ioutil" "strconv" "strings" + + "github.com/prometheus/procfs/internal/util" ) // CPUInfo contains general information about a system CPU found in /proc/cpuinfo @@ -54,7 +55,7 @@ type CPUInfo struct { // CPUInfo returns information about current system CPUs. // See https://www.kernel.org/doc/Documentation/filesystems/proc.txt func (fs FS) CPUInfo() ([]CPUInfo, error) { - data, err := ioutil.ReadFile(fs.proc.Path("cpuinfo")) + data, err := util.ReadFileNoStat(fs.proc.Path("cpuinfo")) if err != nil { return nil, err } diff --git a/vendor/github.com/prometheus/procfs/crypto.go b/vendor/github.com/prometheus/procfs/crypto.go index 19d4041b29..a958933757 100644 --- a/vendor/github.com/prometheus/procfs/crypto.go +++ b/vendor/github.com/prometheus/procfs/crypto.go @@ -14,10 +14,10 @@ package procfs import ( + "bufio" "bytes" "fmt" - "io/ioutil" - "strconv" + "io" "strings" "github.com/prometheus/procfs/internal/util" @@ -52,80 +52,102 @@ type Crypto struct { // structs containing the relevant info. 
More information available here: // https://kernel.readthedocs.io/en/sphinx-samples/crypto-API.html func (fs FS) Crypto() ([]Crypto, error) { - data, err := ioutil.ReadFile(fs.proc.Path("crypto")) + path := fs.proc.Path("crypto") + b, err := util.ReadFileNoStat(path) if err != nil { - return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err) + return nil, fmt.Errorf("error reading crypto %s: %s", path, err) } - crypto, err := parseCrypto(data) + + crypto, err := parseCrypto(bytes.NewReader(b)) if err != nil { - return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err) + return nil, fmt.Errorf("error parsing crypto %s: %s", path, err) } + return crypto, nil } -func parseCrypto(cryptoData []byte) ([]Crypto, error) { - crypto := []Crypto{} - - cryptoBlocks := bytes.Split(cryptoData, []byte("\n\n")) - - for _, block := range cryptoBlocks { - var newCryptoElem Crypto - - lines := strings.Split(string(block), "\n") - for _, line := range lines { - if strings.TrimSpace(line) == "" || line[0] == ' ' { - continue - } - fields := strings.Split(line, ":") - key := strings.TrimSpace(fields[0]) - value := strings.TrimSpace(fields[1]) - vp := util.NewValueParser(value) - - switch strings.TrimSpace(key) { - case "async": - b, err := strconv.ParseBool(value) - if err == nil { - newCryptoElem.Async = b - } - case "blocksize": - newCryptoElem.Blocksize = vp.PUInt64() - case "chunksize": - newCryptoElem.Chunksize = vp.PUInt64() - case "digestsize": - newCryptoElem.Digestsize = vp.PUInt64() - case "driver": - newCryptoElem.Driver = value - case "geniv": - newCryptoElem.Geniv = value - case "internal": - newCryptoElem.Internal = value - case "ivsize": - newCryptoElem.Ivsize = vp.PUInt64() - case "maxauthsize": - newCryptoElem.Maxauthsize = vp.PUInt64() - case "max keysize": - newCryptoElem.MaxKeysize = vp.PUInt64() - case "min keysize": - newCryptoElem.MinKeysize = vp.PUInt64() - case "module": - newCryptoElem.Module = value - case "name": - newCryptoElem.Name = value - case "priority": - newCryptoElem.Priority = vp.PInt64() - case "refcnt": - newCryptoElem.Refcnt = vp.PInt64() - case "seedsize": - newCryptoElem.Seedsize = vp.PUInt64() - case "selftest": - newCryptoElem.Selftest = value - case "type": - newCryptoElem.Type = value - case "walksize": - newCryptoElem.Walksize = vp.PUInt64() - } +// parseCrypto parses a /proc/crypto stream into Crypto elements. +func parseCrypto(r io.Reader) ([]Crypto, error) { + var out []Crypto + + s := bufio.NewScanner(r) + for s.Scan() { + text := s.Text() + switch { + case strings.HasPrefix(text, "name"): + // Each crypto element begins with its name. + out = append(out, Crypto{}) + case text == "": + continue + } + + kv := strings.Split(text, ":") + if len(kv) != 2 { + return nil, fmt.Errorf("malformed crypto line: %q", text) + } + + k := strings.TrimSpace(kv[0]) + v := strings.TrimSpace(kv[1]) + + // Parse the key/value pair into the currently focused element. + c := &out[len(out)-1] + if err := c.parseKV(k, v); err != nil { + return nil, err } - crypto = append(crypto, newCryptoElem) } - return crypto, nil + + if err := s.Err(); err != nil { + return nil, err + } + + return out, nil +} + +// parseKV parses a key/value pair into the appropriate field of c. +func (c *Crypto) parseKV(k, v string) error { + vp := util.NewValueParser(v) + + switch k { + case "async": + // Interpret literal yes as true. 
+ c.Async = v == "yes" + case "blocksize": + c.Blocksize = vp.PUInt64() + case "chunksize": + c.Chunksize = vp.PUInt64() + case "digestsize": + c.Digestsize = vp.PUInt64() + case "driver": + c.Driver = v + case "geniv": + c.Geniv = v + case "internal": + c.Internal = v + case "ivsize": + c.Ivsize = vp.PUInt64() + case "maxauthsize": + c.Maxauthsize = vp.PUInt64() + case "max keysize": + c.MaxKeysize = vp.PUInt64() + case "min keysize": + c.MinKeysize = vp.PUInt64() + case "module": + c.Module = v + case "name": + c.Name = v + case "priority": + c.Priority = vp.PInt64() + case "refcnt": + c.Refcnt = vp.PInt64() + case "seedsize": + c.Seedsize = vp.PUInt64() + case "selftest": + c.Selftest = v + case "type": + c.Type = v + case "walksize": + c.Walksize = vp.PUInt64() + } + + return vp.Err() } diff --git a/vendor/github.com/prometheus/procfs/fixtures.ttar b/vendor/github.com/prometheus/procfs/fixtures.ttar index 0b29055447..45a7321558 100644 --- a/vendor/github.com/prometheus/procfs/fixtures.ttar +++ b/vendor/github.com/prometheus/procfs/fixtures.ttar @@ -189,7 +189,7 @@ Ngid: 0 Pid: 26231 PPid: 1 TracerPid: 0 -Uid: 0 0 0 0 +Uid: 1000 1000 1000 0 Gid: 0 0 0 0 FDSize: 128 Groups: @@ -289,6 +289,19 @@ Max realtime priority 0 0 Max realtime timeout unlimited unlimited us Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/26232/maps +Lines: 9 +55680ae1e000-55680ae20000 r--p 00000000 fd:01 47316994 /bin/cat +55680ae29000-55680ae2a000 rwxs 0000a000 fd:01 47316994 /bin/cat +55680bed6000-55680bef7000 rw-p 00000000 00:00 0 [heap] +7fdf964fc000-7fdf973f2000 r--p 00000000 fd:01 17432624 /usr/lib/locale/locale-archive +7fdf973f2000-7fdf97417000 r--p 00000000 fd:01 60571062 /lib/x86_64-linux-gnu/libc-2.29.so +7ffe9215c000-7ffe9217f000 rw-p 00000000 00:00 0 [stack] +7ffe921da000-7ffe921dd000 r--p 00000000 00:00 0 [vvar] +7ffe921dd000-7ffe921de000 r-xp 00000000 00:00 0 [vdso] +ffffffffff600000-ffffffffff601000 --xp 00000000 00:00 0 [vsyscall] +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Path: fixtures/proc/26232/root SymlinkTo: /does/not/exist # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -317,6 +330,17 @@ Lines: 8 || || Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/proc/26234 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/26234/maps +Lines: 4 +08048000-08089000 r-xp 00000000 03:01 104219 /bin/tcsh +08089000-0808c000 rw-p 00041000 03:01 104219 /bin/tcsh +0808c000-08146000 rwxp 00000000 00:00 0 +40000000-40015000 r-xp 00000000 03:01 61874 /lib/ld-2.3.2.so +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Directory: fixtures/proc/584 Mode: 755 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -554,7 +578,7 @@ power management: Mode: 444 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Path: fixtures/proc/crypto -Lines: 971 +Lines: 972 name : ccm(aes) driver : ccm_base(ctr(aes-aesni),cbcmac(aes-aesni)) module : ccm @@ -588,6 +612,7 @@ refcnt : 1 selftest : passed internal : no type : kpp +async : yes name : ecb(arc4) driver : ecb(arc4)-generic @@ -1614,6 +1639,11 @@ xpc 399724544 92823103 86219234 debug 0 Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/loadavg +Lines: 1 +0.02 0.04 
0.05 1/497 11947 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Path: fixtures/proc/mdstat Lines: 56 Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] @@ -1674,6 +1704,52 @@ md101 : active (read-only) raid0 sdb[2] sdd[1] sdc[0] unused devices: Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/meminfo +Lines: 42 +MemTotal: 15666184 kB +MemFree: 440324 kB +Buffers: 1020128 kB +Cached: 12007640 kB +SwapCached: 0 kB +Active: 6761276 kB +Inactive: 6532708 kB +Active(anon): 267256 kB +Inactive(anon): 268 kB +Active(file): 6494020 kB +Inactive(file): 6532440 kB +Unevictable: 0 kB +Mlocked: 0 kB +SwapTotal: 0 kB +SwapFree: 0 kB +Dirty: 768 kB +Writeback: 0 kB +AnonPages: 266216 kB +Mapped: 44204 kB +Shmem: 1308 kB +Slab: 1807264 kB +SReclaimable: 1738124 kB +SUnreclaim: 69140 kB +KernelStack: 1616 kB +PageTables: 5288 kB +NFS_Unstable: 0 kB +Bounce: 0 kB +WritebackTmp: 0 kB +CommitLimit: 7833092 kB +Committed_AS: 530844 kB +VmallocTotal: 34359738367 kB +VmallocUsed: 36596 kB +VmallocChunk: 34359637840 kB +HardwareCorrupted: 0 kB +AnonHugePages: 12288 kB +HugePages_Total: 0 +HugePages_Free: 0 +HugePages_Rsvd: 0 +HugePages_Surp: 0 +Hugepagesize: 2048 kB +DirectMap4k: 91136 kB +DirectMap2M: 16039936 kB +Mode: 664 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Directory: fixtures/proc/net Mode: 755 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -1755,9 +1831,55 @@ proc4 2 2 10853 proc4ops 72 0 0 0 1098 2 0 0 0 0 8179 5896 0 0 0 0 5900 0 0 2 0 2 0 9609 0 2 150 1272 0 0 0 1236 0 0 0 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/net/sockstat +Lines: 6 +sockets: used 1602 +TCP: inuse 35 orphan 0 tw 4 alloc 59 mem 22 +UDP: inuse 12 mem 62 +UDPLITE: inuse 0 +RAW: inuse 0 +FRAG: inuse 0 memory 0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/net/sockstat6 +Lines: 5 +TCP6: inuse 17 +UDP6: inuse 9 +UDPLITE6: inuse 0 +RAW6: inuse 1 +FRAG6: inuse 0 memory 0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Path: fixtures/proc/net/softnet_stat -Lines: 1 +Lines: 2 00015c73 00020e76 F0000769 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 +01663fb2 00000000 000109a4 00000000 00000000 00000000 00000000 00000000 00000000 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/net/softnet_stat.broken +Lines: 1 +00015c73 00020e76 F0000769 00000000 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/net/udp +Lines: 4 + sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode + 0: 0A000005:0016 00000000:0000 0A 00000000:00000001 00:00000000 00000000 0 0 2740 1 ffff88003d3af3c0 100 0 0 10 0 + 1: 00000000:0016 00000000:0000 0A 00000001:00000000 00:00000000 00000000 0 0 2740 1 ffff88003d3af3c0 100 0 0 10 0 + 2: 00000000:0016 00000000:0000 0A 00000001:00000001 00:00000000 00000000 0 0 2740 1 ffff88003d3af3c0 100 0 0 10 0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/net/udp6 +Lines: 3 + sl local_address remote_address st tx_queue rx_queue tr 
tm->when retrnsmt uid timeout inode ref pointer drops + 1315: 00000000000000000000000000000000:14EB 00000000000000000000000000000000:0000 07 00000000:00000000 00:00000000 00000000 981 0 21040 2 0000000013726323 0 + 6073: 000080FE00000000FFADE15609667CFE:C781 00000000000000000000000000000000:0000 07 00000000:00000000 00:00000000 00000000 1000 0 11337031 2 00000000b9256fdd 0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/net/udp_broken +Lines: 2 + sl local_address rem_address st + 1: 00000000:0016 00000000:0000 0A Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Path: fixtures/proc/net/unix @@ -1865,6 +1987,12 @@ procs_blocked 1 softirq 5057579 250191 1481983 1647 211099 186066 0 1783454 622196 12499 508444 Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/proc/swaps +Lines: 2 +Filename Type Size Used Priority +/dev/dm-2 partition 131068 176 -2 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Directory: fixtures/proc/symlinktargets Mode: 755 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -2776,6 +2904,134 @@ SymlinkTo: ../../devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00 Path: fixtures/sys/class/power_supply/BAT0 SymlinkTo: ../../devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/class/powercap +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/class/powercap/intel-rapl +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl/enabled +Lines: 1 +1 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl/uevent +Lines: 0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/class/powercap/intel-rapl:0 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_max_power_uw +Lines: 1 +95000000 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_name +Lines: 1 +long_term +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw +Lines: 1 +4090000000 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_time_window_us +Lines: 1 +999424 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_max_power_uw +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_name +Lines: 1 +short_term +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw +Lines: 1 +4090000000 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_time_window_us +Lines: 1 +2440 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/enabled +Lines: 1 +1 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/energy_uj +Lines: 1 +240422366267 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/max_energy_range_uj +Lines: 1 +262143328850 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/name +Lines: 1 +package-0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0/uevent +Lines: 0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/class/powercap/intel-rapl:0:0 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_max_power_uw +Lines: 0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_name +Lines: 1 +long_term +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_power_limit_uw +Lines: 1 +0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_time_window_us +Lines: 1 +976 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/enabled +Lines: 1 +0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/energy_uj +Lines: 1 +118821284256 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/max_energy_range_uj +Lines: 1 +262143328850 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/name +Lines: 1 +core +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/class/powercap/intel-rapl:0:0/uevent +Lines: 0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Directory: fixtures/sys/class/thermal Mode: 775 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -4278,6 +4534,581 @@ Lines: 1 0 Mode: 644 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: 
fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_may_use +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_readonly +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_reserved +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_used +Lines: 1 +808189952 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/disk_total +Lines: 1 +2147483648 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/disk_used +Lines: 1 +808189952 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/flags +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/raid0 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/raid0/total_bytes +Lines: 1 +2147483648 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/raid0/used_bytes +Lines: 1 +808189952 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/total_bytes +Lines: 1 +2147483648 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/total_bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/global_rsv_reserved +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/global_rsv_size +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_may_use +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_readonly +Lines: 1 +131072 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_reserved +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_used +Lines: 1 +933888 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/disk_total +Lines: 1 +2147483648 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/disk_used +Lines: 1 +1867776 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/flags +Lines: 1 +4 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/raid1 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/raid1/total_bytes +Lines: 1 +1073741824 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/raid1/used_bytes +Lines: 1 +933888 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/total_bytes +Lines: 1 +1073741824 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/total_bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_may_use +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_readonly +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_reserved +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: 
fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_used +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/disk_total +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/disk_used +Lines: 1 +32768 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/flags +Lines: 1 +2 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/raid1 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/raid1/total_bytes +Lines: 1 +8388608 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/raid1/used_bytes +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/total_bytes +Lines: 1 +8388608 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/total_bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/clone_alignment +Lines: 1 +4096 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop25 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop25/size +Lines: 1 +20971520 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop26 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop26/size +Lines: 1 +20971520 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/big_metadata +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/extended_iref +Lines: 1 +1 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: 
fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/mixed_backref +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/skinny_metadata +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/label +Lines: 1 +fixture +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/metadata_uuid +Lines: 1 +0abb23a9-579b-43e6-ad30-227ef47fcb9d +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/nodesize +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/quota_override +Lines: 1 +0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/sectorsize +Lines: 1 +4096 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_may_use +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_readonly +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_reserved +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_used +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/disk_total +Lines: 1 +644087808 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/disk_used +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/flags +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/raid5 +Mode: 755 +# ttar - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/raid5/total_bytes +Lines: 1 +644087808 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/raid5/used_bytes +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/total_bytes +Lines: 1 +644087808 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/total_bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/global_rsv_reserved +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/global_rsv_size +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_may_use +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_readonly +Lines: 1 +262144 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_reserved +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_used +Lines: 1 +114688 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/disk_total +Lines: 1 +429391872 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/disk_used +Lines: 1 +114688 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/flags +Lines: 1 +4 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/raid6 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/raid6/total_bytes +Lines: 1 +429391872 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: 
fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/raid6/used_bytes +Lines: 1 +114688 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/total_bytes +Lines: 1 +429391872 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/total_bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_may_use +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_readonly +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_reserved +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_used +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/disk_total +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/disk_used +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/flags +Lines: 1 +2 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/raid6 +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/raid6/total_bytes +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/raid6/used_bytes +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/total_bytes +Lines: 1 +16777216 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/total_bytes_pinned +Lines: 1 +0 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/clone_alignment +Lines: 1 +4096 +Mode: 444 +# 
ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop22 +SymlinkTo: ../../../../devices/virtual/block/loop22 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop23 +SymlinkTo: ../../../../devices/virtual/block/loop23 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop24 +SymlinkTo: ../../../../devices/virtual/block/loop24 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop25 +SymlinkTo: ../../../../devices/virtual/block/loop25 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features +Mode: 755 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/big_metadata +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/extended_iref +Lines: 1 +1 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/mixed_backref +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/raid56 +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/skinny_metadata +Lines: 1 +1 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/label +Lines: 0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/metadata_uuid +Lines: 1 +7f07c59f-6136-449c-ab87-e1cf2328731b +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/nodesize +Lines: 1 +16384 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/quota_override +Lines: 1 +0 +Mode: 644 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/sectorsize +Lines: 1 +4096 +Mode: 444 +# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Directory: fixtures/sys/fs/xfs Mode: 755 # ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/vendor/github.com/prometheus/procfs/go.mod b/vendor/github.com/prometheus/procfs/go.mod index b2f8cca933..ded48253cd 100644 --- a/vendor/github.com/prometheus/procfs/go.mod +++ b/vendor/github.com/prometheus/procfs/go.mod @@ -1,6 +1,9 @@ module 
github.com/prometheus/procfs +go 1.12 + require ( - github.com/google/go-cmp v0.3.0 - golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 + github.com/google/go-cmp v0.3.1 + golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e + golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e ) diff --git a/vendor/github.com/prometheus/procfs/go.sum b/vendor/github.com/prometheus/procfs/go.sum index db54133d7c..54b5f33033 100644 --- a/vendor/github.com/prometheus/procfs/go.sum +++ b/vendor/github.com/prometheus/procfs/go.sum @@ -1,4 +1,6 @@ -github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg= +github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e h1:LwyF2AFISC9nVbS6MgzsaQNSUsRXI49GS+YQ5KX/QH0= +golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= diff --git a/vendor/github.com/prometheus/procfs/internal/fs/fs.go b/vendor/github.com/prometheus/procfs/internal/fs/fs.go index 7ddfd6b6ed..565e89e42c 100644 --- a/vendor/github.com/prometheus/procfs/internal/fs/fs.go +++ b/vendor/github.com/prometheus/procfs/internal/fs/fs.go @@ -26,7 +26,7 @@ const ( // DefaultSysMountPoint is the common mount point of the sys filesystem. DefaultSysMountPoint = "/sys" - // DefaultConfigfsMountPoint is the commont mount point of the configfs + // DefaultConfigfsMountPoint is the common mount point of the configfs DefaultConfigfsMountPoint = "/sys/kernel/config" ) diff --git a/vendor/github.com/prometheus/procfs/internal/util/readfile.go b/vendor/github.com/prometheus/procfs/internal/util/readfile.go new file mode 100644 index 0000000000..8051161b2a --- /dev/null +++ b/vendor/github.com/prometheus/procfs/internal/util/readfile.go @@ -0,0 +1,38 @@ +// Copyright 2019 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package util + +import ( + "io" + "io/ioutil" + "os" +) + +// ReadFileNoStat uses ioutil.ReadAll to read contents of entire file. +// This is similar to ioutil.ReadFile but without the call to os.Stat, because +// many files in /proc and /sys report incorrect file sizes (either 0 or 4096). +// Reads a max file size of 512kB. For files larger than this, a scanner +// should be used. 
+func ReadFileNoStat(filename string) ([]byte, error) { + const maxBufferSize = 1024 * 512 + + f, err := os.Open(filename) + if err != nil { + return nil, err + } + defer f.Close() + + reader := io.LimitReader(f, maxBufferSize) + return ioutil.ReadAll(reader) +} diff --git a/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go b/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go index 68b37c4b3c..c07de0b6c9 100644 --- a/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go +++ b/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go @@ -23,6 +23,8 @@ import ( // SysReadFile is a simplified ioutil.ReadFile that invokes syscall.Read directly. // https://github.com/prometheus/node_exporter/pull/728/files +// +// Note that this function will not read files larger than 128 bytes. func SysReadFile(file string) (string, error) { f, err := os.Open(file) if err != nil { @@ -35,7 +37,8 @@ func SysReadFile(file string) (string, error) { // // Since we either want to read data or bail immediately, do the simplest // possible read using syscall directly. - b := make([]byte, 128) + const sysFileBufferSize = 128 + b := make([]byte, sysFileBufferSize) n, err := syscall.Read(int(f.Fd()), b) if err != nil { return "", err diff --git a/vendor/github.com/prometheus/procfs/internal/util/valueparser.go b/vendor/github.com/prometheus/procfs/internal/util/valueparser.go index ac93cb42d2..fe2355d3c6 100644 --- a/vendor/github.com/prometheus/procfs/internal/util/valueparser.go +++ b/vendor/github.com/prometheus/procfs/internal/util/valueparser.go @@ -33,6 +33,9 @@ func NewValueParser(v string) *ValueParser { return &ValueParser{v: v} } +// Int interprets the underlying value as an int and returns that value. +func (vp *ValueParser) Int() int { return int(vp.int64()) } + // PInt64 interprets the underlying value as an int64 and returns a pointer to // that value. func (vp *ValueParser) PInt64() *int64 { @@ -40,16 +43,27 @@ func (vp *ValueParser) PInt64() *int64 { return nil } + v := vp.int64() + return &v +} + +// int64 interprets the underlying value as an int64 and returns that value. +// TODO: export if/when necessary. +func (vp *ValueParser) int64() int64 { + if vp.err != nil { + return 0 + } + // A base value of zero makes ParseInt infer the correct base using the // string's prefix, if any. const base = 0 v, err := strconv.ParseInt(vp.v, base, 64) if err != nil { vp.err = err - return nil + return 0 } - return &v + return v } // PUInt64 interprets the underlying value as an uint64 and returns a pointer to diff --git a/vendor/github.com/prometheus/procfs/ipvs.go b/vendor/github.com/prometheus/procfs/ipvs.go index 2d6cb8d1c6..89e447746c 100644 --- a/vendor/github.com/prometheus/procfs/ipvs.go +++ b/vendor/github.com/prometheus/procfs/ipvs.go @@ -15,6 +15,7 @@ package procfs import ( "bufio" + "bytes" "encoding/hex" "errors" "fmt" @@ -24,6 +25,8 @@ import ( "os" "strconv" "strings" + + "github.com/prometheus/procfs/internal/util" ) // IPVSStats holds IPVS statistics, as exposed by the kernel in `/proc/net/ip_vs_stats`. @@ -64,17 +67,16 @@ type IPVSBackendStatus struct { // IPVSStats reads the IPVS statistics from the specified `proc` filesystem. 
func (fs FS) IPVSStats() (IPVSStats, error) { - file, err := os.Open(fs.proc.Path("net/ip_vs_stats")) + data, err := util.ReadFileNoStat(fs.proc.Path("net/ip_vs_stats")) if err != nil { return IPVSStats{}, err } - defer file.Close() - return parseIPVSStats(file) + return parseIPVSStats(bytes.NewReader(data)) } // parseIPVSStats performs the actual parsing of `ip_vs_stats`. -func parseIPVSStats(file io.Reader) (IPVSStats, error) { +func parseIPVSStats(r io.Reader) (IPVSStats, error) { var ( statContent []byte statLines []string @@ -82,7 +84,7 @@ func parseIPVSStats(file io.Reader) (IPVSStats, error) { stats IPVSStats ) - statContent, err := ioutil.ReadAll(file) + statContent, err := ioutil.ReadAll(r) if err != nil { return IPVSStats{}, err } diff --git a/vendor/github.com/prometheus/procfs/loadavg.go b/vendor/github.com/prometheus/procfs/loadavg.go new file mode 100644 index 0000000000..00bbe14417 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/loadavg.go @@ -0,0 +1,62 @@ +// Copyright 2019 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "fmt" + "strconv" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// LoadAvg represents an entry in /proc/loadavg +type LoadAvg struct { + Load1 float64 + Load5 float64 + Load15 float64 +} + +// LoadAvg returns loadavg from /proc. +func (fs FS) LoadAvg() (*LoadAvg, error) { + path := fs.proc.Path("loadavg") + + data, err := util.ReadFileNoStat(path) + if err != nil { + return nil, err + } + return parseLoad(data) +} + +// Parse /proc loadavg and return 1m, 5m and 15m. +func parseLoad(loadavgBytes []byte) (*LoadAvg, error) { + loads := make([]float64, 3) + parts := strings.Fields(string(loadavgBytes)) + if len(parts) < 3 { + return nil, fmt.Errorf("malformed loadavg line: too few fields in loadavg string: %s", string(loadavgBytes)) + } + + var err error + for i, load := range parts[0:3] { + loads[i], err = strconv.ParseFloat(load, 64) + if err != nil { + return nil, fmt.Errorf("could not parse load '%s': %s", load, err) + } + } + return &LoadAvg{ + Load1: loads[0], + Load5: loads[1], + Load15: loads[2], + }, nil +} diff --git a/vendor/github.com/prometheus/procfs/meminfo.go b/vendor/github.com/prometheus/procfs/meminfo.go new file mode 100644 index 0000000000..50dab4bcd5 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/meminfo.go @@ -0,0 +1,277 @@ +// Copyright 2019 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package procfs + +import ( + "bufio" + "bytes" + "fmt" + "io" + "strconv" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// Meminfo represents memory statistics. +type Meminfo struct { + // Total usable ram (i.e. physical ram minus a few reserved + // bits and the kernel binary code) + MemTotal uint64 + // The sum of LowFree+HighFree + MemFree uint64 + // An estimate of how much memory is available for starting + // new applications, without swapping. Calculated from + // MemFree, SReclaimable, the size of the file LRU lists, and + // the low watermarks in each zone. The estimate takes into + // account that the system needs some page cache to function + // well, and that not all reclaimable slab will be + // reclaimable, due to items being in use. The impact of those + // factors will vary from system to system. + MemAvailable uint64 + // Relatively temporary storage for raw disk blocks shouldn't + // get tremendously large (20MB or so) + Buffers uint64 + Cached uint64 + // Memory that once was swapped out, is swapped back in but + // still also is in the swapfile (if memory is needed it + // doesn't need to be swapped out AGAIN because it is already + // in the swapfile. This saves I/O) + SwapCached uint64 + // Memory that has been used more recently and usually not + // reclaimed unless absolutely necessary. + Active uint64 + // Memory which has been less recently used. It is more + // eligible to be reclaimed for other purposes + Inactive uint64 + ActiveAnon uint64 + InactiveAnon uint64 + ActiveFile uint64 + InactiveFile uint64 + Unevictable uint64 + Mlocked uint64 + // total amount of swap space available + SwapTotal uint64 + // Memory which has been evicted from RAM, and is temporarily + // on the disk + SwapFree uint64 + // Memory which is waiting to get written back to the disk + Dirty uint64 + // Memory which is actively being written back to the disk + Writeback uint64 + // Non-file backed pages mapped into userspace page tables + AnonPages uint64 + // files which have been mapped, such as libraries + Mapped uint64 + Shmem uint64 + // in-kernel data structures cache + Slab uint64 + // Part of Slab, that might be reclaimed, such as caches + SReclaimable uint64 + // Part of Slab, that cannot be reclaimed on memory pressure + SUnreclaim uint64 + KernelStack uint64 + // amount of memory dedicated to the lowest level of page + // tables. + PageTables uint64 + // NFS pages sent to the server, but not yet committed to + // stable storage + NFSUnstable uint64 + // Memory used for block device "bounce buffers" + Bounce uint64 + // Memory used by FUSE for temporary writeback buffers + WritebackTmp uint64 + // Based on the overcommit ratio ('vm.overcommit_ratio'), + // this is the total amount of memory currently available to + // be allocated on the system. This limit is only adhered to + // if strict overcommit accounting is enabled (mode 2 in + // 'vm.overcommit_memory'). + // The CommitLimit is calculated with the following formula: + // CommitLimit = ([total RAM pages] - [total huge TLB pages]) * + // overcommit_ratio / 100 + [total swap pages] + // For example, on a system with 1G of physical RAM and 7G + // of swap with a `vm.overcommit_ratio` of 30 it would + // yield a CommitLimit of 7.3G. + // For more details, see the memory overcommit documentation + // in vm/overcommit-accounting. + CommitLimit uint64 + // The amount of memory presently allocated on the system. 
+ // The committed memory is a sum of all of the memory which + // has been allocated by processes, even if it has not been + // "used" by them as of yet. A process which malloc()'s 1G + // of memory, but only touches 300M of it will show up as + // using 1G. This 1G is memory which has been "committed" to + // by the VM and can be used at any time by the allocating + // application. With strict overcommit enabled on the system + // (mode 2 in 'vm.overcommit_memory'),allocations which would + // exceed the CommitLimit (detailed above) will not be permitted. + // This is useful if one needs to guarantee that processes will + // not fail due to lack of memory once that memory has been + // successfully allocated. + CommittedAS uint64 + // total size of vmalloc memory area + VmallocTotal uint64 + // amount of vmalloc area which is used + VmallocUsed uint64 + // largest contiguous block of vmalloc area which is free + VmallocChunk uint64 + HardwareCorrupted uint64 + AnonHugePages uint64 + ShmemHugePages uint64 + ShmemPmdMapped uint64 + CmaTotal uint64 + CmaFree uint64 + HugePagesTotal uint64 + HugePagesFree uint64 + HugePagesRsvd uint64 + HugePagesSurp uint64 + Hugepagesize uint64 + DirectMap4k uint64 + DirectMap2M uint64 + DirectMap1G uint64 +} + +// Meminfo returns an information about current kernel/system memory statistics. +// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt +func (fs FS) Meminfo() (Meminfo, error) { + b, err := util.ReadFileNoStat(fs.proc.Path("meminfo")) + if err != nil { + return Meminfo{}, err + } + + m, err := parseMemInfo(bytes.NewReader(b)) + if err != nil { + return Meminfo{}, fmt.Errorf("failed to parse meminfo: %v", err) + } + + return *m, nil +} + +func parseMemInfo(r io.Reader) (*Meminfo, error) { + var m Meminfo + s := bufio.NewScanner(r) + for s.Scan() { + // Each line has at least a name and value; we ignore the unit. 
+ fields := strings.Fields(s.Text()) + if len(fields) < 2 { + return nil, fmt.Errorf("malformed meminfo line: %q", s.Text()) + } + + v, err := strconv.ParseUint(fields[1], 0, 64) + if err != nil { + return nil, err + } + + switch fields[0] { + case "MemTotal:": + m.MemTotal = v + case "MemFree:": + m.MemFree = v + case "MemAvailable:": + m.MemAvailable = v + case "Buffers:": + m.Buffers = v + case "Cached:": + m.Cached = v + case "SwapCached:": + m.SwapCached = v + case "Active:": + m.Active = v + case "Inactive:": + m.Inactive = v + case "Active(anon):": + m.ActiveAnon = v + case "Inactive(anon):": + m.InactiveAnon = v + case "Active(file):": + m.ActiveFile = v + case "Inactive(file):": + m.InactiveFile = v + case "Unevictable:": + m.Unevictable = v + case "Mlocked:": + m.Mlocked = v + case "SwapTotal:": + m.SwapTotal = v + case "SwapFree:": + m.SwapFree = v + case "Dirty:": + m.Dirty = v + case "Writeback:": + m.Writeback = v + case "AnonPages:": + m.AnonPages = v + case "Mapped:": + m.Mapped = v + case "Shmem:": + m.Shmem = v + case "Slab:": + m.Slab = v + case "SReclaimable:": + m.SReclaimable = v + case "SUnreclaim:": + m.SUnreclaim = v + case "KernelStack:": + m.KernelStack = v + case "PageTables:": + m.PageTables = v + case "NFS_Unstable:": + m.NFSUnstable = v + case "Bounce:": + m.Bounce = v + case "WritebackTmp:": + m.WritebackTmp = v + case "CommitLimit:": + m.CommitLimit = v + case "Committed_AS:": + m.CommittedAS = v + case "VmallocTotal:": + m.VmallocTotal = v + case "VmallocUsed:": + m.VmallocUsed = v + case "VmallocChunk:": + m.VmallocChunk = v + case "HardwareCorrupted:": + m.HardwareCorrupted = v + case "AnonHugePages:": + m.AnonHugePages = v + case "ShmemHugePages:": + m.ShmemHugePages = v + case "ShmemPmdMapped:": + m.ShmemPmdMapped = v + case "CmaTotal:": + m.CmaTotal = v + case "CmaFree:": + m.CmaFree = v + case "HugePages_Total:": + m.HugePagesTotal = v + case "HugePages_Free:": + m.HugePagesFree = v + case "HugePages_Rsvd:": + m.HugePagesRsvd = v + case "HugePages_Surp:": + m.HugePagesSurp = v + case "Hugepagesize:": + m.Hugepagesize = v + case "DirectMap4k:": + m.DirectMap4k = v + case "DirectMap2M:": + m.DirectMap2M = v + case "DirectMap1G:": + m.DirectMap1G = v + } + } + + return &m, nil +} diff --git a/vendor/github.com/prometheus/procfs/mountinfo.go b/vendor/github.com/prometheus/procfs/mountinfo.go index 61fa618874..9471136101 100644 --- a/vendor/github.com/prometheus/procfs/mountinfo.go +++ b/vendor/github.com/prometheus/procfs/mountinfo.go @@ -15,19 +15,13 @@ package procfs import ( "bufio" + "bytes" "fmt" - "io" - "os" "strconv" "strings" -) -var validOptionalFields = map[string]bool{ - "shared": true, - "master": true, - "propagate_from": true, - "unbindable": true, -} + "github.com/prometheus/procfs/internal/util" +) // A MountInfo is a type that describes the details, options // for each mount, parsed from /proc/self/mountinfo. @@ -35,10 +29,10 @@ var validOptionalFields = map[string]bool{ // is described in the following man page. // http://man7.org/linux/man-pages/man5/proc.5.html type MountInfo struct { - // Unique Id for the mount - MountId int - // The Id of the parent mount - ParentId int + // Unique ID for the mount + MountID int + // The ID of the parent mount + ParentID int // The value of `st_dev` for the files on this FS MajorMinorVer string // The pathname of the directory in the FS that forms @@ -58,18 +52,10 @@ type MountInfo struct { SuperOptions map[string]string } -// Returns part of the mountinfo line, if it exists, else an empty string. 
-func getStringSliceElement(parts []string, idx int, defaultValue string) string { - if idx >= len(parts) { - return defaultValue - } - return parts[idx] -} - // Reads each line of the mountinfo file, and returns a list of formatted MountInfo structs. -func parseMountInfo(r io.Reader) ([]*MountInfo, error) { +func parseMountInfo(info []byte) ([]*MountInfo, error) { mounts := []*MountInfo{} - scanner := bufio.NewScanner(r) + scanner := bufio.NewScanner(bytes.NewReader(info)) for scanner.Scan() { mountString := scanner.Text() parsedMounts, err := parseMountInfoString(mountString) @@ -89,57 +75,75 @@ func parseMountInfo(r io.Reader) ([]*MountInfo, error) { func parseMountInfoString(mountString string) (*MountInfo, error) { var err error - // OptionalFields can be zero, hence these checks to ensure we do not populate the wrong values in the wrong spots - separatorIndex := strings.Index(mountString, "-") - if separatorIndex == -1 { - return nil, fmt.Errorf("no separator found in mountinfo string: %s", mountString) + mountInfo := strings.Split(mountString, " ") + mountInfoLength := len(mountInfo) + if mountInfoLength < 11 { + return nil, fmt.Errorf("couldn't find enough fields in mount string: %s", mountString) } - beforeFields := strings.Fields(mountString[:separatorIndex]) - afterFields := strings.Fields(mountString[separatorIndex+1:]) - if (len(beforeFields) + len(afterFields)) < 7 { - return nil, fmt.Errorf("too few fields") + + if mountInfo[mountInfoLength-4] != "-" { + return nil, fmt.Errorf("couldn't find separator in expected field: %s", mountInfo[mountInfoLength-4]) } mount := &MountInfo{ - MajorMinorVer: getStringSliceElement(beforeFields, 2, ""), - Root: getStringSliceElement(beforeFields, 3, ""), - MountPoint: getStringSliceElement(beforeFields, 4, ""), - Options: mountOptionsParser(getStringSliceElement(beforeFields, 5, "")), + MajorMinorVer: mountInfo[2], + Root: mountInfo[3], + MountPoint: mountInfo[4], + Options: mountOptionsParser(mountInfo[5]), OptionalFields: nil, - FSType: getStringSliceElement(afterFields, 0, ""), - Source: getStringSliceElement(afterFields, 1, ""), - SuperOptions: mountOptionsParser(getStringSliceElement(afterFields, 2, "")), + FSType: mountInfo[mountInfoLength-3], + Source: mountInfo[mountInfoLength-2], + SuperOptions: mountOptionsParser(mountInfo[mountInfoLength-1]), } - mount.MountId, err = strconv.Atoi(getStringSliceElement(beforeFields, 0, "")) + mount.MountID, err = strconv.Atoi(mountInfo[0]) if err != nil { return nil, fmt.Errorf("failed to parse mount ID") } - mount.ParentId, err = strconv.Atoi(getStringSliceElement(beforeFields, 1, "")) + mount.ParentID, err = strconv.Atoi(mountInfo[1]) if err != nil { return nil, fmt.Errorf("failed to parse parent ID") } // Has optional fields, which is a space separated list of values. // Example: shared:2 master:7 - if len(beforeFields) > 6 { - mount.OptionalFields = make(map[string]string) - optionalFields := beforeFields[6:] - for _, field := range optionalFields { - optionSplit := strings.Split(field, ":") - target, value := optionSplit[0], "" - if len(optionSplit) == 2 { - value = optionSplit[1] - } - // Checks if the 'keys' in the optional fields in the mountinfo line are acceptable. - // Allowed 'keys' are shared, master, propagate_from, unbindable. 
- if _, ok := validOptionalFields[target]; ok { - mount.OptionalFields[target] = value - } + if mountInfo[6] != "" { + mount.OptionalFields, err = mountOptionsParseOptionalFields(mountInfo[6 : mountInfoLength-4]) + if err != nil { + return nil, err } } return mount, nil } +// mountOptionsIsValidField checks a string against a valid list of optional fields keys. +func mountOptionsIsValidField(s string) bool { + switch s { + case + "shared", + "master", + "propagate_from", + "unbindable": + return true + } + return false +} + +// mountOptionsParseOptionalFields parses a list of optional fields strings into a double map of strings. +func mountOptionsParseOptionalFields(o []string) (map[string]string, error) { + optionalFields := make(map[string]string) + for _, field := range o { + optionSplit := strings.SplitN(field, ":", 2) + value := "" + if len(optionSplit) == 2 { + value = optionSplit[1] + } + if mountOptionsIsValidField(optionSplit[0]) { + optionalFields[optionSplit[0]] = value + } + } + return optionalFields, nil +} + // Parses the mount options, superblock options. func mountOptionsParser(mountOptions string) map[string]string { opts := make(map[string]string) @@ -159,20 +163,18 @@ func mountOptionsParser(mountOptions string) map[string]string { // Retrieves mountinfo information from `/proc/self/mountinfo`. func GetMounts() ([]*MountInfo, error) { - f, err := os.Open("/proc/self/mountinfo") + data, err := util.ReadFileNoStat("/proc/self/mountinfo") if err != nil { return nil, err } - defer f.Close() - return parseMountInfo(f) + return parseMountInfo(data) } // Retrieves mountinfo information from a processes' `/proc//mountinfo`. func GetProcMounts(pid int) ([]*MountInfo, error) { - f, err := os.Open(fmt.Sprintf("/proc/%d/mountinfo", pid)) + data, err := util.ReadFileNoStat(fmt.Sprintf("/proc/%d/mountinfo", pid)) if err != nil { return nil, err } - defer f.Close() - return parseMountInfo(f) + return parseMountInfo(data) } diff --git a/vendor/github.com/prometheus/procfs/net_conntrackstat.go b/vendor/github.com/prometheus/procfs/net_conntrackstat.go new file mode 100644 index 0000000000..1e27c83d50 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/net_conntrackstat.go @@ -0,0 +1,153 @@ +// Copyright 2020 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package procfs + +import ( + "bufio" + "bytes" + "fmt" + "io" + "strconv" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// A ConntrackStatEntry represents one line from net/stat/nf_conntrack +// and contains netfilter conntrack statistics at one CPU core +type ConntrackStatEntry struct { + Entries uint64 + Found uint64 + Invalid uint64 + Ignore uint64 + Insert uint64 + InsertFailed uint64 + Drop uint64 + EarlyDrop uint64 + SearchRestart uint64 +} + +// Retrieves netfilter's conntrack statistics, split by CPU cores +func (fs FS) ConntrackStat() ([]ConntrackStatEntry, error) { + return readConntrackStat(fs.proc.Path("net", "stat", "nf_conntrack")) +} + +// Parses a slice of ConntrackStatEntries from the given filepath +func readConntrackStat(path string) ([]ConntrackStatEntry, error) { + // This file is small and can be read with one syscall. + b, err := util.ReadFileNoStat(path) + if err != nil { + // Do not wrap this error so the caller can detect os.IsNotExist and + // similar conditions. + return nil, err + } + + stat, err := parseConntrackStat(bytes.NewReader(b)) + if err != nil { + return nil, fmt.Errorf("failed to read conntrack stats from %q: %v", path, err) + } + + return stat, nil +} + +// Reads the contents of a conntrack statistics file and parses a slice of ConntrackStatEntries +func parseConntrackStat(r io.Reader) ([]ConntrackStatEntry, error) { + var entries []ConntrackStatEntry + + scanner := bufio.NewScanner(r) + scanner.Scan() + for scanner.Scan() { + fields := strings.Fields(scanner.Text()) + conntrackEntry, err := parseConntrackStatEntry(fields) + if err != nil { + return nil, err + } + entries = append(entries, *conntrackEntry) + } + + return entries, nil +} + +// Parses a ConntrackStatEntry from given array of fields +func parseConntrackStatEntry(fields []string) (*ConntrackStatEntry, error) { + if len(fields) != 17 { + return nil, fmt.Errorf("invalid conntrackstat entry, missing fields") + } + entry := &ConntrackStatEntry{} + + entries, err := parseConntrackStatField(fields[0]) + if err != nil { + return nil, err + } + entry.Entries = entries + + found, err := parseConntrackStatField(fields[2]) + if err != nil { + return nil, err + } + entry.Found = found + + invalid, err := parseConntrackStatField(fields[4]) + if err != nil { + return nil, err + } + entry.Invalid = invalid + + ignore, err := parseConntrackStatField(fields[5]) + if err != nil { + return nil, err + } + entry.Ignore = ignore + + insert, err := parseConntrackStatField(fields[8]) + if err != nil { + return nil, err + } + entry.Insert = insert + + insertFailed, err := parseConntrackStatField(fields[9]) + if err != nil { + return nil, err + } + entry.InsertFailed = insertFailed + + drop, err := parseConntrackStatField(fields[10]) + if err != nil { + return nil, err + } + entry.Drop = drop + + earlyDrop, err := parseConntrackStatField(fields[11]) + if err != nil { + return nil, err + } + entry.EarlyDrop = earlyDrop + + searchRestart, err := parseConntrackStatField(fields[16]) + if err != nil { + return nil, err + } + entry.SearchRestart = searchRestart + + return entry, nil +} + +// Parses a uint64 from given hex in string +func parseConntrackStatField(field string) (uint64, error) { + val, err := strconv.ParseUint(field, 16, 64) + if err != nil { + return 0, fmt.Errorf("couldn't parse \"%s\" field: %s", field, err) + } + return val, err +} diff --git a/vendor/github.com/prometheus/procfs/net_dev.go b/vendor/github.com/prometheus/procfs/net_dev.go index a0b7a01196..47a710befb 100644 
--- a/vendor/github.com/prometheus/procfs/net_dev.go +++ b/vendor/github.com/prometheus/procfs/net_dev.go @@ -183,7 +183,6 @@ func (netDev NetDev) Total() NetDevLine { names = append(names, ifc.Name) total.RxBytes += ifc.RxBytes total.RxPackets += ifc.RxPackets - total.RxPackets += ifc.RxPackets total.RxErrors += ifc.RxErrors total.RxDropped += ifc.RxDropped total.RxFIFO += ifc.RxFIFO diff --git a/vendor/github.com/prometheus/procfs/net_sockstat.go b/vendor/github.com/prometheus/procfs/net_sockstat.go new file mode 100644 index 0000000000..f91ef55237 --- /dev/null +++ b/vendor/github.com/prometheus/procfs/net_sockstat.go @@ -0,0 +1,163 @@ +// Copyright 2019 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "bytes" + "errors" + "fmt" + "io" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// A NetSockstat contains the output of /proc/net/sockstat{,6} for IPv4 or IPv6, +// respectively. +type NetSockstat struct { + // Used is non-nil for IPv4 sockstat results, but nil for IPv6. + Used *int + Protocols []NetSockstatProtocol +} + +// A NetSockstatProtocol contains statistics about a given socket protocol. +// Pointer fields indicate that the value may or may not be present on any +// given protocol. +type NetSockstatProtocol struct { + Protocol string + InUse int + Orphan *int + TW *int + Alloc *int + Mem *int + Memory *int +} + +// NetSockstat retrieves IPv4 socket statistics. +func (fs FS) NetSockstat() (*NetSockstat, error) { + return readSockstat(fs.proc.Path("net", "sockstat")) +} + +// NetSockstat6 retrieves IPv6 socket statistics. +// +// If IPv6 is disabled on this kernel, the returned error can be checked with +// os.IsNotExist. +func (fs FS) NetSockstat6() (*NetSockstat, error) { + return readSockstat(fs.proc.Path("net", "sockstat6")) +} + +// readSockstat opens and parses a NetSockstat from the input file. +func readSockstat(name string) (*NetSockstat, error) { + // This file is small and can be read with one syscall. + b, err := util.ReadFileNoStat(name) + if err != nil { + // Do not wrap this error so the caller can detect os.IsNotExist and + // similar conditions. + return nil, err + } + + stat, err := parseSockstat(bytes.NewReader(b)) + if err != nil { + return nil, fmt.Errorf("failed to read sockstats from %q: %v", name, err) + } + + return stat, nil +} + +// parseSockstat reads the contents of a sockstat file and parses a NetSockstat. +func parseSockstat(r io.Reader) (*NetSockstat, error) { + var stat NetSockstat + s := bufio.NewScanner(r) + for s.Scan() { + // Expect a minimum of a protocol and one key/value pair. + fields := strings.Split(s.Text(), " ") + if len(fields) < 3 { + return nil, fmt.Errorf("malformed sockstat line: %q", s.Text()) + } + + // The remaining fields are key/value pairs. + kvs, err := parseSockstatKVs(fields[1:]) + if err != nil { + return nil, fmt.Errorf("error parsing sockstat key/value pairs from %q: %v", s.Text(), err) + } + + // The first field is the protocol. 
We must trim its colon suffix. + proto := strings.TrimSuffix(fields[0], ":") + switch proto { + case "sockets": + // Special case: IPv4 has a sockets "used" key/value pair that we + // embed at the top level of the structure. + used := kvs["used"] + stat.Used = &used + default: + // Parse all other lines as individual protocols. + nsp := parseSockstatProtocol(kvs) + nsp.Protocol = proto + stat.Protocols = append(stat.Protocols, nsp) + } + } + + if err := s.Err(); err != nil { + return nil, err + } + + return &stat, nil +} + +// parseSockstatKVs parses a string slice into a map of key/value pairs. +func parseSockstatKVs(kvs []string) (map[string]int, error) { + if len(kvs)%2 != 0 { + return nil, errors.New("odd number of fields in key/value pairs") + } + + // Iterate two values at a time to gather key/value pairs. + out := make(map[string]int, len(kvs)/2) + for i := 0; i < len(kvs); i += 2 { + vp := util.NewValueParser(kvs[i+1]) + out[kvs[i]] = vp.Int() + + if err := vp.Err(); err != nil { + return nil, err + } + } + + return out, nil +} + +// parseSockstatProtocol parses a NetSockstatProtocol from the input kvs map. +func parseSockstatProtocol(kvs map[string]int) NetSockstatProtocol { + var nsp NetSockstatProtocol + for k, v := range kvs { + // Capture the range variable to ensure we get unique pointers for + // each of the optional fields. + v := v + switch k { + case "inuse": + nsp.InUse = v + case "orphan": + nsp.Orphan = &v + case "tw": + nsp.TW = &v + case "alloc": + nsp.Alloc = &v + case "mem": + nsp.Mem = &v + case "memory": + nsp.Memory = &v + } + } + + return nsp +} diff --git a/vendor/github.com/prometheus/procfs/net_softnet.go b/vendor/github.com/prometheus/procfs/net_softnet.go index 6fcad20afc..db5debdf4a 100644 --- a/vendor/github.com/prometheus/procfs/net_softnet.go +++ b/vendor/github.com/prometheus/procfs/net_softnet.go @@ -14,78 +14,89 @@ package procfs import ( + "bufio" + "bytes" "fmt" - "io/ioutil" + "io" "strconv" "strings" + + "github.com/prometheus/procfs/internal/util" ) // For the proc file format details, -// see https://elixir.bootlin.com/linux/v4.17/source/net/core/net-procfs.c#L162 +// See: +// * Linux 2.6.23 https://elixir.bootlin.com/linux/v2.6.23/source/net/core/dev.c#L2343 +// * Linux 4.17 https://elixir.bootlin.com/linux/v4.17/source/net/core/net-procfs.c#L162 // and https://elixir.bootlin.com/linux/v4.17/source/include/linux/netdevice.h#L2810. -// SoftnetEntry contains a single row of data from /proc/net/softnet_stat -type SoftnetEntry struct { +// SoftnetStat contains a single row of data from /proc/net/softnet_stat +type SoftnetStat struct { // Number of processed packets - Processed uint + Processed uint32 // Number of dropped packets - Dropped uint + Dropped uint32 // Number of times processing packets ran out of quota - TimeSqueezed uint + TimeSqueezed uint32 } -// GatherSoftnetStats reads /proc/net/softnet_stat, parse the relevant columns, -// and then return a slice of SoftnetEntry's. -func (fs FS) GatherSoftnetStats() ([]SoftnetEntry, error) { - data, err := ioutil.ReadFile(fs.proc.Path("net/softnet_stat")) +var softNetProcFile = "net/softnet_stat" + +// NetSoftnetStat reads data from /proc/net/softnet_stat. 
+func (fs FS) NetSoftnetStat() ([]SoftnetStat, error) { + b, err := util.ReadFileNoStat(fs.proc.Path(softNetProcFile)) + if err != nil { + return nil, err + } + + entries, err := parseSoftnet(bytes.NewReader(b)) if err != nil { - return nil, fmt.Errorf("error reading softnet %s: %s", fs.proc.Path("net/softnet_stat"), err) + return nil, fmt.Errorf("failed to parse /proc/net/softnet_stat: %v", err) } - return parseSoftnetEntries(data) + return entries, nil } -func parseSoftnetEntries(data []byte) ([]SoftnetEntry, error) { - lines := strings.Split(string(data), "\n") - entries := make([]SoftnetEntry, 0) - var err error - const ( - expectedColumns = 11 - ) - for _, line := range lines { - columns := strings.Fields(line) +func parseSoftnet(r io.Reader) ([]SoftnetStat, error) { + const minColumns = 9 + + s := bufio.NewScanner(r) + + var stats []SoftnetStat + for s.Scan() { + columns := strings.Fields(s.Text()) width := len(columns) - if width == 0 { - continue - } - if width != expectedColumns { - return []SoftnetEntry{}, fmt.Errorf("%d columns were detected, but %d were expected", width, expectedColumns) + + if width < minColumns { + return nil, fmt.Errorf("%d columns were detected, but at least %d were expected", width, minColumns) } - var entry SoftnetEntry - if entry, err = parseSoftnetEntry(columns); err != nil { - return []SoftnetEntry{}, err + + // We only parse the first three columns at the moment. + us, err := parseHexUint32s(columns[0:3]) + if err != nil { + return nil, err } - entries = append(entries, entry) + + stats = append(stats, SoftnetStat{ + Processed: us[0], + Dropped: us[1], + TimeSqueezed: us[2], + }) } - return entries, nil + return stats, nil } -func parseSoftnetEntry(columns []string) (SoftnetEntry, error) { - var err error - var processed, dropped, timeSqueezed uint64 - if processed, err = strconv.ParseUint(columns[0], 16, 32); err != nil { - return SoftnetEntry{}, fmt.Errorf("Unable to parse column 0: %s", err) - } - if dropped, err = strconv.ParseUint(columns[1], 16, 32); err != nil { - return SoftnetEntry{}, fmt.Errorf("Unable to parse column 1: %s", err) - } - if timeSqueezed, err = strconv.ParseUint(columns[2], 16, 32); err != nil { - return SoftnetEntry{}, fmt.Errorf("Unable to parse column 2: %s", err) +func parseHexUint32s(ss []string) ([]uint32, error) { + us := make([]uint32, 0, len(ss)) + for _, s := range ss { + u, err := strconv.ParseUint(s, 16, 32) + if err != nil { + return nil, err + } + + us = append(us, uint32(u)) } - return SoftnetEntry{ - Processed: uint(processed), - Dropped: uint(dropped), - TimeSqueezed: uint(timeSqueezed), - }, nil + + return us, nil } diff --git a/vendor/github.com/prometheus/procfs/net_udp.go b/vendor/github.com/prometheus/procfs/net_udp.go new file mode 100644 index 0000000000..d017e3f18d --- /dev/null +++ b/vendor/github.com/prometheus/procfs/net_udp.go @@ -0,0 +1,229 @@ +// Copyright 2020 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package procfs + +import ( + "bufio" + "encoding/hex" + "fmt" + "io" + "net" + "os" + "strconv" + "strings" +) + +const ( + // readLimit is used by io.LimitReader while reading the content of the + // /proc/net/udp{,6} files. The number of lines inside such a file is dynamic + // as each line represents a single used socket. + // In theory, the number of available sockets is 65535 (2^16 - 1) per IP. + // With e.g. 150 Byte per line and the maximum number of 65535, + // the reader needs to handle 150 Byte * 65535 =~ 10 MB for a single IP. + readLimit = 4294967296 // Byte -> 4 GiB +) + +type ( + // NetUDP represents the contents of /proc/net/udp{,6} file without the header. + NetUDP []*netUDPLine + + // NetUDPSummary provides already computed values like the total queue lengths or + // the total number of used sockets. In contrast to NetUDP it does not collect + // the parsed lines into a slice. + NetUDPSummary struct { + // TxQueueLength shows the total queue length of all parsed tx_queue lengths. + TxQueueLength uint64 + // RxQueueLength shows the total queue length of all parsed rx_queue lengths. + RxQueueLength uint64 + // UsedSockets shows the total number of parsed lines representing the + // number of used sockets. + UsedSockets uint64 + } + + // netUDPLine represents the fields parsed from a single line + // in /proc/net/udp{,6}. Fields which are not used by UDP are skipped. + // For the proc file format details, see https://linux.die.net/man/5/proc. + netUDPLine struct { + Sl uint64 + LocalAddr net.IP + LocalPort uint64 + RemAddr net.IP + RemPort uint64 + St uint64 + TxQueue uint64 + RxQueue uint64 + UID uint64 + } +) + +// NetUDP returns the IPv4 kernel/networking statistics for UDP datagrams +// read from /proc/net/udp. +func (fs FS) NetUDP() (NetUDP, error) { + return newNetUDP(fs.proc.Path("net/udp")) +} + +// NetUDP6 returns the IPv6 kernel/networking statistics for UDP datagrams +// read from /proc/net/udp6. +func (fs FS) NetUDP6() (NetUDP, error) { + return newNetUDP(fs.proc.Path("net/udp6")) +} + +// NetUDPSummary returns already computed statistics like the total queue lengths +// for UDP datagrams read from /proc/net/udp. +func (fs FS) NetUDPSummary() (*NetUDPSummary, error) { + return newNetUDPSummary(fs.proc.Path("net/udp")) +} + +// NetUDP6Summary returns already computed statistics like the total queue lengths +// for UDP datagrams read from /proc/net/udp6. +func (fs FS) NetUDP6Summary() (*NetUDPSummary, error) { + return newNetUDPSummary(fs.proc.Path("net/udp6")) +} + +// newNetUDP creates a new NetUDP{,6} from the contents of the given file. +func newNetUDP(file string) (NetUDP, error) { + f, err := os.Open(file) + if err != nil { + return nil, err + } + defer f.Close() + + netUDP := NetUDP{} + + lr := io.LimitReader(f, readLimit) + s := bufio.NewScanner(lr) + s.Scan() // skip first line with headers + for s.Scan() { + fields := strings.Fields(s.Text()) + line, err := parseNetUDPLine(fields) + if err != nil { + return nil, err + } + netUDP = append(netUDP, line) + } + if err := s.Err(); err != nil { + return nil, err + } + return netUDP, nil +} + +// newNetUDPSummary creates a new NetUDP{,6} from the contents of the given file. 
+func newNetUDPSummary(file string) (*NetUDPSummary, error) { + f, err := os.Open(file) + if err != nil { + return nil, err + } + defer f.Close() + + netUDPSummary := &NetUDPSummary{} + + lr := io.LimitReader(f, readLimit) + s := bufio.NewScanner(lr) + s.Scan() // skip first line with headers + for s.Scan() { + fields := strings.Fields(s.Text()) + line, err := parseNetUDPLine(fields) + if err != nil { + return nil, err + } + netUDPSummary.TxQueueLength += line.TxQueue + netUDPSummary.RxQueueLength += line.RxQueue + netUDPSummary.UsedSockets++ + } + if err := s.Err(); err != nil { + return nil, err + } + return netUDPSummary, nil +} + +// parseNetUDPLine parses a single line, represented by a list of fields. +func parseNetUDPLine(fields []string) (*netUDPLine, error) { + line := &netUDPLine{} + if len(fields) < 8 { + return nil, fmt.Errorf( + "cannot parse net udp socket line as it has less then 8 columns: %s", + strings.Join(fields, " "), + ) + } + var err error // parse error + + // sl + s := strings.Split(fields[0], ":") + if len(s) != 2 { + return nil, fmt.Errorf( + "cannot parse sl field in udp socket line: %s", fields[0]) + } + + if line.Sl, err = strconv.ParseUint(s[0], 0, 64); err != nil { + return nil, fmt.Errorf("cannot parse sl value in udp socket line: %s", err) + } + // local_address + l := strings.Split(fields[1], ":") + if len(l) != 2 { + return nil, fmt.Errorf( + "cannot parse local_address field in udp socket line: %s", fields[1]) + } + if line.LocalAddr, err = hex.DecodeString(l[0]); err != nil { + return nil, fmt.Errorf( + "cannot parse local_address value in udp socket line: %s", err) + } + if line.LocalPort, err = strconv.ParseUint(l[1], 16, 64); err != nil { + return nil, fmt.Errorf( + "cannot parse local_address port value in udp socket line: %s", err) + } + + // remote_address + r := strings.Split(fields[2], ":") + if len(r) != 2 { + return nil, fmt.Errorf( + "cannot parse rem_address field in udp socket line: %s", fields[1]) + } + if line.RemAddr, err = hex.DecodeString(r[0]); err != nil { + return nil, fmt.Errorf( + "cannot parse rem_address value in udp socket line: %s", err) + } + if line.RemPort, err = strconv.ParseUint(r[1], 16, 64); err != nil { + return nil, fmt.Errorf( + "cannot parse rem_address port value in udp socket line: %s", err) + } + + // st + if line.St, err = strconv.ParseUint(fields[3], 16, 64); err != nil { + return nil, fmt.Errorf( + "cannot parse st value in udp socket line: %s", err) + } + + // tx_queue and rx_queue + q := strings.Split(fields[4], ":") + if len(q) != 2 { + return nil, fmt.Errorf( + "cannot parse tx/rx queues in udp socket line as it has a missing colon: %s", + fields[4], + ) + } + if line.TxQueue, err = strconv.ParseUint(q[0], 16, 64); err != nil { + return nil, fmt.Errorf("cannot parse tx_queue value in udp socket line: %s", err) + } + if line.RxQueue, err = strconv.ParseUint(q[1], 16, 64); err != nil { + return nil, fmt.Errorf("cannot parse rx_queue value in udp socket line: %s", err) + } + + // uid + if line.UID, err = strconv.ParseUint(fields[7], 0, 64); err != nil { + return nil, fmt.Errorf( + "cannot parse uid value in udp socket line: %s", err) + } + + return line, nil +} diff --git a/vendor/github.com/prometheus/procfs/net_unix.go b/vendor/github.com/prometheus/procfs/net_unix.go index 240340a83a..c55b4b18e4 100644 --- a/vendor/github.com/prometheus/procfs/net_unix.go +++ b/vendor/github.com/prometheus/procfs/net_unix.go @@ -15,7 +15,6 @@ package procfs import ( "bufio" - "errors" "fmt" "io" "os" @@ -27,25 +26,15 @@ 
import ( // see https://elixir.bootlin.com/linux/v4.17/source/net/unix/af_unix.c#L2815 // and https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/net.h#L48. -const ( - netUnixKernelPtrIdx = iota - netUnixRefCountIdx - _ - netUnixFlagsIdx - netUnixTypeIdx - netUnixStateIdx - netUnixInodeIdx - - // Inode and Path are optional. - netUnixStaticFieldsCnt = 6 -) - +// Constants for the various /proc/net/unix enumerations. +// TODO: match against x/sys/unix or similar? const ( netUnixTypeStream = 1 netUnixTypeDgram = 2 netUnixTypeSeqpacket = 5 - netUnixFlagListen = 1 << 16 + netUnixFlagDefault = 0 + netUnixFlagListen = 1 << 16 netUnixStateUnconnected = 1 netUnixStateConnecting = 2 @@ -53,129 +42,127 @@ const ( netUnixStateDisconnected = 4 ) -var errInvalidKernelPtrFmt = errors.New("Invalid Num(the kernel table slot number) format") +// NetUNIXType is the type of the type field. +type NetUNIXType uint64 -// NetUnixType is the type of the type field. -type NetUnixType uint64 +// NetUNIXFlags is the type of the flags field. +type NetUNIXFlags uint64 -// NetUnixFlags is the type of the flags field. -type NetUnixFlags uint64 +// NetUNIXState is the type of the state field. +type NetUNIXState uint64 -// NetUnixState is the type of the state field. -type NetUnixState uint64 - -// NetUnixLine represents a line of /proc/net/unix. -type NetUnixLine struct { +// NetUNIXLine represents a line of /proc/net/unix. +type NetUNIXLine struct { KernelPtr string RefCount uint64 Protocol uint64 - Flags NetUnixFlags - Type NetUnixType - State NetUnixState + Flags NetUNIXFlags + Type NetUNIXType + State NetUNIXState Inode uint64 Path string } -// NetUnix holds the data read from /proc/net/unix. -type NetUnix struct { - Rows []*NetUnixLine +// NetUNIX holds the data read from /proc/net/unix. +type NetUNIX struct { + Rows []*NetUNIXLine } -// NewNetUnix returns data read from /proc/net/unix. -func NewNetUnix() (*NetUnix, error) { - fs, err := NewFS(DefaultMountPoint) - if err != nil { - return nil, err - } - - return fs.NewNetUnix() +// NetUNIX returns data read from /proc/net/unix. +func (fs FS) NetUNIX() (*NetUNIX, error) { + return readNetUNIX(fs.proc.Path("net/unix")) } -// NewNetUnix returns data read from /proc/net/unix. -func (fs FS) NewNetUnix() (*NetUnix, error) { - return NewNetUnixByPath(fs.proc.Path("net/unix")) -} - -// NewNetUnixByPath returns data read from /proc/net/unix by file path. -// It might returns an error with partial parsed data, if an error occur after some data parsed. -func NewNetUnixByPath(path string) (*NetUnix, error) { - f, err := os.Open(path) +// readNetUNIX reads data in /proc/net/unix format from the specified file. +func readNetUNIX(file string) (*NetUNIX, error) { + // This file could be quite large and a streaming read is desirable versus + // reading the entire contents at once. + f, err := os.Open(file) if err != nil { return nil, err } defer f.Close() - return NewNetUnixByReader(f) + + return parseNetUNIX(f) } -// NewNetUnixByReader returns data read from /proc/net/unix by a reader. -// It might returns an error with partial parsed data, if an error occur after some data parsed. -func NewNetUnixByReader(reader io.Reader) (*NetUnix, error) { - nu := &NetUnix{ - Rows: make([]*NetUnixLine, 0, 32), - } - scanner := bufio.NewScanner(reader) - // Omit the header line. - scanner.Scan() - header := scanner.Text() +// parseNetUNIX creates a NetUnix structure from the incoming stream. 
+func parseNetUNIX(r io.Reader) (*NetUNIX, error) { + // Begin scanning by checking for the existence of Inode. + s := bufio.NewScanner(r) + s.Scan() + // From the man page of proc(5), it does not contain an Inode field, - // but in actually it exists. - // This code works for both cases. - hasInode := strings.Contains(header, "Inode") + // but in actually it exists. This code works for both cases. + hasInode := strings.Contains(s.Text(), "Inode") - minFieldsCnt := netUnixStaticFieldsCnt + // Expect a minimum number of fields, but Inode and Path are optional: + // Num RefCount Protocol Flags Type St Inode Path + minFields := 6 if hasInode { - minFieldsCnt++ + minFields++ } - for scanner.Scan() { - line := scanner.Text() - item, err := nu.parseLine(line, hasInode, minFieldsCnt) + + var nu NetUNIX + for s.Scan() { + line := s.Text() + item, err := nu.parseLine(line, hasInode, minFields) if err != nil { - return nu, err + return nil, fmt.Errorf("failed to parse /proc/net/unix data %q: %v", line, err) } + nu.Rows = append(nu.Rows, item) } - return nu, scanner.Err() + if err := s.Err(); err != nil { + return nil, fmt.Errorf("failed to scan /proc/net/unix data: %v", err) + } + + return &nu, nil } -func (u *NetUnix) parseLine(line string, hasInode bool, minFieldsCnt int) (*NetUnixLine, error) { +func (u *NetUNIX) parseLine(line string, hasInode bool, min int) (*NetUNIXLine, error) { fields := strings.Fields(line) - fieldsLen := len(fields) - if fieldsLen < minFieldsCnt { - return nil, fmt.Errorf( - "Parse Unix domain failed: expect at least %d fields but got %d", - minFieldsCnt, fieldsLen) - } - kernelPtr, err := u.parseKernelPtr(fields[netUnixKernelPtrIdx]) - if err != nil { - return nil, fmt.Errorf("Parse Unix domain num(%s) failed: %s", fields[netUnixKernelPtrIdx], err) + + l := len(fields) + if l < min { + return nil, fmt.Errorf("expected at least %d fields but got %d", min, l) } - users, err := u.parseUsers(fields[netUnixRefCountIdx]) + + // Field offsets are as follows: + // Num RefCount Protocol Flags Type St Inode Path + + kernelPtr := strings.TrimSuffix(fields[0], ":") + + users, err := u.parseUsers(fields[1]) if err != nil { - return nil, fmt.Errorf("Parse Unix domain ref count(%s) failed: %s", fields[netUnixRefCountIdx], err) + return nil, fmt.Errorf("failed to parse ref count(%s): %v", fields[1], err) } - flags, err := u.parseFlags(fields[netUnixFlagsIdx]) + + flags, err := u.parseFlags(fields[3]) if err != nil { - return nil, fmt.Errorf("Parse Unix domain flags(%s) failed: %s", fields[netUnixFlagsIdx], err) + return nil, fmt.Errorf("failed to parse flags(%s): %v", fields[3], err) } - typ, err := u.parseType(fields[netUnixTypeIdx]) + + typ, err := u.parseType(fields[4]) if err != nil { - return nil, fmt.Errorf("Parse Unix domain type(%s) failed: %s", fields[netUnixTypeIdx], err) + return nil, fmt.Errorf("failed to parse type(%s): %v", fields[4], err) } - state, err := u.parseState(fields[netUnixStateIdx]) + + state, err := u.parseState(fields[5]) if err != nil { - return nil, fmt.Errorf("Parse Unix domain state(%s) failed: %s", fields[netUnixStateIdx], err) + return nil, fmt.Errorf("failed to parse state(%s): %v", fields[5], err) } + var inode uint64 if hasInode { - inodeStr := fields[netUnixInodeIdx] - inode, err = u.parseInode(inodeStr) + inode, err = u.parseInode(fields[6]) if err != nil { - return nil, fmt.Errorf("Parse Unix domain inode(%s) failed: %s", inodeStr, err) + return nil, fmt.Errorf("failed to parse inode(%s): %v", fields[6], err) } } - nuLine := &NetUnixLine{ + n := 
&NetUNIXLine{ KernelPtr: kernelPtr, RefCount: users, Type: typ, @@ -185,61 +172,56 @@ func (u *NetUnix) parseLine(line string, hasInode bool, minFieldsCnt int) (*NetU } // Path field is optional. - if fieldsLen > minFieldsCnt { - pathIdx := netUnixInodeIdx + 1 + if l > min { + // Path occurs at either index 6 or 7 depending on whether inode is + // already present. + pathIdx := 7 if !hasInode { pathIdx-- } - nuLine.Path = fields[pathIdx] - } - - return nuLine, nil -} -func (u NetUnix) parseKernelPtr(str string) (string, error) { - if !strings.HasSuffix(str, ":") { - return "", errInvalidKernelPtrFmt + n.Path = fields[pathIdx] } - return str[:len(str)-1], nil -} -func (u NetUnix) parseUsers(hexStr string) (uint64, error) { - return strconv.ParseUint(hexStr, 16, 32) + return n, nil } -func (u NetUnix) parseProtocol(hexStr string) (uint64, error) { - return strconv.ParseUint(hexStr, 16, 32) +func (u NetUNIX) parseUsers(s string) (uint64, error) { + return strconv.ParseUint(s, 16, 32) } -func (u NetUnix) parseType(hexStr string) (NetUnixType, error) { - typ, err := strconv.ParseUint(hexStr, 16, 16) +func (u NetUNIX) parseType(s string) (NetUNIXType, error) { + typ, err := strconv.ParseUint(s, 16, 16) if err != nil { return 0, err } - return NetUnixType(typ), nil + + return NetUNIXType(typ), nil } -func (u NetUnix) parseFlags(hexStr string) (NetUnixFlags, error) { - flags, err := strconv.ParseUint(hexStr, 16, 32) +func (u NetUNIX) parseFlags(s string) (NetUNIXFlags, error) { + flags, err := strconv.ParseUint(s, 16, 32) if err != nil { return 0, err } - return NetUnixFlags(flags), nil + + return NetUNIXFlags(flags), nil } -func (u NetUnix) parseState(hexStr string) (NetUnixState, error) { - st, err := strconv.ParseInt(hexStr, 16, 8) +func (u NetUNIX) parseState(s string) (NetUNIXState, error) { + st, err := strconv.ParseInt(s, 16, 8) if err != nil { return 0, err } - return NetUnixState(st), nil + + return NetUNIXState(st), nil } -func (u NetUnix) parseInode(inodeStr string) (uint64, error) { - return strconv.ParseUint(inodeStr, 10, 64) +func (u NetUNIX) parseInode(s string) (uint64, error) { + return strconv.ParseUint(s, 10, 64) } -func (t NetUnixType) String() string { +func (t NetUNIXType) String() string { switch t { case netUnixTypeStream: return "stream" @@ -251,7 +233,7 @@ func (t NetUnixType) String() string { return "unknown" } -func (f NetUnixFlags) String() string { +func (f NetUNIXFlags) String() string { switch f { case netUnixFlagListen: return "listen" @@ -260,7 +242,7 @@ func (f NetUnixFlags) String() string { } } -func (s NetUnixState) String() string { +func (s NetUNIXState) String() string { switch s { case netUnixStateUnconnected: return "unconnected" diff --git a/vendor/github.com/prometheus/procfs/proc.go b/vendor/github.com/prometheus/procfs/proc.go index b7c79cf77b..330e472c70 100644 --- a/vendor/github.com/prometheus/procfs/proc.go +++ b/vendor/github.com/prometheus/procfs/proc.go @@ -22,6 +22,7 @@ import ( "strings" "github.com/prometheus/procfs/internal/fs" + "github.com/prometheus/procfs/internal/util" ) // Proc provides information about a running process. @@ -121,13 +122,7 @@ func (fs FS) AllProcs() (Procs, error) { // CmdLine returns the command line of a process. 
func (p Proc) CmdLine() ([]string, error) { - f, err := os.Open(p.path("cmdline")) - if err != nil { - return nil, err - } - defer f.Close() - - data, err := ioutil.ReadAll(f) + data, err := util.ReadFileNoStat(p.path("cmdline")) if err != nil { return nil, err } @@ -141,13 +136,7 @@ func (p Proc) CmdLine() ([]string, error) { // Comm returns the command name of a process. func (p Proc) Comm() (string, error) { - f, err := os.Open(p.path("comm")) - if err != nil { - return "", err - } - defer f.Close() - - data, err := ioutil.ReadAll(f) + data, err := util.ReadFileNoStat(p.path("comm")) if err != nil { return "", err } @@ -252,13 +241,11 @@ func (p Proc) MountStats() ([]*Mount, error) { // It supplies information missing in `/proc/self/mounts` and // fixes various other problems with that file too. func (p Proc) MountInfo() ([]*MountInfo, error) { - f, err := os.Open(p.path("mountinfo")) + data, err := util.ReadFileNoStat(p.path("mountinfo")) if err != nil { return nil, err } - defer f.Close() - - return parseMountInfo(f) + return parseMountInfo(data) } func (p Proc) fileDescriptors() ([]string, error) { diff --git a/vendor/github.com/prometheus/procfs/proc_environ.go b/vendor/github.com/prometheus/procfs/proc_environ.go index 7172bb586e..6134b3580c 100644 --- a/vendor/github.com/prometheus/procfs/proc_environ.go +++ b/vendor/github.com/prometheus/procfs/proc_environ.go @@ -14,22 +14,16 @@ package procfs import ( - "io/ioutil" - "os" "strings" + + "github.com/prometheus/procfs/internal/util" ) // Environ reads process environments from /proc//environ func (p Proc) Environ() ([]string, error) { environments := make([]string, 0) - f, err := os.Open(p.path("environ")) - if err != nil { - return environments, err - } - defer f.Close() - - data, err := ioutil.ReadAll(f) + data, err := util.ReadFileNoStat(p.path("environ")) if err != nil { return environments, err } diff --git a/vendor/github.com/prometheus/procfs/proc_fdinfo.go b/vendor/github.com/prometheus/procfs/proc_fdinfo.go index 83b67d1bde..0c9c402850 100644 --- a/vendor/github.com/prometheus/procfs/proc_fdinfo.go +++ b/vendor/github.com/prometheus/procfs/proc_fdinfo.go @@ -15,19 +15,20 @@ package procfs import ( "bufio" - "fmt" - "io/ioutil" - "os" + "bytes" + "errors" "regexp" - "strings" + + "github.com/prometheus/procfs/internal/util" ) // Regexp variables var ( - rPos = regexp.MustCompile(`^pos:\s+(\d+)$`) - rFlags = regexp.MustCompile(`^flags:\s+(\d+)$`) - rMntID = regexp.MustCompile(`^mnt_id:\s+(\d+)$`) - rInotify = regexp.MustCompile(`^inotify`) + rPos = regexp.MustCompile(`^pos:\s+(\d+)$`) + rFlags = regexp.MustCompile(`^flags:\s+(\d+)$`) + rMntID = regexp.MustCompile(`^mnt_id:\s+(\d+)$`) + rInotify = regexp.MustCompile(`^inotify`) + rInotifyParts = regexp.MustCompile(`^inotify\s+wd:([0-9a-f]+)\s+ino:([0-9a-f]+)\s+sdev:([0-9a-f]+)(?:\s+mask:([0-9a-f]+))?`) ) // ProcFDInfo contains represents file descriptor information. @@ -46,21 +47,15 @@ type ProcFDInfo struct { // FDInfo constructor. On kernels older than 3.8, InotifyInfos will always be empty. 
func (p Proc) FDInfo(fd string) (*ProcFDInfo, error) { - f, err := os.Open(p.path("fdinfo", fd)) + data, err := util.ReadFileNoStat(p.path("fdinfo", fd)) if err != nil { return nil, err } - defer f.Close() - - fdinfo, err := ioutil.ReadAll(f) - if err != nil { - return nil, fmt.Errorf("could not read %s: %s", f.Name(), err) - } var text, pos, flags, mntid string var inotify []InotifyInfo - scanner := bufio.NewScanner(strings.NewReader(string(fdinfo))) + scanner := bufio.NewScanner(bytes.NewReader(data)) for scanner.Scan() { text = scanner.Text() if rPos.MatchString(text) { @@ -103,15 +98,21 @@ type InotifyInfo struct { // InotifyInfo constructor. Only available on kernel 3.8+. func parseInotifyInfo(line string) (*InotifyInfo, error) { - r := regexp.MustCompile(`^inotify\s+wd:([0-9a-f]+)\s+ino:([0-9a-f]+)\s+sdev:([0-9a-f]+)\s+mask:([0-9a-f]+)`) - m := r.FindStringSubmatch(line) - i := &InotifyInfo{ - WD: m[1], - Ino: m[2], - Sdev: m[3], - Mask: m[4], + m := rInotifyParts.FindStringSubmatch(line) + if len(m) >= 4 { + var mask string + if len(m) == 5 { + mask = m[4] + } + i := &InotifyInfo{ + WD: m[1], + Ino: m[2], + Sdev: m[3], + Mask: mask, + } + return i, nil } - return i, nil + return nil, errors.New("invalid inode entry: " + line) } // ProcFDInfos represents a list of ProcFDInfo structs. diff --git a/vendor/github.com/prometheus/procfs/proc_io.go b/vendor/github.com/prometheus/procfs/proc_io.go index 0ff89b1cef..776f349717 100644 --- a/vendor/github.com/prometheus/procfs/proc_io.go +++ b/vendor/github.com/prometheus/procfs/proc_io.go @@ -15,8 +15,8 @@ package procfs import ( "fmt" - "io/ioutil" - "os" + + "github.com/prometheus/procfs/internal/util" ) // ProcIO models the content of /proc//io. @@ -43,13 +43,7 @@ type ProcIO struct { func (p Proc) IO() (ProcIO, error) { pio := ProcIO{} - f, err := os.Open(p.path("io")) - if err != nil { - return pio, err - } - defer f.Close() - - data, err := ioutil.ReadAll(f) + data, err := util.ReadFileNoStat(p.path("io")) if err != nil { return pio, err } diff --git a/vendor/github.com/prometheus/procfs/proc_maps.go b/vendor/github.com/prometheus/procfs/proc_maps.go new file mode 100644 index 0000000000..28d5c6eb1d --- /dev/null +++ b/vendor/github.com/prometheus/procfs/proc_maps.go @@ -0,0 +1,208 @@ +// Copyright 2019 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// +build !windows + +package procfs + +import ( + "bufio" + "fmt" + "os" + "strconv" + "strings" + + "golang.org/x/sys/unix" +) + +type ProcMapPermissions struct { + // mapping has the [R]ead flag set + Read bool + // mapping has the [W]rite flag set + Write bool + // mapping has the [X]ecutable flag set + Execute bool + // mapping has the [S]hared flag set + Shared bool + // mapping is marked as [P]rivate (copy on write) + Private bool +} + +// ProcMap contains the process memory-mappings of the process, +// read from /proc/[pid]/maps +type ProcMap struct { + // The start address of current mapping. 
+ StartAddr uintptr + // The end address of the current mapping + EndAddr uintptr + // The permissions for this mapping + Perms *ProcMapPermissions + // The current offset into the file/fd (e.g., shared libs) + Offset int64 + // Device owner of this mapping (major:minor) in Mkdev format. + Dev uint64 + // The inode of the device above + Inode uint64 + // The file or psuedofile (or empty==anonymous) + Pathname string +} + +// parseDevice parses the device token of a line and converts it to a dev_t +// (mkdev) like structure. +func parseDevice(s string) (uint64, error) { + toks := strings.Split(s, ":") + if len(toks) < 2 { + return 0, fmt.Errorf("unexpected number of fields") + } + + major, err := strconv.ParseUint(toks[0], 16, 0) + if err != nil { + return 0, err + } + + minor, err := strconv.ParseUint(toks[1], 16, 0) + if err != nil { + return 0, err + } + + return unix.Mkdev(uint32(major), uint32(minor)), nil +} + +// parseAddress just converts a hex-string to a uintptr +func parseAddress(s string) (uintptr, error) { + a, err := strconv.ParseUint(s, 16, 0) + if err != nil { + return 0, err + } + + return uintptr(a), nil +} + +// parseAddresses parses the start-end address +func parseAddresses(s string) (uintptr, uintptr, error) { + toks := strings.Split(s, "-") + if len(toks) < 2 { + return 0, 0, fmt.Errorf("invalid address") + } + + saddr, err := parseAddress(toks[0]) + if err != nil { + return 0, 0, err + } + + eaddr, err := parseAddress(toks[1]) + if err != nil { + return 0, 0, err + } + + return saddr, eaddr, nil +} + +// parsePermissions parses a token and returns any that are set. +func parsePermissions(s string) (*ProcMapPermissions, error) { + if len(s) < 4 { + return nil, fmt.Errorf("invalid permissions token") + } + + perms := ProcMapPermissions{} + for _, ch := range s { + switch ch { + case 'r': + perms.Read = true + case 'w': + perms.Write = true + case 'x': + perms.Execute = true + case 'p': + perms.Private = true + case 's': + perms.Shared = true + } + } + + return &perms, nil +} + +// parseProcMap will attempt to parse a single line within a proc/[pid]/maps +// buffer. +func parseProcMap(text string) (*ProcMap, error) { + fields := strings.Fields(text) + if len(fields) < 5 { + return nil, fmt.Errorf("truncated procmap entry") + } + + saddr, eaddr, err := parseAddresses(fields[0]) + if err != nil { + return nil, err + } + + perms, err := parsePermissions(fields[1]) + if err != nil { + return nil, err + } + + offset, err := strconv.ParseInt(fields[2], 16, 0) + if err != nil { + return nil, err + } + + device, err := parseDevice(fields[3]) + if err != nil { + return nil, err + } + + inode, err := strconv.ParseUint(fields[4], 10, 0) + if err != nil { + return nil, err + } + + pathname := "" + + if len(fields) >= 5 { + pathname = strings.Join(fields[5:], " ") + } + + return &ProcMap{ + StartAddr: saddr, + EndAddr: eaddr, + Perms: perms, + Offset: offset, + Dev: device, + Inode: inode, + Pathname: pathname, + }, nil +} + +// ProcMaps reads from /proc/[pid]/maps to get the memory-mappings of the +// process. 
+func (p Proc) ProcMaps() ([]*ProcMap, error) { + file, err := os.Open(p.path("maps")) + if err != nil { + return nil, err + } + defer file.Close() + + maps := []*ProcMap{} + scan := bufio.NewScanner(file) + + for scan.Scan() { + m, err := parseProcMap(scan.Text()) + if err != nil { + return nil, err + } + + maps = append(maps, m) + } + + return maps, nil +} diff --git a/vendor/github.com/prometheus/procfs/proc_psi.go b/vendor/github.com/prometheus/procfs/proc_psi.go index 46fe266263..0d7bee54ca 100644 --- a/vendor/github.com/prometheus/procfs/proc_psi.go +++ b/vendor/github.com/prometheus/procfs/proc_psi.go @@ -24,11 +24,13 @@ package procfs // > full avg10=0.00 avg60=0.13 avg300=0.96 total=8183134 import ( + "bufio" + "bytes" "fmt" "io" - "io/ioutil" - "os" "strings" + + "github.com/prometheus/procfs/internal/util" ) const lineFormat = "avg10=%f avg60=%f avg300=%f total=%d" @@ -55,24 +57,21 @@ type PSIStats struct { // resource from /proc/pressure/. At time of writing this can be // either "cpu", "memory" or "io". func (fs FS) PSIStatsForResource(resource string) (PSIStats, error) { - file, err := os.Open(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource))) + data, err := util.ReadFileNoStat(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource))) if err != nil { return PSIStats{}, fmt.Errorf("psi_stats: unavailable for %s", resource) } - defer file.Close() - return parsePSIStats(resource, file) + return parsePSIStats(resource, bytes.NewReader(data)) } // parsePSIStats parses the specified file for pressure stall information -func parsePSIStats(resource string, file io.Reader) (PSIStats, error) { +func parsePSIStats(resource string, r io.Reader) (PSIStats, error) { psiStats := PSIStats{} - stats, err := ioutil.ReadAll(file) - if err != nil { - return psiStats, fmt.Errorf("psi_stats: unable to read data for %s", resource) - } - for _, l := range strings.Split(string(stats), "\n") { + scanner := bufio.NewScanner(r) + for scanner.Scan() { + l := scanner.Text() prefix := strings.Split(l, " ")[0] switch prefix { case "some": diff --git a/vendor/github.com/prometheus/procfs/proc_stat.go b/vendor/github.com/prometheus/procfs/proc_stat.go index dbde1fa0d6..4517d2e9dd 100644 --- a/vendor/github.com/prometheus/procfs/proc_stat.go +++ b/vendor/github.com/prometheus/procfs/proc_stat.go @@ -16,10 +16,10 @@ package procfs import ( "bytes" "fmt" - "io/ioutil" "os" "github.com/prometheus/procfs/internal/fs" + "github.com/prometheus/procfs/internal/util" ) // Originally, this USER_HZ value was dynamically retrieved via a sysconf call @@ -113,13 +113,7 @@ func (p Proc) NewStat() (ProcStat, error) { // Stat returns the current status information of the process. func (p Proc) Stat() (ProcStat, error) { - f, err := os.Open(p.path("stat")) - if err != nil { - return ProcStat{}, err - } - defer f.Close() - - data, err := ioutil.ReadAll(f) + data, err := util.ReadFileNoStat(p.path("stat")) if err != nil { return ProcStat{}, err } diff --git a/vendor/github.com/prometheus/procfs/proc_status.go b/vendor/github.com/prometheus/procfs/proc_status.go index ad290fae7d..c58346d910 100644 --- a/vendor/github.com/prometheus/procfs/proc_status.go +++ b/vendor/github.com/prometheus/procfs/proc_status.go @@ -15,10 +15,10 @@ package procfs import ( "bytes" - "io/ioutil" - "os" "strconv" "strings" + + "github.com/prometheus/procfs/internal/util" ) // ProcStatus provides status information about the process, @@ -33,37 +33,37 @@ type ProcStatus struct { TGID int // Peak virtual memory size. 
- VmPeak uint64 + VmPeak uint64 // nolint:golint // Virtual memory size. - VmSize uint64 + VmSize uint64 // nolint:golint // Locked memory size. - VmLck uint64 + VmLck uint64 // nolint:golint // Pinned memory size. - VmPin uint64 + VmPin uint64 // nolint:golint // Peak resident set size. - VmHWM uint64 + VmHWM uint64 // nolint:golint // Resident set size (sum of RssAnnon RssFile and RssShmem). - VmRSS uint64 + VmRSS uint64 // nolint:golint // Size of resident anonymous memory. - RssAnon uint64 + RssAnon uint64 // nolint:golint // Size of resident file mappings. - RssFile uint64 + RssFile uint64 // nolint:golint // Size of resident shared memory. - RssShmem uint64 + RssShmem uint64 // nolint:golint // Size of data segments. - VmData uint64 + VmData uint64 // nolint:golint // Size of stack segments. - VmStk uint64 + VmStk uint64 // nolint:golint // Size of text segments. - VmExe uint64 + VmExe uint64 // nolint:golint // Shared library code size. - VmLib uint64 + VmLib uint64 // nolint:golint // Page table entries size. - VmPTE uint64 + VmPTE uint64 // nolint:golint // Size of second-level page tables. - VmPMD uint64 + VmPMD uint64 // nolint:golint // Swapped-out virtual memory size by anonymous private. - VmSwap uint64 + VmSwap uint64 // nolint:golint // Size of hugetlb memory portions HugetlbPages uint64 @@ -71,17 +71,14 @@ type ProcStatus struct { VoluntaryCtxtSwitches uint64 // Number of involuntary context switches. NonVoluntaryCtxtSwitches uint64 + + // UIDs of the process (Real, effective, saved set, and filesystem UIDs (GIDs)) + UIDs [4]string } // NewStatus returns the current status information of the process. func (p Proc) NewStatus() (ProcStatus, error) { - f, err := os.Open(p.path("status")) - if err != nil { - return ProcStatus{}, err - } - defer f.Close() - - data, err := ioutil.ReadAll(f) + data, err := util.ReadFileNoStat(p.path("status")) if err != nil { return ProcStatus{}, err } @@ -120,6 +117,8 @@ func (s *ProcStatus) fillStatus(k string, vString string, vUint uint64, vUintByt s.TGID = int(vUint) case "Name": s.Name = vString + case "Uid": + copy(s.UIDs[:], strings.Split(vString, "\t")) case "VmPeak": s.VmPeak = vUintBytes case "VmSize": diff --git a/vendor/github.com/prometheus/procfs/stat.go b/vendor/github.com/prometheus/procfs/stat.go index 6661ee03a6..b2a6fc994c 100644 --- a/vendor/github.com/prometheus/procfs/stat.go +++ b/vendor/github.com/prometheus/procfs/stat.go @@ -15,13 +15,14 @@ package procfs import ( "bufio" + "bytes" "fmt" "io" - "os" "strconv" "strings" "github.com/prometheus/procfs/internal/fs" + "github.com/prometheus/procfs/internal/util" ) // CPUStat shows how much time the cpu spend in various stages. @@ -164,16 +165,15 @@ func (fs FS) NewStat() (Stat, error) { // Stat returns information about current cpu/process statistics. 
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt func (fs FS) Stat() (Stat, error) { - - f, err := os.Open(fs.proc.Path("stat")) + fileName := fs.proc.Path("stat") + data, err := util.ReadFileNoStat(fileName) if err != nil { return Stat{}, err } - defer f.Close() stat := Stat{} - scanner := bufio.NewScanner(f) + scanner := bufio.NewScanner(bytes.NewReader(data)) for scanner.Scan() { line := scanner.Text() parts := strings.Fields(scanner.Text()) @@ -237,7 +237,7 @@ func (fs FS) Stat() (Stat, error) { } if err := scanner.Err(); err != nil { - return Stat{}, fmt.Errorf("couldn't parse %s: %s", f.Name(), err) + return Stat{}, fmt.Errorf("couldn't parse %s: %s", fileName, err) } return stat, nil diff --git a/vendor/github.com/prometheus/procfs/swaps.go b/vendor/github.com/prometheus/procfs/swaps.go new file mode 100644 index 0000000000..15edc2212b --- /dev/null +++ b/vendor/github.com/prometheus/procfs/swaps.go @@ -0,0 +1,89 @@ +// Copyright 2019 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "bytes" + "fmt" + "strconv" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// Swap represents an entry in /proc/swaps. +type Swap struct { + Filename string + Type string + Size int + Used int + Priority int +} + +// Swaps returns a slice of all configured swap devices on the system. 
+func (fs FS) Swaps() ([]*Swap, error) { + data, err := util.ReadFileNoStat(fs.proc.Path("swaps")) + if err != nil { + return nil, err + } + return parseSwaps(data) +} + +func parseSwaps(info []byte) ([]*Swap, error) { + swaps := []*Swap{} + scanner := bufio.NewScanner(bytes.NewReader(info)) + scanner.Scan() // ignore header line + for scanner.Scan() { + swapString := scanner.Text() + parsedSwap, err := parseSwapString(swapString) + if err != nil { + return nil, err + } + swaps = append(swaps, parsedSwap) + } + + err := scanner.Err() + return swaps, err +} + +func parseSwapString(swapString string) (*Swap, error) { + var err error + + swapFields := strings.Fields(swapString) + swapLength := len(swapFields) + if swapLength < 5 { + return nil, fmt.Errorf("too few fields in swap string: %s", swapString) + } + + swap := &Swap{ + Filename: swapFields[0], + Type: swapFields[1], + } + + swap.Size, err = strconv.Atoi(swapFields[2]) + if err != nil { + return nil, fmt.Errorf("invalid swap size: %s", swapFields[2]) + } + swap.Used, err = strconv.Atoi(swapFields[3]) + if err != nil { + return nil, fmt.Errorf("invalid swap used: %s", swapFields[3]) + } + swap.Priority, err = strconv.Atoi(swapFields[4]) + if err != nil { + return nil, fmt.Errorf("invalid swap priority: %s", swapFields[4]) + } + + return swap, nil +} diff --git a/vendor/modules.txt b/vendor/modules.txt index f9b620a160..88840faf70 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -245,13 +245,13 @@ github.com/pmezard/go-difflib/difflib github.com/prometheus/client_golang/prometheus github.com/prometheus/client_golang/prometheus/internal github.com/prometheus/client_golang/prometheus/promhttp -# github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 => github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 +# github.com/prometheus/client_model v0.2.0 github.com/prometheus/client_model/go -# github.com/prometheus/common v0.9.1 => github.com/prometheus/common v0.7.0 +# github.com/prometheus/common v0.9.1 github.com/prometheus/common/expfmt github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg github.com/prometheus/common/model -# github.com/prometheus/procfs v0.0.11 => github.com/prometheus/procfs v0.0.5 +# github.com/prometheus/procfs v0.0.11 github.com/prometheus/procfs github.com/prometheus/procfs/internal/fs github.com/prometheus/procfs/internal/util From 5c94e6f142c6d57f9ec6a46d24e57fa4f42e279b Mon Sep 17 00:00:00 2001 From: Ian Milligan Date: Wed, 6 May 2020 09:07:44 -0700 Subject: [PATCH 11/12] Update knative.dev/pkg and update codegen (#1005) --- go.mod | 12 +- go.sum | 391 +++++++++++++++- .../broker/v1beta1/broker/controller.go | 57 ++- .../broker/v1beta1/trigger/controller.go | 57 ++- .../cloudauditlogssource/controller.go | 57 ++- .../v1alpha1/cloudbuildsource/controller.go | 57 ++- .../v1alpha1/cloudpubsubsource/controller.go | 57 ++- .../cloudschedulersource/controller.go | 57 ++- .../v1alpha1/cloudstoragesource/controller.go | 57 ++- .../cloudauditlogssource/controller.go | 57 ++- .../v1beta1/cloudpubsubsource/controller.go | 57 ++- .../cloudschedulersource/controller.go | 57 ++- .../v1beta1/cloudstoragesource/controller.go | 57 ++- .../v1alpha1/brokercell/controller.go | 57 ++- .../v1alpha1/pullsubscription/controller.go | 57 ++- .../intevents/v1alpha1/topic/controller.go | 57 ++- .../messaging/v1alpha1/channel/controller.go | 57 ++- .../messaging/v1beta1/channel/controller.go | 57 ++- .../v1alpha1/eventpolicybinding/controller.go | 57 ++- 
.../v1alpha1/httppolicybinding/controller.go | 57 ++- .../v1alpha1/pullsubscription/controller.go | 57 ++- .../pubsub/v1alpha1/topic/controller.go | 57 ++- .../v1beta1/pullsubscription/controller.go | 57 ++- .../pubsub/v1beta1/topic/controller.go | 57 ++- .../github.com/cespare/xxhash/v2/LICENSE.txt | 22 + .../github.com/hashicorp/golang-lru/lru.go | 22 +- .../github.com/cespare/xxhash/v2/.travis.yml | 8 + .../github.com/cespare/xxhash/v2/LICENSE.txt | 22 + vendor/github.com/cespare/xxhash/v2/README.md | 67 +++ vendor/github.com/cespare/xxhash/v2/go.mod | 3 + vendor/github.com/cespare/xxhash/v2/go.sum | 0 vendor/github.com/cespare/xxhash/v2/xxhash.go | 236 ++++++++++ .../cespare/xxhash/v2/xxhash_amd64.go | 13 + .../cespare/xxhash/v2/xxhash_amd64.s | 215 +++++++++ .../cespare/xxhash/v2/xxhash_other.go | 76 ++++ .../cespare/xxhash/v2/xxhash_safe.go | 15 + .../cespare/xxhash/v2/xxhash_unsafe.go | 46 ++ .../github.com/go-openapi/spec/.golangci.yml | 5 + .../go-openapi/spec/contact_info.go | 30 ++ vendor/github.com/go-openapi/spec/expander.go | 7 +- vendor/github.com/go-openapi/spec/go.mod | 7 +- vendor/github.com/go-openapi/spec/go.sum | 29 +- vendor/github.com/go-openapi/spec/license.go | 30 ++ vendor/github.com/go-openapi/spec/ref.go | 1 + .../go-openapi/spec/schema_loader.go | 7 +- vendor/github.com/go-openapi/swag/convert.go | 16 +- .../go-openapi/swag/convert_types.go | 60 +-- vendor/github.com/go-openapi/swag/go.mod | 4 +- vendor/github.com/go-openapi/swag/go.sum | 4 +- vendor/github.com/go-openapi/swag/json.go | 8 +- vendor/github.com/hashicorp/golang-lru/lru.go | 22 +- .../client_golang/prometheus/counter.go | 48 +- .../client_golang/prometheus/desc.go | 21 +- .../client_golang/prometheus/doc.go | 37 +- .../client_golang/prometheus/gauge.go | 11 +- .../client_golang/prometheus/go_collector.go | 2 +- .../client_golang/prometheus/histogram.go | 114 +++-- .../client_golang/prometheus/metric.go | 3 +- .../client_golang/prometheus/observer.go | 12 + .../prometheus/promhttp/delegator.go | 9 + .../client_golang/prometheus/promhttp/http.go | 63 ++- .../client_golang/prometheus/registry.go | 32 +- .../client_golang/prometheus/summary.go | 2 +- .../client_golang/prometheus/value.go | 50 ++- .../client_golang/prometheus/vec.go | 14 +- vendor/go.uber.org/atomic/.gitignore | 3 +- vendor/go.uber.org/atomic/.travis.yml | 18 +- vendor/go.uber.org/atomic/CHANGELOG.md | 64 +++ vendor/go.uber.org/atomic/Makefile | 60 +-- vendor/go.uber.org/atomic/README.md | 31 +- vendor/go.uber.org/atomic/atomic.go | 9 +- vendor/go.uber.org/atomic/glide.lock | 17 - vendor/go.uber.org/atomic/glide.yaml | 6 - vendor/go.uber.org/atomic/go.mod | 10 + vendor/go.uber.org/atomic/go.sum | 22 + vendor/go.uber.org/multierr/.gitignore | 3 + vendor/go.uber.org/multierr/.travis.yml | 8 +- vendor/go.uber.org/multierr/CHANGELOG.md | 19 + vendor/go.uber.org/multierr/Makefile | 56 +-- vendor/go.uber.org/multierr/error.go | 54 ++- vendor/go.uber.org/multierr/glide.lock | 19 - vendor/go.uber.org/multierr/go.mod | 12 + vendor/go.uber.org/multierr/go.sum | 45 ++ .../pkg/apis/apiextensions/deepcopy.go | 6 + .../apis/apiextensions/types_jsonschema.go | 15 +- .../pkg/apis/apiextensions/v1/conversion.go | 14 - .../pkg/apis/apiextensions/v1/deepcopy.go | 6 + .../pkg/apis/apiextensions/v1/generated.pb.go | 416 ++++++++++-------- .../pkg/apis/apiextensions/v1/generated.proto | 40 +- .../pkg/apis/apiextensions/v1/register.go | 4 +- .../pkg/apis/apiextensions/v1/types.go | 2 +- .../apis/apiextensions/v1/types_jsonschema.go | 44 +- 
.../v1/zz_generated.conversion.go | 37 +- .../apis/apiextensions/v1beta1/conversion.go | 14 - .../apis/apiextensions/v1beta1/deepcopy.go | 6 + .../apiextensions/v1beta1/generated.pb.go | 416 ++++++++++-------- .../apiextensions/v1beta1/generated.proto | 38 ++ .../apis/apiextensions/v1beta1/register.go | 4 +- .../apiextensions/v1beta1/types_jsonschema.go | 44 +- .../v1beta1/zz_generated.conversion.go | 17 +- .../pkg/apis/duck/v1/kresource_type.go | 102 +++++ .../pkg/apis/duck/v1/status_types.go | 46 -- .../generators/comment_parser.go | 75 ++++ .../cmd/injection-gen/generators/packages.go | 34 +- .../generators/reconciler_controller.go | 51 ++- .../knative.dev/pkg/controller/controller.go | 4 +- vendor/knative.dev/pkg/controller/options.go | 4 + .../knative.dev/pkg/hack/generate-knative.sh | 9 +- vendor/knative.dev/pkg/hack/update-codegen.sh | 4 +- vendor/knative.dev/pkg/metrics/config.go | 26 +- vendor/knative.dev/pkg/metrics/exporter.go | 32 +- .../pkg/metrics/opencensus_exporter.go | 49 ++- .../pkg/metrics/stackdriver_exporter.go | 42 +- vendor/knative.dev/pkg/network/transports.go | 5 +- vendor/knative.dev/pkg/test/helpers/name.go | 2 - .../knative.dev/test-infra/scripts/dummy.go | 3 - .../knative.dev/test-infra/scripts/library.sh | 38 +- .../test-infra/scripts/presubmit-tests.sh | 8 +- vendor/modules.txt | 32 +- 119 files changed, 3705 insertions(+), 1490 deletions(-) create mode 100644 third_party/VENDOR-LICENSE/github.com/cespare/xxhash/v2/LICENSE.txt create mode 100644 vendor/github.com/cespare/xxhash/v2/.travis.yml create mode 100644 vendor/github.com/cespare/xxhash/v2/LICENSE.txt create mode 100644 vendor/github.com/cespare/xxhash/v2/README.md create mode 100644 vendor/github.com/cespare/xxhash/v2/go.mod create mode 100644 vendor/github.com/cespare/xxhash/v2/go.sum create mode 100644 vendor/github.com/cespare/xxhash/v2/xxhash.go create mode 100644 vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go create mode 100644 vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s create mode 100644 vendor/github.com/cespare/xxhash/v2/xxhash_other.go create mode 100644 vendor/github.com/cespare/xxhash/v2/xxhash_safe.go create mode 100644 vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go create mode 100644 vendor/go.uber.org/atomic/CHANGELOG.md delete mode 100644 vendor/go.uber.org/atomic/glide.lock delete mode 100644 vendor/go.uber.org/atomic/glide.yaml create mode 100644 vendor/go.uber.org/atomic/go.mod create mode 100644 vendor/go.uber.org/atomic/go.sum delete mode 100644 vendor/go.uber.org/multierr/glide.lock create mode 100644 vendor/go.uber.org/multierr/go.mod create mode 100644 vendor/go.uber.org/multierr/go.sum create mode 100644 vendor/knative.dev/pkg/apis/duck/v1/kresource_type.go create mode 100644 vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/comment_parser.go diff --git a/go.mod b/go.mod index bf791616e2..d0351b2296 100644 --- a/go.mod +++ b/go.mod @@ -16,14 +16,12 @@ require ( github.com/google/uuid v1.1.1 github.com/google/wire v0.4.0 github.com/googleapis/gax-go/v2 v2.0.5 - github.com/googleapis/gnostic v0.4.0 // indirect - github.com/gorilla/mux v1.7.3 // indirect github.com/kelseyhightower/envconfig v1.4.0 github.com/pkg/errors v0.9.1 go.opencensus.io v0.22.3 go.opentelemetry.io/otel v0.3.0 // indirect - go.uber.org/multierr v1.2.0 - go.uber.org/zap v1.10.0 + go.uber.org/multierr v1.5.0 + go.uber.org/zap v1.14.1 golang.org/x/crypto v0.0.0-20200317142112-1b76d66859c6 // indirect google.golang.org/api v0.22.1-0.20200430202532-ac9be1f8f530 
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84 @@ -32,13 +30,11 @@ require ( istio.io/api v0.0.0-20200227213531-891bf31f3c32 istio.io/client-go v0.0.0-20200227214646-23b87b42e49b istio.io/gogo-genproto v0.0.0-20200130224810-a0338448499a // indirect - k8s.io/api v0.17.0 + k8s.io/api v0.17.3 k8s.io/apimachinery v0.18.1 k8s.io/client-go v11.0.1-0.20190805182717-6502b5e7b1b5+incompatible - k8s.io/kube-openapi v0.0.0-20190918143330-0270cf2f1c1d // indirect - k8s.io/utils v0.0.0-20191114184206-e782cd3c129f // indirect knative.dev/eventing v0.14.1-0.20200501170243-0bb51bb8d62b - knative.dev/pkg v0.0.0-20200501164043-2e4e82aa49f1 + knative.dev/pkg v0.0.0-20200506001744-478962f05e2b knative.dev/serving v0.14.1-0.20200424135249-b16b68297056 ) diff --git a/go.sum b/go.sum index 3082526c06..43aa3dfd8c 100644 --- a/go.sum +++ b/go.sum @@ -1,15 +1,20 @@ +bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8= +bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8= cloud.google.com/go v0.25.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.30.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU= +cloud.google.com/go v0.39.0/go.mod h1:rVLT6fkc8chs9sfPtFc1SBH6em7n+ZoXaG+87tDISts= cloud.google.com/go v0.40.0/go.mod h1:Tk58MuI9rbLMKlAjeO/bDnteAx7tX2gJIXw4T5Jwlro= +cloud.google.com/go v0.43.0/go.mod h1:BOSR3VbTLkk6FDC/TcffxP4NF/FFBGA5ku+jvKOP7pg= cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU= cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY= cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY= cloud.google.com/go v0.45.1 h1:lRi0CHyU+ytlvylOlFKKq0af6JncuyoRh1J+QJBqQx0= cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0= +cloud.google.com/go v0.47.0/go.mod h1:5p3Ky/7f3N10VBkhuR5LFtddroTiMyjZV/Kj5qOQFxU= cloud.google.com/go v0.50.0 h1:0E3eE8MX426vUOs7aHfI7aN1BrIzzzf4ccKCSfSjGmc= cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= @@ -28,6 +33,7 @@ cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUM cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0 h1:/May9ojXjRkPBNVrq+oWLqmWCkr4OU5uRY29bu0mRyQ= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= +cloud.google.com/go/logging v1.0.0/go.mod h1:V1cc3ogwobYzQq5f2R7DS/GvRIrI4FKj01Gs5glwAls= cloud.google.com/go/logging v1.0.1-0.20200331222814-69e77e66e597 h1:5Bx/8W5lEh1pLs2t/IXiSharJ+3Ympb9yGlEny2IGAQ= cloud.google.com/go/logging v1.0.1-0.20200331222814-69e77e66e597/go.mod h1:aEE55ap4rXapfUQnFJlbBXh/VHufGPQfZk/ipzDyigE= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= @@ -43,7 +49,9 @@ cloud.google.com/go/storage v1.6.0 h1:UDpwYIwla4jHGzZJaEJYx1tOejbgSoNqsAfHAUYe2r cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk= 
cloud.google.com/go/storage v1.6.1-0.20200331222814-69e77e66e597 h1:+1zrw2RC+ugDaiLBcwOnCAOjz7Q5WvQEH+IIjmE5qpw= cloud.google.com/go/storage v1.6.1-0.20200331222814-69e77e66e597/go.mod h1:xetmKKqouc0l+sl+AE08szRvYBxHed0JRxryxX3aU5w= +contrib.go.opencensus.io/exporter/aws v0.0.0-20181029163544-2befc13012d0/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA= contrib.go.opencensus.io/exporter/ocagent v0.4.12/go.mod h1:450APlNTSR6FrvC3CTRqYosuDstRB9un7SOx2k/9ckA= +contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0= contrib.go.opencensus.io/exporter/ocagent v0.6.0 h1:Z1n6UAyr0QwM284yUuh5Zd8JlvxUGAhFZcgMJkMPrGM= contrib.go.opencensus.io/exporter/ocagent v0.6.0/go.mod h1:zmKjrJcdo0aYcVS7bmEeSEBLPA9YJp5bjrofdU3pIXs= contrib.go.opencensus.io/exporter/prometheus v0.1.0 h1:SByaIoWwNgMdPSgl5sMqM2KDE5H/ukPWBRo314xiDvg= @@ -52,25 +60,49 @@ contrib.go.opencensus.io/exporter/stackdriver v0.12.9-0.20191108183826-59d068f8d contrib.go.opencensus.io/exporter/stackdriver v0.12.9-0.20191108183826-59d068f8d8ff/go.mod h1:XyyafDnFOsqoxHJgTFycKZMrRUrPThLh2iYTJF6uoO0= contrib.go.opencensus.io/exporter/zipkin v0.1.1 h1:PR+1zWqY8ceXs1qDQQIlgXe+sdiwCf0n32bH4+Epk8g= contrib.go.opencensus.io/exporter/zipkin v0.1.1/go.mod h1:GMvdSl3eJ2gapOaLKzTKE3qDgUkJ86k9k3yY2eqwkzc= +contrib.go.opencensus.io/integrations/ocsql v0.1.4/go.mod h1:8DsSdjz3F+APR+0z0WkU1aRorQCFfRxvqjUUPMbF3fE= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= +git.apache.org/thrift.git v0.12.0/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg= +github.com/Azure/azure-amqp-common-go/v2 v2.1.0/go.mod h1:R8rea+gJRuJR6QxTir/XuEd+YuKoUiazDC/N96FiDEU= github.com/Azure/azure-pipeline-go v0.1.8/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg= github.com/Azure/azure-pipeline-go v0.1.9/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg= +github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4= +github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= github.com/Azure/azure-sdk-for-go v19.1.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= github.com/Azure/azure-sdk-for-go v21.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go v28.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go v29.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= github.com/Azure/azure-sdk-for-go v30.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go v35.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go v38.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-service-bus-go v0.9.1/go.mod h1:yzBx6/BUGfjfeqbRZny9AQIbIe3AcV9WZbAdpkoXOa0= github.com/Azure/azure-storage-blob-go v0.0.0-20190123011202-457680cc0804/go.mod h1:oGfmITT1V6x//CswqY2gtAHND+xIP64/qL7a5QJix0Y= +github.com/Azure/azure-storage-blob-go v0.8.0/go.mod h1:lPI3aLPpuLTeUwh1sViKXFxwl2B6teiRqI0deQUvsw0= github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8= +github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= github.com/Azure/go-autorest v10.15.5+incompatible/go.mod 
h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= github.com/Azure/go-autorest v11.1.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= +github.com/Azure/go-autorest v12.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= +github.com/Azure/go-autorest/autorest v0.1.0/go.mod h1:AKyIcETwSUFxIcs/Wnq/C+kwCtlEYGUVd7FPNb2slmg= github.com/Azure/go-autorest/autorest v0.2.0/go.mod h1:AKyIcETwSUFxIcs/Wnq/C+kwCtlEYGUVd7FPNb2slmg= github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest v0.9.3/go.mod h1:GsRuLYvwzLjjjRoWEIyMUaYq8GNUx2nRB378IPt/1p0= +github.com/Azure/go-autorest/autorest v0.9.6/go.mod h1:/FALq9T/kS7b5J5qsQ+RSTUdAmGFqi0vUdVNNx8q630= github.com/Azure/go-autorest/autorest/adal v0.1.0/go.mod h1:MeS4XhScH55IST095THyTxElntu7WqB7pNbZo8Q5G3E= github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/adal v0.8.0/go.mod h1:Z6vX6WXXuyieHAXwMj0S6HY6e6wcHn37qQMBQlvY3lc= +github.com/Azure/go-autorest/autorest/adal v0.8.1/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q= +github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q= github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g= github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM= +github.com/Azure/go-autorest/autorest/to v0.1.0/go.mod h1:GunWKJp1AEqgMaGLV+iocmRAJWqST1wQYhyyjXJ3SJc= github.com/Azure/go-autorest/autorest/to v0.2.0/go.mod h1:GunWKJp1AEqgMaGLV+iocmRAJWqST1wQYhyyjXJ3SJc= +github.com/Azure/go-autorest/autorest/to v0.3.0/go.mod h1:MgwOyqaIuKdG4TL/2ywSsIWKAfJfgHDo8ObuUk3t5sA= github.com/Azure/go-autorest/autorest/validation v0.1.0/go.mod h1:Ha3z/SqBeaalWQvokg3NZAlQTalVMtOIAs1aGK7G6u8= +github.com/Azure/go-autorest/autorest/validation v0.2.0/go.mod h1:3EEqHnBxQGHXRYq3HT1WyXAvT7LLY3tl70hw6tQIbjI= github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= github.com/Azure/go-autorest/tracing v0.1.0/go.mod h1:ROEEAFwXycQw7Sn3DXNtEedEvdeRAgDr0izn4z5Ij88= github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= @@ -80,9 +112,23 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03 github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ= github.com/DataDog/sketches-go v0.0.0-20190923095040-43f19ad77ff7/go.mod h1:Q5DbzQ+3AkgGwymQO7aZFNP7ns2lZKGtvRBzRXfdi60= +github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo= +github.com/DataDog/zstd v1.4.1/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo= +github.com/GoogleCloudPlatform/cloud-builders/gcs-fetcher v0.0.0-20191203181535-308b93ad1f39/go.mod h1:yfGmCjKuUzk9WzubMlW2zwjhCraIc/J+M40cufdemRM= +github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20191009163259-e802c2cb94ae/go.mod 
h1:mjwGPas4yKduTyubHvD1Atl9r1rUq8DfVy+gkVvZ+oo= +github.com/GoogleCloudPlatform/k8s-cloud-provider v0.0.0-20190822182118-27a4ced34534/go.mod h1:iroGtC8B3tQiqtds1l+mgk/BBOrxbqjH+eUfFQYRc14= github.com/GoogleCloudPlatform/testgrid v0.0.1-alpha.3/go.mod h1:f96W2HYy3tiBNV5zbbRc+NczwYHgG1PHXMQfoEWv680= +github.com/GoogleCloudPlatform/testgrid v0.0.7/go.mod h1:lmtHGBL0M/MLbu1tR9BWV7FGZ1FEFIdPqmJiHNCL7y8= +github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd/go.mod h1:64YHyfSL2R96J44Nlwm39UHepQbyR5q10x7iYa1ks2E= +github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= +github.com/Masterminds/semver/v3 v3.0.3/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs= +github.com/Masterminds/sprig/v3 v3.0.2/go.mod h1:oesJ8kPONMONaZgtiHNzUShJbksypC5kWczhZAf6+aU= +github.com/Masterminds/vcs v1.13.1/go.mod h1:N09YCmOQr6RLxC6UNHzuVwAdodYbbnycGHSmwVJjcKA= github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= +github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw= +github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ= github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ= +github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= @@ -90,50 +136,89 @@ github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= +github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ= github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo= +github.com/Shopify/sarama v1.23.1/go.mod h1:XLH1GYJnLVE0XCr6KdJGVJRTwY30moWNJ4sERjXX6fs= github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI= +github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM= +github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7/go.mod h1:6zEj6s6u/ghQa61ZWa/C2Aw3RkjiTBOix7dkqa1VLIs= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= +github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8= +github.com/andybalholm/brotli v0.0.0-20190621154722-5f990b63d2d6/go.mod h1:+lx6/Aqd1kLJ1GQfkvOnaZ1WGmLpMpbprPuIOOZX30U= github.com/andygrunwald/go-gerrit v0.0.0-20190120104749-174420ebee6c/go.mod 
h1:0iuRQp6WJ44ts+iihy5E/WlPqfg5RNeQxOmzRkxCdtk= +github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c= github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6/go.mod h1:V8iCPQYkqmusNa815XgQio277wI47sdRh1dUOLdyC6Q= +github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ= github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= github.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878/go.mod h1:3AMJUQhVx52RsWOnlkpikZr01T/yAVN2gn0861vByNg= +github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= +github.com/asaskevich/govalidator v0.0.0-20200108200545-475eaeb16496/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg= github.com/aws/aws-k8s-tester v0.0.0-20190114231546-b411acf57dfe/go.mod h1:1ADF5tAtU1/mVtfMcHAYSm2fPw71DA7fFk0yed64/0I= +github.com/aws/aws-k8s-tester v0.9.3/go.mod h1:nsh1f7joi8ZI1lvR+Ron6kJM2QdCYPU/vFePghSSuTc= github.com/aws/aws-sdk-go v1.25.1 h1:d7zDXFT2Tgq/yw7Wku49+lKisE8Xc85erb+8PlE/Shk= github.com/aws/aws-sdk-go v1.25.1/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/bazelbuild/buildtools v0.0.0-20190917191645-69366ca98f89/go.mod h1:5JP0TXzWDHXv8qvxRC4InIazwdyDseBDbzESUMKk1yU= github.com/benbjohnson/clock v1.0.0/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM= +github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= +github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA= github.com/blang/semver v1.1.1-0.20190414102917-ba2c2ddd8906 h1:/HvlpHir75MQ1/grQRJMu8xFDhvHY7VSnY6wO/crzS8= github.com/blang/semver v1.1.1-0.20190414102917-ba2c2ddd8906/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk= +github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4= github.com/bmizerany/perks v0.0.0-20141205001514-d9a9656a3a4b/go.mod h1:ac9efd0D1fsDb3EJvhqgXRbFx7bs2wqZ10HQPeU8U/Q= github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps= +github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk= +github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8= +github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50= +github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE= github.com/bwmarrin/snowflake v0.0.0/go.mod h1:NdZxfVWX+oR6y2K0o6qAYv6gIOP9rjG0/E9WsDpxqwE= github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod 
h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/census-instrumentation/opencensus-proto v0.2.1 h1:glEXhBS5PSLLv4IXzLA5yPRVX4bilULVyxxbrfOtDAk= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= +github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= +github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= +github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY= +github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= +github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575/go.mod h1:9d6lWj8KzO/fd/NrVaLscBKmPigpZpn5YawRPw+e3Yo= github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag= github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I= github.com/clarketm/json v1.13.4/go.mod h1:ynr2LRfb0fQU34l07csRNBTcivjySLLiY1YzQqKVfdo= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/cloudevents/sdk-go v0.0.0-20190509003705-56931988abe3/go.mod h1:j1nZWMLGg3om8SswStBoY6/SHvcLM19MuZqwDtMtmzs= github.com/cloudevents/sdk-go v1.1.2/go.mod h1:ss+jWJ88wypiewnPEzChSBzTYXGpdcILoN9YHk8uhTQ= github.com/cloudevents/sdk-go v1.2.0 h1:2AxI14EJUw1PclJ5gZJtzbxnHIfNMdi76Qq3P3G1BRU= github.com/cloudevents/sdk-go v1.2.0/go.mod h1:ss+jWJ88wypiewnPEzChSBzTYXGpdcILoN9YHk8uhTQ= github.com/cloudevents/sdk-go/v2 v2.0.0-RC1 h1:/cR5VkgD2/xmrmHryBgsvVn3O/p9cmysI6b5sapBQ4c= github.com/cloudevents/sdk-go/v2 v2.0.0-RC1/go.mod h1:akZr/joO3DfDft2KZnI91LEs15NSKIBNPYcAMBQ1xbk= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= +github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8= +github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko= +github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw= +github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA= +github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA= +github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA= +github.com/containerd/containerd v1.3.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA= +github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y= +github.com/containerd/continuity v0.0.0-20200107194136-26c1120b8d41/go.mod h1:Dq467ZllaHgAtVp4p1xUQWBrFXR9s/wyoTpG8zOJGkY= +github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI= +github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0= 
+github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o= +github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc= github.com/coreos/bbolt v1.3.1-coreos.6/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= +github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/bbolt v1.3.3/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= @@ -145,35 +230,63 @@ github.com/coreos/go-semver v0.0.0-20180108230905-e214231b295a/go.mod h1:nnelYz7 github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= +github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= +github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= +github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY= +github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4= github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE= +github.com/deislabs/oras v0.8.1/go.mod h1:Mx0rMSbBNaNfY9hjpccEnxkOqJL6KGjtxNHPLC4G4As= github.com/denisenkom/go-mssqldb v0.0.0-20190111225525-2fea367d496d/go.mod h1:xN/JuLBIz4bjkxNmByTiV1IbhfnYb6oo99phBn4Eqhc= +github.com/denisenkom/go-mssqldb v0.0.0-20191124224453-732737034ffd/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU= +github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0= +github.com/devigned/tab v0.1.1/go.mod h1:XG9mPq0dFghrYvoBF3xdRrJzSTX1b7IQrvaL9mzjeJY= +github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgryski/go-gk v0.0.0-20200319235926-a69029f61654/go.mod h1:qm+vckxRlDt0aOla0RYJJVeqHZlWfOm2UIxHaqPB46E= +github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= +github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8= 
github.com/djherbis/atime v1.0.0/go.mod h1:5W+KBIuTwVGcqjIfaTwt+KSYX1o6uep8dtevevQP/f8= +github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E= github.com/docker/cli v0.0.0-20190925022749-754388324470/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= +github.com/docker/cli v0.0.0-20191017083524-a8ff7f821017/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= +github.com/docker/cli v0.0.0-20200130152716-5d0cf8839492/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= +github.com/docker/distribution v0.0.0-20191216044856-a8371794149d/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY= github.com/docker/distribution v2.6.0-rc.1.0.20180327202408-83389a148052+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= +github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/docker v1.4.2-0.20180531152204-71cd53e4a197/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/docker v1.4.2-0.20190924003213-a8608b5b67c7/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/docker v1.4.2-0.20200203170920-46ec8731fbce/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/docker v1.13.1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y= github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec= +github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI= github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE= github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM= github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= +github.com/dsnet/compress v0.0.1/go.mod h1:Aw8dCMJ7RioblQeTqt88akK31OvO8Dhf5JflhBbQEHo= +github.com/dsnet/golib v0.0.0-20171103203638-1ea166775780/go.mod h1:Lj+Z9rebOhdfkVLjJ8T6VcRQv3SXugXy999NBtR9aFY= github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs= +github.com/eapache/go-resiliency v1.2.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs= github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU= github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I= github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc= github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful v2.9.5+incompatible h1:spTtZBk5DYEvbxMVutUuTyh1Ao2r4iyvLdACqsl/Ljk= github.com/emicklei/go-restful v2.9.5+incompatible/go.mod 
h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= +github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o= github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= @@ -184,21 +297,30 @@ github.com/evanphx/json-patch v0.0.0-20190203023257-5858425f7550/go.mod h1:50XU6 github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.5.0+incompatible h1:ouOWdg56aJriqS0huScTkVXPC5IcNrDCXZ6OoTAWu7M= github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4= +github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= +github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= +github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= +github.com/fortytw2/leaktest v1.2.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g= github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g= +github.com/frankban/quicktest v1.8.1/go.mod h1:ui7WezCLWMWxVWr1GETZY3smRy0G4KWq9vcPtJmFl7Y= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsouza/fake-gcs-server v0.0.0-20180612165233-e85be23bdaa8/go.mod h1:1/HufuJ+eaDf4KTnYdS6HJMGvMRU8d4cYTuu/1QaBbI= +github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY= github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v0.0.0-20180820084758-c7ce16629ff4/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= +github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0= github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q= github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q= github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= +github.com/go-ini/ini v1.46.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod 
h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= @@ -212,6 +334,7 @@ github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpR github.com/go-openapi/analysis v0.17.2/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik= github.com/go-openapi/analysis v0.18.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik= github.com/go-openapi/analysis v0.19.2/go.mod h1:3P1osvZa9jKjb8ed2TPng3f0i/UY9snX6gxi44djMjk= +github.com/go-openapi/analysis v0.19.5/go.mod h1:hkEAkxagaIvIP7VTn8ygJNkd4kAYON2rCu0v0ObL0AU= github.com/go-openapi/errors v0.17.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0= github.com/go-openapi/errors v0.17.2/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0= github.com/go-openapi/errors v0.18.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0= @@ -235,19 +358,25 @@ github.com/go-openapi/loads v0.17.2/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf github.com/go-openapi/loads v0.18.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU= github.com/go-openapi/loads v0.19.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU= github.com/go-openapi/loads v0.19.2/go.mod h1:QAskZPMX5V0C2gvfkGZzJlINuP7Hx/4+ix5jWFxsNPs= +github.com/go-openapi/loads v0.19.4/go.mod h1:zZVHonKd8DXyxyw4yfnVjPzBjIQcLt0CCsn0N0ZrQsk= github.com/go-openapi/runtime v0.0.0-20180920151709-4f900dc2ade9/go.mod h1:6v9a6LTXWQCdL8k1AO3cvqx5OtZY/Y9wKTgaoP6YRfA= github.com/go-openapi/runtime v0.17.2/go.mod h1:QO936ZXeisByFmZEO1IS1Dqhtf4QV1sYYFtIq6Ld86Q= github.com/go-openapi/runtime v0.19.0/go.mod h1:OwNfisksmmaZse4+gpV3Ne9AyMOlP1lt4sK4FXt0O64= +github.com/go-openapi/runtime v0.19.4/go.mod h1:X277bwSUBxVlCYR3r7xgZZGKVvBd/29gLDlFGtJ8NL4= github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc= github.com/go-openapi/spec v0.17.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI= github.com/go-openapi/spec v0.17.2/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI= github.com/go-openapi/spec v0.18.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI= github.com/go-openapi/spec v0.19.2/go.mod h1:sCxk3jxKgioEJikev4fgkNmwS+3kuYdJtcsZsD5zxMY= +github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= github.com/go-openapi/spec v0.19.4 h1:ixzUSnHTd6hCemgtAJgluaTSGYpLNpJY4mA2DIkdOAo= github.com/go-openapi/spec v0.19.4/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= +github.com/go-openapi/spec v0.19.6 h1:rMMMj8cV38KVXK7SFc+I2MWClbEfbK705+j+dyqun5g= +github.com/go-openapi/spec v0.19.6/go.mod h1:Hm2Jr4jv8G1ciIAo+frC/Ft+rR2kQDh8JHKHb3gWUSk= github.com/go-openapi/strfmt v0.17.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU= github.com/go-openapi/strfmt v0.18.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU= github.com/go-openapi/strfmt v0.19.0/go.mod h1:+uW+93UVvGGq2qGaZxdDeJqSAqBqBdl+ZPMF/cC8nDY= +github.com/go-openapi/strfmt v0.19.3/go.mod h1:0yX7dbo8mKIvc3XSKp7MNfxw4JytCfCD6+bY1AVL9LU= github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I= github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg= github.com/go-openapi/swag v0.17.2/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg= @@ -255,10 +384,14 @@ github.com/go-openapi/swag v0.18.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/ github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= github.com/go-openapi/swag 
v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= +github.com/go-openapi/swag v0.19.7 h1:VRuXN2EnMSsZdauzdss6JBC29YotDqG59BZ+tdlIL1s= +github.com/go-openapi/swag v0.19.7/go.mod h1:ao+8BpOPyKdpQz3AOJfbeEVpLmWAvlT1IfTe5McPyhY= github.com/go-openapi/validate v0.17.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4= github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4= github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA= +github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4= github.com/go-sql-driver/mysql v0.0.0-20160411075031-7ebe0a500653/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= +github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-test/deep v1.0.4/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= @@ -267,12 +400,18 @@ github.com/gobuffalo/envy v1.6.5/go.mod h1:N+GkhhZ/93bGZc6ZKhJLP6+m+tCNPKwgSpH9k github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI= github.com/gobuffalo/envy v1.7.1 h1:OQl5ys5MBea7OGCdvPbBJWRgnhC/fGona6QKfvFeau8= github.com/gobuffalo/envy v1.7.1/go.mod h1:FurDp9+EDPE4aIUS3ZLyD+7/9fpx7YRt/ukY6jIHf0w= +github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4= +github.com/gofrs/flock v0.7.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s= github.com/gogo/protobuf v1.3.0 h1:G8O7TerXerS4F6sx9OV7/nRfJdnXgHZu/S/7F2SN+UE= github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= +github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0= +github.com/golang/gddo v0.0.0-20190419222130-af0f2af80721/go.mod h1:xEhNfoBDX1hzLm2Nf80qUvZ2sVwoMZ8d6IE2SrsQfh4= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20180513044358-24b0969c4cb7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6 h1:ZgQEtGgCBiWRM39fZuwSd1LwSqqSW0hOdXCYYDX0R3I= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= @@ -299,6 +438,10 @@ github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:W github.com/golang/protobuf v1.4.0 h1:oOuy+ugB+P/kBdUnG5QaMXSIyJ1q38wWSojYCb3z5VQ= github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= 
+github.com/golangplus/bytes v0.0.0-20160111154220-45c989fe5450/go.mod h1:Bk6SMAONeMXrxql8uvOKuAZSu8aM5RUGv+1C6IJaEho= +github.com/golangplus/fmt v0.0.0-20150411045040-2a5d6d7d2995/go.mod h1:lJgMEyOkYFkPcDKwRXegd+iM6E7matEszMG5HhwytU8= +github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk= github.com/gomodule/redigo v1.7.0/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4= github.com/google/btree v0.0.0-20180124185431-e89373fe6b4a/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= @@ -310,19 +453,30 @@ github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-containerregistry v0.0.0-20191010200024-a3d713f9b7f8 h1:i2MA7D3vtR5uk9ZPzVp/IC9616kCPv0RScyRD/tVQGM= github.com/google/go-containerregistry v0.0.0-20191010200024-a3d713f9b7f8/go.mod h1:KyKXa9ciM8+lgMXwOVsXi7UxGrsf9mM61Mzs+xKUrKE= +github.com/google/go-containerregistry v0.0.0-20200115214256-379933c9c22b/go.mod h1:Wtl/v6YdQxv397EREtzwgd9+Ud7Q5D8XMbi3Zazgkrs= +github.com/google/go-containerregistry v0.0.0-20200123184029-53ce695e4179 h1:wFBYu1QOSE+sgYeX2jtZUldOgLUebWYm/thF0Et7U8o= +github.com/google/go-containerregistry v0.0.0-20200123184029-53ce695e4179/go.mod h1:Wtl/v6YdQxv397EREtzwgd9+Ud7Q5D8XMbi3Zazgkrs= github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ= github.com/google/go-github/v27 v27.0.6/go.mod h1:/0Gr8pJ55COkmv+S/yPKCczSkUPIM/LnFyubufRNIS0= +github.com/google/go-licenses v0.0.0-20191112164736-212ea350c932/go.mod h1:16wa6pRqNDUIhOtwF0GcROVqMeXHZJ7H6eGDFUh5Pfk= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= +github.com/google/go-replayers/grpcreplay v0.1.0/go.mod h1:8Ig2Idjpr6gifRd6pNVggX6TC1Zw6Jx74AKp7QNH2QE= +github.com/google/go-replayers/httpreplay v0.1.0/go.mod h1:YKZViNhiGgqdBlUbI2MwGpq4pXxNmhJLPHQ7cv2b5no= github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI= github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g= github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/licenseclassifier v0.0.0-20190926221455-842c0d70d702/go.mod h1:qsqn2hxC+vURpyBRygGUuinTO42MFRLcsmQ/P8v94+M= +github.com/google/licenseclassifier v0.0.0-20200402202327-879cb1424de0/go.mod h1:qsqn2hxC+vURpyBRygGUuinTO42MFRLcsmQ/P8v94+M= github.com/google/mako v0.0.0-20190821191249-122f8dcef9e3/go.mod h1:YzLcVlL+NqWnmUEPuhS1LxDDwGO9WNbVlEXaF4IH35g= github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= +github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible h1:xmapqc1AyLoB+ddYT6r04bD9lIjlOqGaREovi0SzFaE= +github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= 
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= +github.com/google/pprof v0.0.0-20190723021845-34ac40c74b70/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= @@ -330,12 +484,16 @@ github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm4 github.com/google/subcommands v1.0.1 h1:/eqq+otEXm5vhfBrbREPCSVQbvofip6kIz+mX5TUH7k= github.com/google/subcommands v1.0.1/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/uuid v1.1.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/wire v0.3.0/go.mod h1:i1DMg/Lu8Sz5yYl25iOdmc5CT5qusaa+zmRWs16741s= github.com/google/wire v0.4.0 h1:kXcsA/rIGzJImVqPdhfnr6q0xsS9gU0515q1EPpJ9fE= github.com/google/wire v0.4.0/go.mod h1:ngWDr9Qvq3yZA10YrxfyGELY/AFWGVpy9c1LTRi1EoU= github.com/googleapis/gax-go v2.0.0+incompatible h1:j0GKcs05QVmm7yesiZq2+9cxHkNK9YM6zKx4D2qucQU= github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= +github.com/googleapis/gax-go v2.0.2+incompatible h1:silFMLAnr330+NRuag/VjIGF7TLp/LBrV2CJKFLWEww= +github.com/googleapis/gax-go v2.0.2+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= @@ -345,48 +503,63 @@ github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1a github.com/googleapis/gnostic v0.4.0 h1:BXDUo8p/DaxC+4FJY/SSx3gvnx9C1VdHNgaUkiEL5mk= github.com/googleapis/gnostic v0.4.0/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU= github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8= +github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg= github.com/gorilla/csrf v1.6.2/go.mod h1:7tSf8kmjNYr7IWDCYhd3U8Ck34iQ/Yw5CJu7bAkHEGI= +github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ= github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= +github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= github.com/gorilla/mux v1.7.3 h1:gnP5JzjVOuiZD07fKKToCAOjS0yOpj/qPETTXCCS6hw= github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4= github.com/gorilla/sessions v1.1.3/go.mod h1:8KCfur6+4Mqcc6S0FEfKuN15Vl5MgXW92AE8ovaJD0w= +github.com/gorilla/sessions v1.2.0/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM= github.com/gorilla/websocket 
v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= +github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo= github.com/gotestyourself/gotestyourself v2.2.0+incompatible/go.mod h1:zZKM6oeNM8k+FRljX1mnzVYeS8wiGgQyvST1/GafPbY= github.com/gregjones/httpcache v0.0.0-20170728041850-787624de3eb7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA= github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA= github.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA= github.com/grpc-ecosystem/go-grpc-middleware v0.0.0-20190222133341-cfaf5686ec79/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= +github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-prometheus v0.0.0-20170330212424-2500245aa611/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/grpc-gateway v1.3.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw= github.com/grpc-ecosystem/grpc-gateway v1.4.1/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw= github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= +github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= +github.com/grpc-ecosystem/grpc-gateway v1.9.2/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.4/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= +github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.12.1 h1:zCy2xE9ablevUOrUZc3Dl72Dt+ya2FNAvC2yLYMHzi4= github.com/grpc-ecosystem/grpc-gateway v1.12.1/go.mod h1:8XEsbTttt/W+VvjtQhLACqCisSPWTxCZ7sBRjU6iH9c= +github.com/h2non/gock v1.0.9/go.mod h1:CZMcB0Lg5IWnr9bF79pPMg9WeV6WumxQiUJ1UvdO1iE= github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-hclog v0.9.1/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60= github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= +github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I= github.com/hashicorp/go-multierror v0.0.0-20171204182908-b7773ae21874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I= github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-uuid v1.0.1/go.mod 
h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.3 h1:YPkqC67at8FYaadspW/6uE0COsBxS2656RLEr8Bppgk= github.com/hashicorp/golang-lru v0.5.3/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= +github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc= +github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= github.com/hashicorp/raft v1.1.1/go.mod h1:vPAJM8Asw6u8LxC3eJCUZmRP/E4QmUGE1R7g7k8sG/8= github.com/hashicorp/raft-boltdb v0.0.0-20171010151810-6e5ba93211ea/go.mod h1:pNv7Wc3ycL6F5oOWn+tPGo2gWD4a5X+yp/ntwdKLjRk= github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= +github.com/huandu/xstrings v1.2.0/go.mod h1:DvyZB1rfVYsBIigL8HwpZgxHwXozlTgGqn63UyNX5k4= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/imdario/mergo v0.3.7 h1:Y+UAYTZ7gDEuOfhxKWy+dvb5dRQ6rJjFSdX2HZY1/gI= github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= @@ -394,11 +567,20 @@ github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANyt github.com/influxdata/influxdb v0.0.0-20161215172503-049f9b42e9a5/go.mod h1:qZna6X/4elxqT3yI9iZYdZrWWdeFOOprn86kgg4+IzY= github.com/influxdata/tdigest v0.0.0-20181121200506-bf2b5ad3c0a9/go.mod h1:Js0mqiSBE6Ffsg94weZZ2c+v/ciT8QRHFOap7EKDrR0= github.com/influxdata/tdigest v0.0.0-20191024211133-5d87a7585faa/go.mod h1:Z0kXnxzbTC2qrx4NaIzYkE1k66+6oEDQTvL95hQFh5Y= +github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo= +github.com/jcmturner/gofork v0.0.0-20190328161633-dc7c13fece03/go.mod h1:MK8+TM0La+2rjBD4jE12Kj1pCCxK7d2LK/UM3ncEo0o= +github.com/jcmturner/gofork v1.0.0/go.mod h1:MK8+TM0La+2rjBD4jE12Kj1pCCxK7d2LK/UM3ncEo0o= +github.com/jenkins-x/go-scm v1.5.65/go.mod h1:MgGRkJScE/rJ30J/bXYqduN5sDPZqZFITJopsnZmTOw= +github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/jinzhu/gorm v0.0.0-20170316141641-572d0a0ab1eb/go.mod h1:Vla75njaFJ8clLU1W44h34PjIkijhjHIYnZxMqCdxqo= +github.com/jinzhu/gorm v1.9.12/go.mod h1:vhTjlKSJUTWNtcbQtrMBFCxy7eXTzeCAzfL5fBZT/Qs= github.com/jinzhu/inflection v0.0.0-20190603042836-f5c5f50e6090/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc= +github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc= github.com/jinzhu/now v1.0.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8= +github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8= github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM= github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= +github.com/joefitzgerald/rainbow-reporter v0.1.0/go.mod h1:481CNgqmVHQZzdIbN52CupLJyoVwB10FQ/IQlF1pdL8= github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= github.com/jonboulle/clockwork 
v0.0.0-20141017032234-72f9bd7c4e0c/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= @@ -408,13 +590,19 @@ github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/u github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o= github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= +github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8= +github.com/kelseyhightower/envconfig v1.3.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg= github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8= github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg= +github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= +github.com/klauspost/compress v1.9.2/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= +github.com/klauspost/compress v1.10.2/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= +github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= github.com/klauspost/cpuid v1.2.2/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= github.com/klauspost/pgzip v1.2.1/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs= github.com/knative/build v0.1.2/go.mod h1:/sU74ZQkwlYA5FwYDJhYTy61i/Kn+5eWfln2jDbw3Qo= @@ -428,12 +616,16 @@ github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfn github.com/kr/pty v1.0.0/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= +github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= +github.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= +github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE= github.com/lightstep/tracecontext.go v0.0.0-20181129014701-1757c391b1ac h1:+2b6iGRJe3hvV/yVXrd41yVEjxuFHxasJqDhkIjS4gk= github.com/lightstep/tracecontext.go v0.0.0-20181129014701-1757c391b1ac/go.mod h1:Frd2bnT3w5FB5q49ENTfVlztJES+1k/7lyWX2+9gq/M= +github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z9BP0jIOc= github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ= github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= 
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= @@ -442,42 +634,69 @@ github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= +github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs= github.com/mailru/easyjson v0.7.1-0.20191009090205-6c0755d89d1e h1:jcoUdG1TzY/M/eM5BLFLP8DJeMximx5NQYSlLL9YeWc= github.com/mailru/easyjson v0.7.1-0.20191009090205-6c0755d89d1e/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs= github.com/markbates/inflect v1.0.4 h1:5fh1gzTFhfae06u3hzHYO9xe3l3v3nW5Pwt3naLTP5g= github.com/markbates/inflect v1.0.4/go.mod h1:1fR9+pO2KHEO9ZRtto13gDwwZaAKstQzferVeWqbgNs= +github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho= github.com/mattbaird/jsonpatch v0.0.0-20171005235357-81af80346b1a h1:+J2gw7Bw77w/fbK7wnNJJDKmw1IbWft2Ul5BzrG1Qm8= github.com/mattbaird/jsonpatch v0.0.0-20171005235357-81af80346b1a/go.mod h1:M1qoD/MqPgTZIk0EWKB38wE28ACRfVcn+cU08jyArI0= github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= +github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc= github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= +github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= +github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE= github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= +github.com/mattn/go-runewidth v0.0.8/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-shellwords v1.0.9/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y= github.com/mattn/go-sqlite3 v0.0.0-20160514122348-38ee283dabf1/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc= +github.com/mattn/go-sqlite3 v2.0.1+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc= github.com/mattn/go-zglob v0.0.1/go.mod h1:9fxibJccNxU2cnpIKLRRFA7zX7qhkJIQWBb449FYHOo= github.com/matttproud/golang_protobuf_extensions v1.0.0/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= +github.com/maxbrunsfeld/counterfeiter/v6 v6.2.2/go.mod h1:eD9eIE7cdwcMi9rYluz88Jz2VyhSmden33/aXg4oVIY= +github.com/mholt/archiver/v3 v3.3.0/go.mod h1:YnQtqsp+94Rwd0D/rk5cnLrxusUBUXg+08Ebtr1Mqao= +github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= github.com/mitchellh/ioprogress v0.0.0-20180201004757-6a23b12fa88e/go.mod h1:waEya8ee1Ro/lgxpVhkJI4BVASzkm3UZqkx/cFJiYHM= 
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A= +github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 h1:Esafd1046DLDQ0W1YjYsBW+p8U2u7vzgW2SQVmlNazg= github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= +github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc= github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= +github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= github.com/natefinch/lumberjack v2.0.0+incompatible/go.mod h1:Wi9p2TTF5DG5oU+6YfsmYQpsTIOm0B1VNzQg9Mw6nPk= +github.com/nats-io/gnatsd v1.4.1/go.mod h1:nqco77VO78hLCJpIcVfygDP2rPGfsEHkGTUk94uh5DQ= +github.com/nats-io/go-nats v1.7.0/go.mod h1:+t7RHT5ApZebkrQdnn6AhQJmhJJiKAvJUio1PiiCtj0= github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg= github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU= github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k= github.com/nats-io/nats-server/v2 v2.1.4/go.mod h1:Jw1Z28soD/QasIA2uWjXyM9El1jly3YwyFOuR8tH1rg= github.com/nats-io/nats-streaming-server v0.17.0/go.mod h1:ewPBEsmp62Znl3dcRsYtlcfwudxHEdYMtYqUQSt4fE0= github.com/nats-io/nats.go v1.9.1/go.mod h1:ZjDU1L/7fJ09jvUSRVBR2e7+RnLiiIQyqyzEE/Zbp4w= +github.com/nats-io/nkeys v0.0.2/go.mod h1:dab7URMsZm6Z/jp9Z5UGa87Uutgc2mVpXLC4B7TDb/4= github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w= github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w= +github.com/nats-io/nuid v1.0.0/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= github.com/nats-io/stan.go v0.6.0/go.mod h1:eIcD5bi3pqbHT/xIIvXMwvzXYElgouBvaVRftaE+eac= +github.com/nbio/st v0.0.0-20140626010706-e9e8d9816f32/go.mod h1:9wM+0iRr9ahx58uYLpLIr5fm8diHn0JbqRycJi6w0Ms= +github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM= +github.com/nwaples/rardecode v1.0.0/go.mod h1:5DzqNKiOdpKKBH87u8VlvAnPZMXcGRhxWkRpHbbfGS0= +github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo= github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod 
h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= @@ -485,25 +704,45 @@ github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo= github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= +github.com/onsi/ginkgo v1.11.0 h1:JAKSXpt1YjtLA7YpPiqO9ss6sNXEsPfSGdwN0UHqzrw= +github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA= github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= +github.com/onsi/gomega v1.8.1 h1:C5Dqfs/LeauYDX0jJXIe2SWmwCbGzx9yF8C8xy3Lh34= +github.com/onsi/gomega v1.8.1/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA= +github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s= +github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s= github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s= +github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= +github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U= +github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U= +github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= +github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs= github.com/opentracing/opentracing-go v1.1.1-0.20190913142402-a7454ce5950e/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o= github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw= +github.com/openzipkin/zipkin-go v0.2.0/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4= github.com/openzipkin/zipkin-go v0.2.2 h1:nY8Hti+WKaP0cRsSeQ026wU03QsM762XBeCXBb9NAWI= github.com/openzipkin/zipkin-go v0.2.2/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4= +github.com/otiai10/copy v1.0.2/go.mod h1:c7RpqBkwMom4bYTSkLSym4VSJz/XtncWRAj/J4PEIMY= +github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= +github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo= github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= +github.com/pelletier/go-buffruneio v0.2.0/go.mod h1:JkE26KsDizTr40EUHkXVtNPvgGtbSNq5BcowyYOWdKo= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= 
github.com/pelletier/go-toml v1.3.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo= +github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys= github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU= +github.com/phayes/freeport v0.0.0-20180830031419-95f893ade6f2/go.mod h1:iIss55rKnNBTvrwdmkUpLnDpZoAHvWaiq5+iMmen4AE= +github.com/pierrec/lz4 v0.0.0-20190327172049-315a67e90e41/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc= github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc= github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= +github.com/pierrec/lz4 v2.2.6+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA= @@ -511,36 +750,51 @@ github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA= +github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM= +github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.1.0 h1:BQ53HtBmfOitExawJ6LokA4x8ov/z0SYYb0+HxJfRI8= github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g= +github.com/prometheus/client_golang v1.5.0 h1:Ctq0iGpCmr3jeP77kbF2UxgvRwzWWz+4Bh9/vJTyg1A= +github.com/prometheus/client_golang v1.5.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU= github.com/prometheus/client_model v0.0.0-20170216185247-6f3806018612/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= +github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M= github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= +github.com/prometheus/common 
v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20180518154759-7600349dcfe1/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181020173914-7e9e6cabbd39/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= +github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= +github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc= +github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA= github.com/prometheus/common v0.9.1 h1:KOMtN28tlbam3/7ZKEYKHhKoJZYYj3gMH4uc62x7X7U= github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4= +github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20180612222113-7d6f385de8be/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= +github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= +github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A= +github.com/prometheus/procfs v0.0.10/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A= github.com/prometheus/procfs v0.0.11 h1:DhHlBtkHWPYi8O2y31JkK0TF+DGM+51OopZjH/Ia5qI= github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= +github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4= +github.com/rcrowley/go-metrics v0.0.0-20190706150252-9beb055b7962/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4= github.com/remyoudompheng/bigfft v0.0.0-20170806203942-52369c62f446/go.mod h1:uYEyJGbgTkfkS4+E/PavXkNJcbFIpEtjt2B0KDQ5+9M= github.com/robfig/cron/v3 v3.0.0 h1:kQ6Cb7aHOHTSzNVNEhmp8EcWKLb4CbiMW9h9VyIhO4E= github.com/robfig/cron/v3 v3.0.0/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro= @@ -551,32 +805,56 @@ github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR github.com/rogpeppe/go-internal v1.3.2/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= github.com/rogpeppe/go-internal v1.5.0 h1:Usqs0/lDK/NqTkvrmKSwA/3XkZAs7ZAW/eLeQ2MVBTw= github.com/rogpeppe/go-internal v1.5.0/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= +github.com/rubiojr/go-vhd v0.0.0-20160810183302-0bfd3b39853c/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= 
+github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/satori/go.uuid v0.0.0-20160713180306-0aa62d5ddceb/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0= +github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0= +github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24q7p+U= +github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= +github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/shurcooL/githubv4 v0.0.0-20180925043049-51d7b505e2e9/go.mod h1:hAF0iLZy4td2EX+/8Tw+4nodhlMrwN3HupfaXj3zkGo= +github.com/shurcooL/githubv4 v0.0.0-20190718010115-4ba037080260/go.mod h1:hAF0iLZy4td2EX+/8Tw+4nodhlMrwN3HupfaXj3zkGo= +github.com/shurcooL/githubv4 v0.0.0-20191102174205-af46314aec7b/go.mod h1:hAF0iLZy4td2EX+/8Tw+4nodhlMrwN3HupfaXj3zkGo= github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= github.com/shurcooL/graphql v0.0.0-20180924043259-e4a3a37e6d42/go.mod h1:AuYgA5Kyo4c7HfUmvRGs/6rGlMMV/6B1bVnB9JxJEEg= +github.com/shurcooL/graphql v0.0.0-20181231061246-d48a9a75455f/go.mod h1:AuYgA5Kyo4c7HfUmvRGs/6rGlMMV/6B1bVnB9JxJEEg= +github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= +github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc= github.com/sirupsen/logrus v1.0.5/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc= github.com/sirupsen/logrus v1.1.1/go.mod h1:zrgwTnHtNr00buQ1vSptGe8m1f/BbgsPukg8qsT7A+A= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= +github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= +github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= +github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= github.com/soheilhy/cmux v0.1.3/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= +github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk= github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cobra v0.0.0-20180319062004-c439c4fa0937/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ= +github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ= github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ= github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/cobra v0.0.6/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE= 
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo= github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE= +github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k= +github.com/src-d/gcfg v1.4.0/go.mod h1:p/UMsR43ujA89BJY9duynAwIpvqEujIH/jFlfL7jWoI= github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw= github.com/streadway/quantile v0.0.0-20150917103942-b0c588724d25/go.mod h1:lbP8tGiBjZ5YWIc2fzuRpTaz0b/53vT6PEs3QuAWzuU= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= @@ -588,21 +866,53 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4= github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= +github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww= github.com/tektoncd/pipeline v0.8.0/go.mod h1:IZzJdiX9EqEMuUcgdnElozdYYRh0/ZRC+NKMLj1K3Yw= +github.com/tektoncd/pipeline v0.10.1/go.mod h1:D2X0exT46zYx95BU7ByM8+erpjoN7thmUBvlKThOszU= +github.com/tektoncd/plumbing v0.0.0-20191216083742-847dcf196de9/go.mod h1:QZHgU07PRBTRF6N57w4+ApRu8OgfYLFNqCDlfEZaD9Y= +github.com/tektoncd/plumbing/pipelinerun-logs v0.0.0-20191206114338-712d544c2c21/go.mod h1:S62EUWtqmejjJgUMOGB1CCCHRp6C706laH06BoALkzU= +github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk= github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= +github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tsenart/vegeta v12.7.1-0.20190725001342-b5f4fca92137+incompatible/go.mod h1:Smz/ZWfhKRcyDDChZkG3CyTHdj87lHzio/HOCkbndXM= github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM= github.com/ugorji/go v1.1.1/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ= +github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/ulikunitz/xz v0.5.6/go.mod h1:2bypXElzHzzJZwzH67Y6wb67pO62Rzfn7BSiF4ABRW8= +github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod 
h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA= github.com/urfave/cli v1.18.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA= +github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA= github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw= github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= +github.com/vdemeester/k8s-pkg-credentialprovider v0.0.0-20200107171650-7c61ffa44238/go.mod h1:JwQJCMWpUDqjZrB5jpw0f5VbN7U95zxFy1ZDpoEarGo= +github.com/vdemeester/k8s-pkg-credentialprovider v1.13.12-1/go.mod h1:Fko0rTxEtDW2kju5Ky7yFJNS3IcNvW8IPsp4/e9oev0= +github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw= +github.com/vmware/govmomi v0.20.3/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU= +github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4= +github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I= +github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y= +github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= +github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= +github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs= +github.com/xeipuuv/gojsonschema v1.1.0/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs= +github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8/go.mod h1:HUYIGzjTL3rfEspMxjDjgmT5uz5wzYJKVo23qUhYTos= github.com/xiang90/probing v0.0.0-20160813154853-07dd2e8dfe18/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= +github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xlab/handysort v0.0.0-20150421192137-fb3537ed64a1/go.mod h1:QcJo0QPSfTONNIgpN5RA8prR7fF8nkF6cTWTcNerRO8= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs= +github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA= +github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg= go.etcd.io/bbolt v1.3.1-etcd.7/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= +go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/etcd v0.0.0-20181031231232-83304cfc808c/go.mod h1:weASp41xM3dk0YHg1s/W8ecdGP5G4teSTMBPpYAaUgA= +go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg= +go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= +go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= +go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= go.opencensus.io v0.22.1 h1:8dP3SGL7MPB94crU3bEPplMPe83FI4EouesJUeFHv50= go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA= go.opentelemetry.io/otel v0.2.3/go.mod h1:OgNpQOjrlt33Ew6Ds0mGjmcTQg/rhUctsbkRdk/g1fw= @@ -612,28 +922,46 
@@ go.uber.org/atomic v0.0.0-20181018215023-8dc6146f7569/go.mod h1:gD2HeocX3+yG+ygL go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= go.uber.org/atomic v1.4.0 h1:cxzIVoETapQEqDhQu3QfnvXAV4AlzcvUCxkVUFw3+EU= go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= +go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk= +go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ= go.uber.org/multierr v0.0.0-20180122172545-ddea229ff1df/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/multierr v1.2.0 h1:6I+W7f5VwC5SV9dNrZ3qXrDB9mD0dyGOi/ZJmYw03T4= go.uber.org/multierr v1.2.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= +go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A= +go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU= +go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4= +go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA= go.uber.org/zap v1.9.2-0.20180814183419-67bc79d13d15 h1:0yi2i4dLbYtFqehls82kppDGdgRCYhsK1XaO0dOQRSg= go.uber.org/zap v1.9.2-0.20180814183419-67bc79d13d15/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= +gocloud.dev v0.19.0/go.mod h1:SmKwiR8YwIMMJvQBKLsC3fHNyMwXLw3PMDO+VVteJMI= +golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180608092829-8ac0e0d97ce4/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181015023909-0c41d7ab0a0e/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190404164418-38d8ce5564a5/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE= golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod 
h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20191205180655-e7c4368fe9dd/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200204104054-c9f3fb736b72/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200206161412-a0c6ece9d31a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200317142112-1b76d66859c6 h1:TjszyFsQsyZNHwdVdZ5m7bjmreu0znc2kRYsEml9/Ww= golang.org/x/crypto v0.0.0-20200317142112-1b76d66859c6/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= @@ -642,7 +970,9 @@ golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190312203227-4b39c73a6495/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= +golang.org/x/exp v0.0.0-20190731235908-ec7cb31e5a56/go.mod h1:JhuoJpWY28nO4Vef9tZUw9qufEGTyX1+7lmHxV5q5G4= golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek= +golang.org/x/exp v0.0.0-20191002040644-a1355ae1e2c3/go.mod h1:NOZ3BPKG0ec/BKJQgnvsSFpcKLM5xXVWnvZS97DWHgE= golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY= golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= @@ -656,6 +986,7 @@ golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f h1:J5lckAjkw6qYlOZNj90mLYNT golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= +golang.org/x/mobile v0.0.0-20190806162312-597adff16ade/go.mod h1:AlhUtkH4DA4asiFC5RgK7ZKmauvtkAVcy9L0epCzlWo= golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY= golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= @@ -687,10 +1018,13 @@ gonum.org/v1/netlib v0.0.0-20181029234149-ec6d1f5cefe6/go.mod h1:wa6Ws7BG/ESfp6d gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw= gonum.org/v1/netlib v0.0.0-20190331212654-76723241ea4e h1:jRyg0XfpwWlhEV8mDfdNGBeSJM2fuyh9Yjrnd8kF2Ts= gonum.org/v1/netlib v0.0.0-20190331212654-76723241ea4e/go.mod 
h1:kS+toOQn6AQKjmKJ7gzohV1XkqsFehRA2FbsbkopSuQ= +google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= google.golang.org/api v0.0.0-20181021000519-a2651947f503/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk= google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= +google.golang.org/api v0.5.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4= +google.golang.org/api v0.6.1-0.20190607001116-5213b8090861/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4= google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M= google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= @@ -713,6 +1047,7 @@ google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM= google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= +google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk= google.golang.org/genproto v0.0.0-20170731182057-09f6ed296fc6/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180608181217-32ee49c4dd80/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= @@ -721,7 +1056,10 @@ google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRn google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= +google.golang.org/genproto v0.0.0-20190508193815-b515fa19cec8/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s= +google.golang.org/genproto v0.0.0-20190620144150-6af8c5fc6601/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s= +google.golang.org/genproto v0.0.0-20190708153700-3bdd9d9f5532/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s= google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= @@ -748,6 +1086,7 @@ google.golang.org/genproto v0.0.0-20200326112834-f447254575fd/go.mod h1:55QSHmfG google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84 h1:pSLkPbrjnPyLDYUO2VM9mDLqo2V6CFBY84lFSZAfoi4= google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= +google.golang.org/grpc 
v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.13.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.15.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio= @@ -756,8 +1095,10 @@ google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZi google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= +google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA= @@ -779,6 +1120,7 @@ google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzi gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo= @@ -787,23 +1129,38 @@ gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qS gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= +gopkg.in/gcfg.v1 v1.2.0/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o= gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo= gopkg.in/inf.v0 v0.9.0/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/ini.v1 v1.46.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/ini.v1 v1.52.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/jcmturner/aescts.v1 v1.0.1/go.mod h1:nsR8qBOg+OucoIW+WMhB3GspUQXq9XorLnQb9XtvcOo= +gopkg.in/jcmturner/dnsutils.v1 v1.0.1/go.mod h1:m3v+5svpVOhtFAP/wSz+yzh4Mc0Fg7eRhxkJMWSIz9Q= +gopkg.in/jcmturner/gokrb5.v7 v7.2.3/go.mod h1:l8VISx+WGYp+Fp7KRbsiUuXTTOnxIc3Tuvyavf11/WM= +gopkg.in/jcmturner/gokrb5.v7 v7.3.0/go.mod h1:l8VISx+WGYp+Fp7KRbsiUuXTTOnxIc3Tuvyavf11/WM= +gopkg.in/jcmturner/rpc.v1 v1.1.0/go.mod 
h1:YIdkC4XfD6GXbzje11McwsDuOlZQSb9W4vfLvuNnlv8= gopkg.in/natefinch/lumberjack.v2 v2.0.0-20150622162204-20b71e5b60d7/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= gopkg.in/robfig/cron.v2 v2.0.0-20150107220207-be2e0b0deed5/go.mod h1:hiOFpYm0ZJbusNj2ywpbrXowU3G8U6GIQzqn2mw1UIE= gopkg.in/square/go-jose.v2 v2.0.0-20180411045311-89060dee6a84/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= +gopkg.in/src-d/go-billy.v4 v4.3.2/go.mod h1:nDjArDMp+XMs1aFAESLRjfGSgfvoYN0hDfzEk0GjC98= +gopkg.in/src-d/go-git-fixtures.v3 v3.5.0/go.mod h1:dLBcvytrw/TYZsNTWCnkNF2DSIlzWYqTe3rJR56Ac7g= +gopkg.in/src-d/go-git.v4 v4.13.1/go.mod h1:nx5NYcxdKxq5fpltdHnPa2Exj4Sx0EclMWZQbYDu2z8= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= +gopkg.in/warnings.v0 v0.1.1/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= +gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= gopkg.in/yaml.v1 v1.0.0-20140924161607-9f9df34309c0/go.mod h1:WDnlLJ4WF5VGsH/HVa3CI79GS0ol3YnhVnKP89i0kNg= gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v3 v3.0.0-20190709130402-674ba3eaed22/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= +helm.sh/helm/v3 v3.1.1/go.mod h1:WYsFJuMASa/4XUqLyv54s0U/f3mlAaRErGmyy4z921g= honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM= honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= istio.io/api v0.0.0-20200227213531-891bf31f3c32 h1:Kbv2woZ1iw8s4JSz/WgcLEUVmBjQDneL/EB3k5jhU+A= @@ -818,17 +1175,28 @@ k8s.io/api v0.16.4/go.mod h1:AtzMnsR45tccQss5q8RnF+W8L81DH6XwXwo/joEx9u0= k8s.io/apiextensions-apiserver v0.0.0-20190918201827-3de75813f604/go.mod h1:7H8sjDlWQu89yWB3FhZfsLyRCRLuoXoCoY5qtwW1q6I= k8s.io/apiextensions-apiserver v0.16.4 h1:QaDkeAe13GDPT9JZT2VwvllZs4b3wGLuh1AdbAhclOo= k8s.io/apiextensions-apiserver v0.16.4/go.mod h1:HYQwjujEkXmQNhap2C9YDdIVOSskGZ3et0Mvjcyjbto= +k8s.io/apiextensions-apiserver v0.17.2 h1:cP579D2hSZNuO/rZj9XFRzwJNYb41DbNANJb6Kolpss= +k8s.io/apiextensions-apiserver v0.17.2/go.mod h1:4KdMpjkEjjDI2pPfBA15OscyNldHWdBCfsWMDWAmSTs= k8s.io/apimachinery v0.16.5-beta.1 h1:xfpLJfATeAyqr8BIb96RYGHu2j0LpjU36Fp28xoFC90= k8s.io/apimachinery v0.16.5-beta.1/go.mod h1:llRdnznGEAqC3DcNm6yEj472xaFVfLM7hnYofMb12tQ= k8s.io/apiserver v0.0.0-20190918200908-1e17798da8c1/go.mod h1:4FuDU+iKPjdsdQSN3GsEKZLB/feQsj1y9dhhBDVV2Ns= k8s.io/apiserver v0.16.4 h1:BbvSsgra871cSu3WG1tqZkxBeWB1W1UYNL1tm6gtBgw= k8s.io/apiserver v0.16.4/go.mod h1:kbLJOak655g6W7C+muqu1F76u9wnEycfKMqbVaXIdAc= +k8s.io/apiserver v0.17.0/go.mod h1:ABM+9x/prjINN6iiffRVNCBR2Wk7uY4z+EtEGZD48cg= +k8s.io/apiserver v0.17.2 h1:NssVvPALll6SSeNgo1Wk1h2myU1UHNwmhxV0Oxbcl8Y= +k8s.io/apiserver v0.17.2/go.mod h1:lBmw/TtQdtxvrTk0e2cgtOxHizXI+d0mmGQURIHQZlo= +k8s.io/cli-runtime v0.17.2/go.mod h1:aa8t9ziyQdbkuizkNLAw3qe3srSyWh9zlSB7zTqRNPI= +k8s.io/cli-runtime v0.17.3/go.mod h1:X7idckYphH4SZflgNpOOViSxetiMj6xI0viMAjM81TA= k8s.io/client-go 
v0.16.4 h1:sf+FEZXYhJNjpTZapQDLvvN+0kBeUTxCYxlXcVdhv2E= k8s.io/client-go v0.16.4/go.mod h1:ZgxhFDxSnoKY0J0U2/Y1C8obKDdlhGPZwA7oHH863Ok= +k8s.io/cloud-provider v0.17.0/go.mod h1:Ze4c3w2C0bRsjkBUoHpFi+qWe3ob1wI2/7cUn+YQIDE= k8s.io/code-generator v0.16.5-beta.1 h1:+zWxMQH3a6fd8lZe6utWyW/V7nmG2ZMXwtovSJI2p+0= k8s.io/code-generator v0.16.5-beta.1/go.mod h1:mJUgkl06XV4kstAnLHAIzJPVCOzVR+ZcfPIv4fUsFCY= k8s.io/component-base v0.0.0-20190918200425-ed2f0867c778/go.mod h1:DFWQCXgXVLiWtzFaS17KxHdlUeUymP7FLxZSkmL9/jU= k8s.io/component-base v0.16.4/go.mod h1:GYQ+4hlkEwdlpAp59Ztc4gYuFhdoZqiAJD1unYDJ3FM= +k8s.io/component-base v0.17.0/go.mod h1:rKuRAokNMY2nn2A6LP/MiwpoaMRHpfRnrPaUJJj1Yoc= +k8s.io/component-base v0.17.2/go.mod h1:zMPW3g5aH7cHJpKYQ/ZsGMcgbsA/VyhEugF3QT1awLs= +k8s.io/csi-translation-lib v0.17.0/go.mod h1:HEF7MEz7pOLJCnxabi45IPkhSsE/KmxPQksuCrHKWls= k8s.io/gengo v0.0.0-20190907103519-ebc107f98eab h1:j4L8spMe0tFfBvvW6lrc0c+Ql8+nnkcV3RYfi3eSwGY= k8s.io/gengo v0.0.0-20190907103519-ebc107f98eab/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk= @@ -840,10 +1208,15 @@ k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8= k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I= k8s.io/kube-openapi v0.0.0-20190918143330-0270cf2f1c1d h1:Xpe6sK+RY4ZgCTyZ3y273UmFmURhjtoJiwOMbQsXitY= k8s.io/kube-openapi v0.0.0-20190918143330-0270cf2f1c1d/go.mod h1:1TqjTSzOxsLGIKfj0lK8EeCP7K1iUG65v09OM0/WG5E= +k8s.io/kubectl v0.17.2/go.mod h1:y4rfLV0n6aPmvbRCqZQjvOp3ezxsFgpqL+zF5jH/lxk= k8s.io/kubernetes v1.11.10/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk= +k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk= k8s.io/kubernetes v1.14.7/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk= +k8s.io/legacy-cloud-providers v0.17.0/go.mod h1:DdzaepJ3RtRy+e5YhNtrCYwlgyK87j/5+Yfp0L9Syp8= +k8s.io/metrics v0.17.2/go.mod h1:3TkNHET4ROd+NfzNxkjoVfQ0Ob4iZnaHmSEA4vYpwLw= k8s.io/test-infra v0.0.0-20181019233642-2e10a0bbe9b3/go.mod h1:2NzXB13Ji0nqpyublHeiPC4FZwU0TknfvyaaNfl/BTA= k8s.io/test-infra v0.0.0-20191212060232-70b0b49fe247/go.mod h1:d8SKryJBXAwfCFVL4wieRez47J2NOOAb9d029sWLseQ= +k8s.io/test-infra v0.0.0-20200407001919-bc7f71ef65b8/go.mod h1:/WpJWcaDvuykB322WXP4kJbX8IpalOzuPxA62GpwkJk= k8s.io/utils v0.0.0-20181019225348-5e321f9a457c/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0= k8s.io/utils v0.0.0-20190221042446-c2654d5206da/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0= k8s.io/utils v0.0.0-20190506122338-8fab8cb257d5/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= @@ -852,20 +1225,28 @@ k8s.io/utils v0.0.0-20190907131718-3d4f5b7dea0b/go.mod h1:sZAwmy6armz5eXlNoLmJcl k8s.io/utils v0.0.0-20191010214722-8d271d903fe4/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= k8s.io/utils v0.0.0-20191114184206-e782cd3c129f h1:GiPwtSzdP43eI1hpPCbROQCCIgCuiMMNF8YUVLF3vJo= k8s.io/utils v0.0.0-20191114184206-e782cd3c129f/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= +k8s.io/utils v0.0.0-20200124190032-861946025e34 h1:HjlUD6M0K3P8nRXmr2B9o4F9dUy9TCj/aEpReeyi6+k= +k8s.io/utils v0.0.0-20200124190032-861946025e34/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= +knative.dev/caching v0.0.0-20190719140829-2032732871ff/go.mod h1:dHXFU6CGlLlbzaWc32g80cR92iuBSpsslDNBWI8C7eg= knative.dev/eventing v0.14.1-0.20200501170243-0bb51bb8d62b h1:EZvcCOEJl0o82wqy7oSbSBH/V1S+nfakzxykgZDyr8Q= knative.dev/eventing 
v0.14.1-0.20200501170243-0bb51bb8d62b/go.mod h1:5unykJf/FwntjEXfRpspIimWz/TvPE527dFcHoS0MHU= +knative.dev/eventing-contrib v0.6.1-0.20190723221543-5ce18048c08b/go.mod h1:SnXZgSGgMSMLNFTwTnpaOH7hXDzTFtw0J8OmHflNx3g= knative.dev/pkg v0.0.0-20191101194912-56c2594e4f11 h1:w+AcPuGp389HAI5FDW9L0j7MQbxnU1RtZfLP7BMgNDI= knative.dev/pkg v0.0.0-20191101194912-56c2594e4f11/go.mod h1:pgODObA1dTyhNoFxPZTTjNWfx6F0aKsKzn+vaT9XO/Q= +knative.dev/pkg v0.0.0-20191111150521-6d806b998379/go.mod h1:pgODObA1dTyhNoFxPZTTjNWfx6F0aKsKzn+vaT9XO/Q= knative.dev/pkg v0.0.0-20200501005942-d980c0865972 h1:N/umsmNgROaU+fIziEBZ+L32OMpgwZRYW3VeHUPR8ZA= knative.dev/pkg v0.0.0-20200501005942-d980c0865972/go.mod h1:X4wmXb4xUR+1eDBoP6AeVfAqsyxl1yATnRdSgFdjhQw= -knative.dev/pkg v0.0.0-20200501164043-2e4e82aa49f1 h1:lBFP9v60PpDrSZGpKsWTpu41ufdI7P7LztWHeJDFx0s= -knative.dev/pkg v0.0.0-20200501164043-2e4e82aa49f1/go.mod h1:ZpqLEYvV5TtJE46JSZzZy+6aQl00Gk8Dy9nog770gb4= +knative.dev/pkg v0.0.0-20200504180943-4a2ba059b008/go.mod h1:1RvwKBbKqKYt5rgI4lfYdWCdtXgMxJY73QxPb3jZPC4= +knative.dev/pkg v0.0.0-20200506001744-478962f05e2b h1:SFCuEj+NeA8dVn4Ms1ymfF4FUru8jc7D56L17Co1qe8= +knative.dev/pkg v0.0.0-20200506001744-478962f05e2b/go.mod h1:9UQS6bJECqqFG0q9BPaATbcG78co0s9Q6Dzo/6mR4uI= knative.dev/serving v0.14.1-0.20200424135249-b16b68297056 h1:vkJSA9PBIrD10BrTiP+5RMAemgH/LOZbY8Tte+wvQ80= knative.dev/serving v0.14.1-0.20200424135249-b16b68297056/go.mod h1:x2n255JS2XBI39tmjZ8CwTxIf9EKNMCrkVuiOttLRm0= knative.dev/test-infra v0.0.0-20200429211942-f4c4853375cf h1:rNWg3NiXNLjZC9C1EJf2qKA+mRnrWMLW1KONsEusLYg= knative.dev/test-infra v0.0.0-20200429211942-f4c4853375cf/go.mod h1:xcdUkMJrLlBswIZqL5zCuBFOC22WIPMQoVX1L35i0vQ= knative.dev/test-infra v0.0.0-20200430225942-f7c1fafc1cde h1:QSzxFsf21WXNhODvh0jRKbFR+c5UI7WFjiISy/sMOLg= knative.dev/test-infra v0.0.0-20200430225942-f7c1fafc1cde/go.mod h1:xcdUkMJrLlBswIZqL5zCuBFOC22WIPMQoVX1L35i0vQ= +knative.dev/test-infra v0.0.0-20200505192244-75864c82db21 h1:SsvqMKpvrn7cl7UqRUIT90SXDowHzpzHwHaTu+wN70s= +knative.dev/test-infra v0.0.0-20200505192244-75864c82db21/go.mod h1:AqweEMgaMbb2xmYq9ZOPsH/lQ61qNx2XGr5tGltj5QU= modernc.org/cc v1.0.0/go.mod h1:1Sk4//wdnYJiUIxnW8ddKpaOJCF37yAdqYnkxUpaYxw= modernc.org/golex v1.0.0/go.mod h1:b/QX9oBD/LhixY6NDh+IdGv17hgB+51fET1i2kPSmvk= modernc.org/mathutil v1.0.0/go.mod h1:wU0vUrJsVWBZ4P6e7xtFJEhFSNsfRLJ8H458uRjg03k= @@ -873,14 +1254,20 @@ modernc.org/strutil v1.0.0/go.mod h1:lstksw84oURvj9y3tn8lGvRxyRC1S2+g5uuIzNfIOBs modernc.org/xc v1.0.0/go.mod h1:mRNCo0bvLjGhHO9WsyuKVU4q0ceiDDDoEeWDJHrNx8I= mvdan.cc/xurls/v2 v2.0.0/go.mod h1:2/webFPYOXN9jp/lzuj0zuAVlF+9g4KPFJANH1oJhRU= pack.ag/amqp v0.11.0/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4= +pack.ag/amqp v0.11.2/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= +rsc.io/letsencrypt v0.0.3/go.mod h1:buyQKZ6IXrRnB7TdkHP0RyEybLx18HHyOSoTyoOLqNY= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= sigs.k8s.io/controller-runtime v0.3.0/go.mod h1:Cw6PkEg0Sa7dAYovGT4R0tRkGhHXpYijwNxYhAnAZZk= +sigs.k8s.io/controller-runtime v0.5.0/go.mod h1:REiJzC7Y00U+2YkMbT8wxgrsX5USpXKGhb2sCtAXiT8= +sigs.k8s.io/kustomize v2.0.3+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU= sigs.k8s.io/structured-merge-diff v0.0.0-20190302045857-e85c7b244fd2/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI= sigs.k8s.io/structured-merge-diff 
v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI= +sigs.k8s.io/structured-merge-diff v1.0.1-0.20191108220359-b1b620dd3f06/go.mod h1:/ULNhyfzRopfcjskuui0cTITekDduZ7ycKN3oUT9R18= sigs.k8s.io/structured-merge-diff v1.0.1/go.mod h1:IIgPezJWb76P0hotTxzDbWsMYB8APh18qZnxkomBpxA= sigs.k8s.io/testing_frameworks v0.1.1/go.mod h1:VVBKrHmJ6Ekkfz284YKhQePcdycOzNH9qL6ht1zEr/U= sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs= sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o= +vbom.ml/util v0.0.0-20160121211510-db5cfe13f5cc/go.mod h1:so/NYdZXCz+E3ZpW0uAoCj6uzU2+8OWDFv/HxUSs7kI= vbom.ml/util v0.0.0-20180919145318-efcd4e0f9787/go.mod h1:so/NYdZXCz+E3ZpW0uAoCj6uzU2+8OWDFv/HxUSs7kI= diff --git a/pkg/client/injection/reconciler/broker/v1beta1/broker/controller.go b/pkg/client/injection/reconciler/broker/v1beta1/broker/controller.go index be3d83bb1d..ae870e33e7 100644 --- a/pkg/client/injection/reconciler/broker/v1beta1/broker/controller.go +++ b/pkg/client/injection/reconciler/broker/v1beta1/broker/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" broker "github.com/google/knative-gcp/pkg/client/injection/informers/broker/v1beta1/broker" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -59,29 +59,9 @@ func NewImpl(ctx context.Context, r Interface, classValue string, optionsFns ... brokerInformer := broker.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: brokerInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, classValue: classValue, @@ -91,6 +71,7 @@ func NewImpl(ctx context.Context, r Interface, classValue string, optionsFns ... queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -101,11 +82,41 @@ func NewImpl(ctx context.Context, r Interface, classValue string, optionsFns ... 
if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/broker/v1beta1/trigger/controller.go b/pkg/client/injection/reconciler/broker/v1beta1/trigger/controller.go index ba2f5a6749..f58e0168f7 100644 --- a/pkg/client/injection/reconciler/broker/v1beta1/trigger/controller.go +++ b/pkg/client/injection/reconciler/broker/v1beta1/trigger/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" trigger "github.com/google/knative-gcp/pkg/client/injection/informers/broker/v1beta1/trigger" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF triggerInformer := trigger.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: triggerInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. 
for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1alpha1/cloudauditlogssource/controller.go b/pkg/client/injection/reconciler/events/v1alpha1/cloudauditlogssource/controller.go index deea132692..5dd7c5331c 100644 --- a/pkg/client/injection/reconciler/events/v1alpha1/cloudauditlogssource/controller.go +++ b/pkg/client/injection/reconciler/events/v1alpha1/cloudauditlogssource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudauditlogssource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1alpha1/cloudauditlogssource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudauditlogssourceInformer := cloudauditlogssource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudauditlogssourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName 
:= defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1alpha1/cloudbuildsource/controller.go b/pkg/client/injection/reconciler/events/v1alpha1/cloudbuildsource/controller.go index d0f7159f69..36f242947f 100644 --- a/pkg/client/injection/reconciler/events/v1alpha1/cloudbuildsource/controller.go +++ b/pkg/client/injection/reconciler/events/v1alpha1/cloudbuildsource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudbuildsource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1alpha1/cloudbuildsource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudbuildsourceInformer := cloudbuildsource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudbuildsourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := 
controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1alpha1/cloudpubsubsource/controller.go b/pkg/client/injection/reconciler/events/v1alpha1/cloudpubsubsource/controller.go index 966cd33d7c..249870d3ad 100644 --- a/pkg/client/injection/reconciler/events/v1alpha1/cloudpubsubsource/controller.go +++ b/pkg/client/injection/reconciler/events/v1alpha1/cloudpubsubsource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudpubsubsource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1alpha1/cloudpubsubsource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudpubsubsourceInformer := cloudpubsubsource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudpubsubsourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", 
strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1alpha1/cloudschedulersource/controller.go b/pkg/client/injection/reconciler/events/v1alpha1/cloudschedulersource/controller.go index 5f43d94433..d2a46f62d1 100644 --- a/pkg/client/injection/reconciler/events/v1alpha1/cloudschedulersource/controller.go +++ b/pkg/client/injection/reconciler/events/v1alpha1/cloudschedulersource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudschedulersource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1alpha1/cloudschedulersource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudschedulersourceInformer := cloudschedulersource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudschedulersourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx 
context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1alpha1/cloudstoragesource/controller.go b/pkg/client/injection/reconciler/events/v1alpha1/cloudstoragesource/controller.go index a6fd68d60e..a10c10f07a 100644 --- a/pkg/client/injection/reconciler/events/v1alpha1/cloudstoragesource/controller.go +++ b/pkg/client/injection/reconciler/events/v1alpha1/cloudstoragesource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudstoragesource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1alpha1/cloudstoragesource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudstoragesourceInformer := cloudstoragesource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudstoragesourceInformer.Lister(), - Recorder: recorder, reconciler: r, 
finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1beta1/cloudauditlogssource/controller.go b/pkg/client/injection/reconciler/events/v1beta1/cloudauditlogssource/controller.go index 092e3e5726..c0c1c814da 100644 --- a/pkg/client/injection/reconciler/events/v1beta1/cloudauditlogssource/controller.go +++ b/pkg/client/injection/reconciler/events/v1beta1/cloudauditlogssource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudauditlogssource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1beta1/cloudauditlogssource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudauditlogssourceInformer := cloudauditlogssource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), 
Lister: cloudauditlogssourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1beta1/cloudpubsubsource/controller.go b/pkg/client/injection/reconciler/events/v1beta1/cloudpubsubsource/controller.go index 32873f3aa6..097fe72632 100644 --- a/pkg/client/injection/reconciler/events/v1beta1/cloudpubsubsource/controller.go +++ b/pkg/client/injection/reconciler/events/v1beta1/cloudpubsubsource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudpubsubsource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1beta1/cloudpubsubsource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudpubsubsourceInformer := cloudpubsubsource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - 
Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudpubsubsourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1beta1/cloudschedulersource/controller.go b/pkg/client/injection/reconciler/events/v1beta1/cloudschedulersource/controller.go index 88eca96e9d..2a2b840ef7 100644 --- a/pkg/client/injection/reconciler/events/v1beta1/cloudschedulersource/controller.go +++ b/pkg/client/injection/reconciler/events/v1beta1/cloudschedulersource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudschedulersource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1beta1/cloudschedulersource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudschedulersourceInformer := cloudschedulersource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - 
for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudschedulersourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/events/v1beta1/cloudstoragesource/controller.go b/pkg/client/injection/reconciler/events/v1beta1/cloudstoragesource/controller.go index 014f51781f..c035c5734a 100644 --- a/pkg/client/injection/reconciler/events/v1beta1/cloudstoragesource/controller.go +++ b/pkg/client/injection/reconciler/events/v1beta1/cloudstoragesource/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" cloudstoragesource "github.com/google/knative-gcp/pkg/client/injection/informers/events/v1beta1/cloudstoragesource" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF cloudstoragesourceInformer := cloudstoragesource.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, 
corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: cloudstoragesourceInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/intevents/v1alpha1/brokercell/controller.go b/pkg/client/injection/reconciler/intevents/v1alpha1/brokercell/controller.go index 888b9e228e..4561ce3623 100644 --- a/pkg/client/injection/reconciler/intevents/v1alpha1/brokercell/controller.go +++ b/pkg/client/injection/reconciler/intevents/v1alpha1/brokercell/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" brokercell "github.com/google/knative-gcp/pkg/client/injection/informers/intevents/v1alpha1/brokercell" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF brokercellInformer := brokercell.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = 
eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: brokercellInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/intevents/v1alpha1/pullsubscription/controller.go b/pkg/client/injection/reconciler/intevents/v1alpha1/pullsubscription/controller.go index e191b6770e..3f2ba16ccd 100644 --- a/pkg/client/injection/reconciler/intevents/v1alpha1/pullsubscription/controller.go +++ b/pkg/client/injection/reconciler/intevents/v1alpha1/pullsubscription/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" pullsubscription "github.com/google/knative-gcp/pkg/client/injection/informers/intevents/v1alpha1/pullsubscription" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF pullsubscriptionInformer := pullsubscription.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - 
&v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: pullsubscriptionInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/intevents/v1alpha1/topic/controller.go b/pkg/client/injection/reconciler/intevents/v1alpha1/topic/controller.go index 5a7e0f4826..38a5387978 100644 --- a/pkg/client/injection/reconciler/intevents/v1alpha1/topic/controller.go +++ b/pkg/client/injection/reconciler/intevents/v1alpha1/topic/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" topic "github.com/google/knative-gcp/pkg/client/injection/informers/intevents/v1alpha1/topic" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF topicInformer := topic.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - 
&v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: topicInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/messaging/v1alpha1/channel/controller.go b/pkg/client/injection/reconciler/messaging/v1alpha1/channel/controller.go index 4eb2c751fe..7fe4cbe988 100644 --- a/pkg/client/injection/reconciler/messaging/v1alpha1/channel/controller.go +++ b/pkg/client/injection/reconciler/messaging/v1alpha1/channel/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" channel "github.com/google/knative-gcp/pkg/client/injection/informers/messaging/v1alpha1/channel" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF channelInformer := channel.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - 
&v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: channelInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/messaging/v1beta1/channel/controller.go b/pkg/client/injection/reconciler/messaging/v1beta1/channel/controller.go index 021bf61ca9..e5baadc736 100644 --- a/pkg/client/injection/reconciler/messaging/v1beta1/channel/controller.go +++ b/pkg/client/injection/reconciler/messaging/v1beta1/channel/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" channel "github.com/google/knative-gcp/pkg/client/injection/informers/messaging/v1beta1/channel" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF channelInformer := channel.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - 
&v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: channelInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/policy/v1alpha1/eventpolicybinding/controller.go b/pkg/client/injection/reconciler/policy/v1alpha1/eventpolicybinding/controller.go index 909aa43f8f..778fd28881 100644 --- a/pkg/client/injection/reconciler/policy/v1alpha1/eventpolicybinding/controller.go +++ b/pkg/client/injection/reconciler/policy/v1alpha1/eventpolicybinding/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" eventpolicybinding "github.com/google/knative-gcp/pkg/client/injection/informers/policy/v1alpha1/eventpolicybinding" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF eventpolicybindingInformer := eventpolicybinding.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - 
eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: eventpolicybindingInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/policy/v1alpha1/httppolicybinding/controller.go b/pkg/client/injection/reconciler/policy/v1alpha1/httppolicybinding/controller.go index 6a4c52b4d9..7c37dfe126 100644 --- a/pkg/client/injection/reconciler/policy/v1alpha1/httppolicybinding/controller.go +++ b/pkg/client/injection/reconciler/policy/v1alpha1/httppolicybinding/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" httppolicybinding "github.com/google/knative-gcp/pkg/client/injection/informers/policy/v1alpha1/httppolicybinding" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF httppolicybindingInformer := httppolicybinding.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster 
:= record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: httppolicybindingInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/pubsub/v1alpha1/pullsubscription/controller.go b/pkg/client/injection/reconciler/pubsub/v1alpha1/pullsubscription/controller.go index c02c260b6a..2b7dd1623b 100644 --- a/pkg/client/injection/reconciler/pubsub/v1alpha1/pullsubscription/controller.go +++ b/pkg/client/injection/reconciler/pubsub/v1alpha1/pullsubscription/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" pullsubscription "github.com/google/knative-gcp/pkg/client/injection/informers/pubsub/v1alpha1/pullsubscription" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF pullsubscriptionInformer := pullsubscription.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - 
logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: pullsubscriptionInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/pubsub/v1alpha1/topic/controller.go b/pkg/client/injection/reconciler/pubsub/v1alpha1/topic/controller.go index 8dfe6f212b..21d57317b9 100644 --- a/pkg/client/injection/reconciler/pubsub/v1alpha1/topic/controller.go +++ b/pkg/client/injection/reconciler/pubsub/v1alpha1/topic/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" topic "github.com/google/knative-gcp/pkg/client/injection/informers/pubsub/v1alpha1/topic" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF topicInformer := topic.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event 
broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: topicInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/pubsub/v1beta1/pullsubscription/controller.go b/pkg/client/injection/reconciler/pubsub/v1beta1/pullsubscription/controller.go index aeaead6b5b..91339628d0 100644 --- a/pkg/client/injection/reconciler/pubsub/v1beta1/pullsubscription/controller.go +++ b/pkg/client/injection/reconciler/pubsub/v1beta1/pullsubscription/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" pullsubscription "github.com/google/knative-gcp/pkg/client/injection/informers/pubsub/v1beta1/pullsubscription" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF pullsubscriptionInformer := pullsubscription.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event 
broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: pullsubscriptionInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/pkg/client/injection/reconciler/pubsub/v1beta1/topic/controller.go b/pkg/client/injection/reconciler/pubsub/v1beta1/topic/controller.go index 5b86dd9f42..06128ce02d 100644 --- a/pkg/client/injection/reconciler/pubsub/v1beta1/topic/controller.go +++ b/pkg/client/injection/reconciler/pubsub/v1beta1/topic/controller.go @@ -25,14 +25,14 @@ import ( strings "strings" versionedscheme "github.com/google/knative-gcp/pkg/client/clientset/versioned/scheme" - injectionclient "github.com/google/knative-gcp/pkg/client/injection/client" + client "github.com/google/knative-gcp/pkg/client/injection/client" topic "github.com/google/knative-gcp/pkg/client/injection/informers/pubsub/v1beta1/topic" corev1 "k8s.io/api/core/v1" watch "k8s.io/apimachinery/pkg/watch" scheme "k8s.io/client-go/kubernetes/scheme" v1 "k8s.io/client-go/kubernetes/typed/core/v1" record "k8s.io/client-go/tools/record" - client "knative.dev/pkg/client/injection/kube/client" + kubeclient "knative.dev/pkg/client/injection/kube/client" controller "knative.dev/pkg/controller" logging "knative.dev/pkg/logging" ) @@ -56,29 +56,9 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF topicInformer := topic.Get(ctx) - recorder := controller.GetEventRecorder(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event 
broadcaster") - eventBroadcaster := record.NewBroadcaster() - watches := []watch.Interface{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &v1.EventSinkImpl{Interface: client.Get(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ - Client: injectionclient.Get(ctx), + Client: client.Get(ctx), Lister: topicInformer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, } @@ -87,6 +67,7 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF queueName := fmt.Sprintf("%s.%s", strings.ReplaceAll(t.PkgPath(), "/", "-"), t.Name()) impl := controller.NewImpl(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. for _, fn := range optionsFns { @@ -97,11 +78,41 @@ func NewImpl(ctx context.Context, r Interface, optionsFns ...controller.OptionsF if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := logging.FromContext(ctx) + + recorder := controller.GetEventRecorder(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := record.NewBroadcaster() + watches := []watch.Interface{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &v1.EventSinkImpl{Interface: kubeclient.Get(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { versionedscheme.AddToScheme(scheme.Scheme) } diff --git a/third_party/VENDOR-LICENSE/github.com/cespare/xxhash/v2/LICENSE.txt b/third_party/VENDOR-LICENSE/github.com/cespare/xxhash/v2/LICENSE.txt new file mode 100644 index 0000000000..24b53065f4 --- /dev/null +++ b/third_party/VENDOR-LICENSE/github.com/cespare/xxhash/v2/LICENSE.txt @@ -0,0 +1,22 @@ +Copyright (c) 2016 Caleb Spare + +MIT License + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/third_party/VENDOR-LICENSE/github.com/hashicorp/golang-lru/lru.go b/third_party/VENDOR-LICENSE/github.com/hashicorp/golang-lru/lru.go index 052a38b4c4..4e5e9d8fd0 100644 --- a/third_party/VENDOR-LICENSE/github.com/hashicorp/golang-lru/lru.go +++ b/third_party/VENDOR-LICENSE/github.com/hashicorp/golang-lru/lru.go @@ -37,7 +37,7 @@ func (c *Cache) Purge() { c.lock.Unlock() } -// Add adds a value to the cache. Returns true if an eviction occurred. +// Add adds a value to the cache. Returns true if an eviction occurred. func (c *Cache) Add(key, value interface{}) (evicted bool) { c.lock.Lock() evicted = c.lru.Add(key, value) @@ -71,8 +71,8 @@ func (c *Cache) Peek(key interface{}) (value interface{}, ok bool) { return value, ok } -// ContainsOrAdd checks if a key is in the cache without updating the -// recent-ness or deleting it for being stale, and if not, adds the value. +// ContainsOrAdd checks if a key is in the cache without updating the +// recent-ness or deleting it for being stale, and if not, adds the value. // Returns whether found and whether an eviction occurred. func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evicted bool) { c.lock.Lock() @@ -85,6 +85,22 @@ func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evicted bool) { return false, evicted } +// PeekOrAdd checks if a key is in the cache without updating the +// recent-ness or deleting it for being stale, and if not, adds the value. +// Returns whether found and whether an eviction occurred. +func (c *Cache) PeekOrAdd(key, value interface{}) (previous interface{}, ok, evicted bool) { + c.lock.Lock() + defer c.lock.Unlock() + + previous, ok = c.lru.Peek(key) + if ok { + return previous, true, false + } + + evicted = c.lru.Add(key, value) + return nil, false, evicted +} + // Remove removes the provided key from the cache. func (c *Cache) Remove(key interface{}) (present bool) { c.lock.Lock() diff --git a/vendor/github.com/cespare/xxhash/v2/.travis.yml b/vendor/github.com/cespare/xxhash/v2/.travis.yml new file mode 100644 index 0000000000..c516ea88da --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/.travis.yml @@ -0,0 +1,8 @@ +language: go +go: + - "1.x" + - master +env: + - TAGS="" + - TAGS="-tags purego" +script: go test $TAGS -v ./... diff --git a/vendor/github.com/cespare/xxhash/v2/LICENSE.txt b/vendor/github.com/cespare/xxhash/v2/LICENSE.txt new file mode 100644 index 0000000000..24b53065f4 --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/LICENSE.txt @@ -0,0 +1,22 @@ +Copyright (c) 2016 Caleb Spare + +MIT License + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/cespare/xxhash/v2/README.md b/vendor/github.com/cespare/xxhash/v2/README.md new file mode 100644 index 0000000000..2fd8693c21 --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/README.md @@ -0,0 +1,67 @@ +# xxhash + +[![GoDoc](https://godoc.org/github.com/cespare/xxhash?status.svg)](https://godoc.org/github.com/cespare/xxhash) +[![Build Status](https://travis-ci.org/cespare/xxhash.svg?branch=master)](https://travis-ci.org/cespare/xxhash) + +xxhash is a Go implementation of the 64-bit +[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a +high-quality hashing algorithm that is much faster than anything in the Go +standard library. + +This package provides a straightforward API: + +``` +func Sum64(b []byte) uint64 +func Sum64String(s string) uint64 +type Digest struct{ ... } + func New() *Digest +``` + +The `Digest` type implements hash.Hash64. Its key methods are: + +``` +func (*Digest) Write([]byte) (int, error) +func (*Digest) WriteString(string) (int, error) +func (*Digest) Sum64() uint64 +``` + +This implementation provides a fast pure-Go implementation and an even faster +assembly implementation for amd64. + +## Compatibility + +This package is in a module and the latest code is in version 2 of the module. +You need a version of Go with at least "minimal module compatibility" to use +github.com/cespare/xxhash/v2: + +* 1.9.7+ for Go 1.9 +* 1.10.3+ for Go 1.10 +* Go 1.11 or later + +I recommend using the latest release of Go. + +## Benchmarks + +Here are some quick benchmarks comparing the pure-Go and assembly +implementations of Sum64. + +| input size | purego | asm | +| --- | --- | --- | +| 5 B | 979.66 MB/s | 1291.17 MB/s | +| 100 B | 7475.26 MB/s | 7973.40 MB/s | +| 4 KB | 17573.46 MB/s | 17602.65 MB/s | +| 10 MB | 17131.46 MB/s | 17142.16 MB/s | + +These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using +the following commands under Go 1.11.2: + +``` +$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes' +$ go test -benchtime 10s -bench '/xxhash,direct,bytes' +``` + +## Projects using this package + +- [InfluxDB](https://github.com/influxdata/influxdb) +- [Prometheus](https://github.com/prometheus/prometheus) +- [FreeCache](https://github.com/coocood/freecache) diff --git a/vendor/github.com/cespare/xxhash/v2/go.mod b/vendor/github.com/cespare/xxhash/v2/go.mod new file mode 100644 index 0000000000..49f67608bf --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/go.mod @@ -0,0 +1,3 @@ +module github.com/cespare/xxhash/v2 + +go 1.11 diff --git a/vendor/github.com/cespare/xxhash/v2/go.sum b/vendor/github.com/cespare/xxhash/v2/go.sum new file mode 100644 index 0000000000..e69de29bb2 diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash.go b/vendor/github.com/cespare/xxhash/v2/xxhash.go new file mode 100644 index 0000000000..db0b35fbe3 --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/xxhash.go @@ -0,0 +1,236 @@ +// Package xxhash implements the 64-bit variant of xxHash (XXH64) as described +// at http://cyan4973.github.io/xxHash/. 
+package xxhash + +import ( + "encoding/binary" + "errors" + "math/bits" +) + +const ( + prime1 uint64 = 11400714785074694791 + prime2 uint64 = 14029467366897019727 + prime3 uint64 = 1609587929392839161 + prime4 uint64 = 9650029242287828579 + prime5 uint64 = 2870177450012600261 +) + +// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where +// possible in the Go code is worth a small (but measurable) performance boost +// by avoiding some MOVQs. Vars are needed for the asm and also are useful for +// convenience in the Go code in a few places where we need to intentionally +// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the +// result overflows a uint64). +var ( + prime1v = prime1 + prime2v = prime2 + prime3v = prime3 + prime4v = prime4 + prime5v = prime5 +) + +// Digest implements hash.Hash64. +type Digest struct { + v1 uint64 + v2 uint64 + v3 uint64 + v4 uint64 + total uint64 + mem [32]byte + n int // how much of mem is used +} + +// New creates a new Digest that computes the 64-bit xxHash algorithm. +func New() *Digest { + var d Digest + d.Reset() + return &d +} + +// Reset clears the Digest's state so that it can be reused. +func (d *Digest) Reset() { + d.v1 = prime1v + prime2 + d.v2 = prime2 + d.v3 = 0 + d.v4 = -prime1v + d.total = 0 + d.n = 0 +} + +// Size always returns 8 bytes. +func (d *Digest) Size() int { return 8 } + +// BlockSize always returns 32 bytes. +func (d *Digest) BlockSize() int { return 32 } + +// Write adds more data to d. It always returns len(b), nil. +func (d *Digest) Write(b []byte) (n int, err error) { + n = len(b) + d.total += uint64(n) + + if d.n+n < 32 { + // This new data doesn't even fill the current block. + copy(d.mem[d.n:], b) + d.n += n + return + } + + if d.n > 0 { + // Finish off the partial block. + copy(d.mem[d.n:], b) + d.v1 = round(d.v1, u64(d.mem[0:8])) + d.v2 = round(d.v2, u64(d.mem[8:16])) + d.v3 = round(d.v3, u64(d.mem[16:24])) + d.v4 = round(d.v4, u64(d.mem[24:32])) + b = b[32-d.n:] + d.n = 0 + } + + if len(b) >= 32 { + // One or more full blocks left. + nw := writeBlocks(d, b) + b = b[nw:] + } + + // Store any remaining partial block. + copy(d.mem[:], b) + d.n = len(b) + + return +} + +// Sum appends the current hash to b and returns the resulting slice. +func (d *Digest) Sum(b []byte) []byte { + s := d.Sum64() + return append( + b, + byte(s>>56), + byte(s>>48), + byte(s>>40), + byte(s>>32), + byte(s>>24), + byte(s>>16), + byte(s>>8), + byte(s), + ) +} + +// Sum64 returns the current hash. +func (d *Digest) Sum64() uint64 { + var h uint64 + + if d.total >= 32 { + v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4 + h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4) + h = mergeRound(h, v1) + h = mergeRound(h, v2) + h = mergeRound(h, v3) + h = mergeRound(h, v4) + } else { + h = d.v3 + prime5 + } + + h += d.total + + i, end := 0, d.n + for ; i+8 <= end; i += 8 { + k1 := round(0, u64(d.mem[i:i+8])) + h ^= k1 + h = rol27(h)*prime1 + prime4 + } + if i+4 <= end { + h ^= uint64(u32(d.mem[i:i+4])) * prime1 + h = rol23(h)*prime2 + prime3 + i += 4 + } + for i < end { + h ^= uint64(d.mem[i]) * prime5 + h = rol11(h) * prime1 + i++ + } + + h ^= h >> 33 + h *= prime2 + h ^= h >> 29 + h *= prime3 + h ^= h >> 32 + + return h +} + +const ( + magic = "xxh\x06" + marshaledSize = len(magic) + 8*5 + 32 +) + +// MarshalBinary implements the encoding.BinaryMarshaler interface. +func (d *Digest) MarshalBinary() ([]byte, error) { + b := make([]byte, 0, marshaledSize) + b = append(b, magic...) 
+ b = appendUint64(b, d.v1) + b = appendUint64(b, d.v2) + b = appendUint64(b, d.v3) + b = appendUint64(b, d.v4) + b = appendUint64(b, d.total) + b = append(b, d.mem[:d.n]...) + b = b[:len(b)+len(d.mem)-d.n] + return b, nil +} + +// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. +func (d *Digest) UnmarshalBinary(b []byte) error { + if len(b) < len(magic) || string(b[:len(magic)]) != magic { + return errors.New("xxhash: invalid hash state identifier") + } + if len(b) != marshaledSize { + return errors.New("xxhash: invalid hash state size") + } + b = b[len(magic):] + b, d.v1 = consumeUint64(b) + b, d.v2 = consumeUint64(b) + b, d.v3 = consumeUint64(b) + b, d.v4 = consumeUint64(b) + b, d.total = consumeUint64(b) + copy(d.mem[:], b) + b = b[len(d.mem):] + d.n = int(d.total % uint64(len(d.mem))) + return nil +} + +func appendUint64(b []byte, x uint64) []byte { + var a [8]byte + binary.LittleEndian.PutUint64(a[:], x) + return append(b, a[:]...) +} + +func consumeUint64(b []byte) ([]byte, uint64) { + x := u64(b) + return b[8:], x +} + +func u64(b []byte) uint64 { return binary.LittleEndian.Uint64(b) } +func u32(b []byte) uint32 { return binary.LittleEndian.Uint32(b) } + +func round(acc, input uint64) uint64 { + acc += input * prime2 + acc = rol31(acc) + acc *= prime1 + return acc +} + +func mergeRound(acc, val uint64) uint64 { + val = round(0, val) + acc ^= val + acc = acc*prime1 + prime4 + return acc +} + +func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) } +func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) } +func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) } +func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) } +func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) } +func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) } +func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) } +func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) } diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go new file mode 100644 index 0000000000..ad14b807f4 --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go @@ -0,0 +1,13 @@ +// +build !appengine +// +build gc +// +build !purego + +package xxhash + +// Sum64 computes the 64-bit xxHash digest of b. +// +//go:noescape +func Sum64(b []byte) uint64 + +//go:noescape +func writeBlocks(d *Digest, b []byte) int diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s new file mode 100644 index 0000000000..d580e32aed --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s @@ -0,0 +1,215 @@ +// +build !appengine +// +build gc +// +build !purego + +#include "textflag.h" + +// Register allocation: +// AX h +// CX pointer to advance through b +// DX n +// BX loop end +// R8 v1, k1 +// R9 v2 +// R10 v3 +// R11 v4 +// R12 tmp +// R13 prime1v +// R14 prime2v +// R15 prime4v + +// round reads from and advances the buffer pointer in CX. +// It assumes that R13 has prime1v and R14 has prime2v. +#define round(r) \ + MOVQ (CX), R12 \ + ADDQ $8, CX \ + IMULQ R14, R12 \ + ADDQ R12, r \ + ROLQ $31, r \ + IMULQ R13, r + +// mergeRound applies a merge round on the two registers acc and val. +// It assumes that R13 has prime1v, R14 has prime2v, and R15 has prime4v. 
+#define mergeRound(acc, val) \ + IMULQ R14, val \ + ROLQ $31, val \ + IMULQ R13, val \ + XORQ val, acc \ + IMULQ R13, acc \ + ADDQ R15, acc + +// func Sum64(b []byte) uint64 +TEXT ·Sum64(SB), NOSPLIT, $0-32 + // Load fixed primes. + MOVQ ·prime1v(SB), R13 + MOVQ ·prime2v(SB), R14 + MOVQ ·prime4v(SB), R15 + + // Load slice. + MOVQ b_base+0(FP), CX + MOVQ b_len+8(FP), DX + LEAQ (CX)(DX*1), BX + + // The first loop limit will be len(b)-32. + SUBQ $32, BX + + // Check whether we have at least one block. + CMPQ DX, $32 + JLT noBlocks + + // Set up initial state (v1, v2, v3, v4). + MOVQ R13, R8 + ADDQ R14, R8 + MOVQ R14, R9 + XORQ R10, R10 + XORQ R11, R11 + SUBQ R13, R11 + + // Loop until CX > BX. +blockLoop: + round(R8) + round(R9) + round(R10) + round(R11) + + CMPQ CX, BX + JLE blockLoop + + MOVQ R8, AX + ROLQ $1, AX + MOVQ R9, R12 + ROLQ $7, R12 + ADDQ R12, AX + MOVQ R10, R12 + ROLQ $12, R12 + ADDQ R12, AX + MOVQ R11, R12 + ROLQ $18, R12 + ADDQ R12, AX + + mergeRound(AX, R8) + mergeRound(AX, R9) + mergeRound(AX, R10) + mergeRound(AX, R11) + + JMP afterBlocks + +noBlocks: + MOVQ ·prime5v(SB), AX + +afterBlocks: + ADDQ DX, AX + + // Right now BX has len(b)-32, and we want to loop until CX > len(b)-8. + ADDQ $24, BX + + CMPQ CX, BX + JG fourByte + +wordLoop: + // Calculate k1. + MOVQ (CX), R8 + ADDQ $8, CX + IMULQ R14, R8 + ROLQ $31, R8 + IMULQ R13, R8 + + XORQ R8, AX + ROLQ $27, AX + IMULQ R13, AX + ADDQ R15, AX + + CMPQ CX, BX + JLE wordLoop + +fourByte: + ADDQ $4, BX + CMPQ CX, BX + JG singles + + MOVL (CX), R8 + ADDQ $4, CX + IMULQ R13, R8 + XORQ R8, AX + + ROLQ $23, AX + IMULQ R14, AX + ADDQ ·prime3v(SB), AX + +singles: + ADDQ $4, BX + CMPQ CX, BX + JGE finalize + +singlesLoop: + MOVBQZX (CX), R12 + ADDQ $1, CX + IMULQ ·prime5v(SB), R12 + XORQ R12, AX + + ROLQ $11, AX + IMULQ R13, AX + + CMPQ CX, BX + JL singlesLoop + +finalize: + MOVQ AX, R12 + SHRQ $33, R12 + XORQ R12, AX + IMULQ R14, AX + MOVQ AX, R12 + SHRQ $29, R12 + XORQ R12, AX + IMULQ ·prime3v(SB), AX + MOVQ AX, R12 + SHRQ $32, R12 + XORQ R12, AX + + MOVQ AX, ret+24(FP) + RET + +// writeBlocks uses the same registers as above except that it uses AX to store +// the d pointer. + +// func writeBlocks(d *Digest, b []byte) int +TEXT ·writeBlocks(SB), NOSPLIT, $0-40 + // Load fixed primes needed for round. + MOVQ ·prime1v(SB), R13 + MOVQ ·prime2v(SB), R14 + + // Load slice. + MOVQ b_base+8(FP), CX + MOVQ b_len+16(FP), DX + LEAQ (CX)(DX*1), BX + SUBQ $32, BX + + // Load vN from d. + MOVQ d+0(FP), AX + MOVQ 0(AX), R8 // v1 + MOVQ 8(AX), R9 // v2 + MOVQ 16(AX), R10 // v3 + MOVQ 24(AX), R11 // v4 + + // We don't need to check the loop condition here; this function is + // always called with at least one block of data to process. +blockLoop: + round(R8) + round(R9) + round(R10) + round(R11) + + CMPQ CX, BX + JLE blockLoop + + // Copy vN back to d. + MOVQ R8, 0(AX) + MOVQ R9, 8(AX) + MOVQ R10, 16(AX) + MOVQ R11, 24(AX) + + // The number of bytes written is CX minus the old base pointer. + SUBQ b_base+8(FP), CX + MOVQ CX, ret+32(FP) + + RET diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_other.go b/vendor/github.com/cespare/xxhash/v2/xxhash_other.go new file mode 100644 index 0000000000..4a5a821603 --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/xxhash_other.go @@ -0,0 +1,76 @@ +// +build !amd64 appengine !gc purego + +package xxhash + +// Sum64 computes the 64-bit xxHash digest of b. 
+func Sum64(b []byte) uint64 { + // A simpler version would be + // d := New() + // d.Write(b) + // return d.Sum64() + // but this is faster, particularly for small inputs. + + n := len(b) + var h uint64 + + if n >= 32 { + v1 := prime1v + prime2 + v2 := prime2 + v3 := uint64(0) + v4 := -prime1v + for len(b) >= 32 { + v1 = round(v1, u64(b[0:8:len(b)])) + v2 = round(v2, u64(b[8:16:len(b)])) + v3 = round(v3, u64(b[16:24:len(b)])) + v4 = round(v4, u64(b[24:32:len(b)])) + b = b[32:len(b):len(b)] + } + h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4) + h = mergeRound(h, v1) + h = mergeRound(h, v2) + h = mergeRound(h, v3) + h = mergeRound(h, v4) + } else { + h = prime5 + } + + h += uint64(n) + + i, end := 0, len(b) + for ; i+8 <= end; i += 8 { + k1 := round(0, u64(b[i:i+8:len(b)])) + h ^= k1 + h = rol27(h)*prime1 + prime4 + } + if i+4 <= end { + h ^= uint64(u32(b[i:i+4:len(b)])) * prime1 + h = rol23(h)*prime2 + prime3 + i += 4 + } + for ; i < end; i++ { + h ^= uint64(b[i]) * prime5 + h = rol11(h) * prime1 + } + + h ^= h >> 33 + h *= prime2 + h ^= h >> 29 + h *= prime3 + h ^= h >> 32 + + return h +} + +func writeBlocks(d *Digest, b []byte) int { + v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4 + n := len(b) + for len(b) >= 32 { + v1 = round(v1, u64(b[0:8:len(b)])) + v2 = round(v2, u64(b[8:16:len(b)])) + v3 = round(v3, u64(b[16:24:len(b)])) + v4 = round(v4, u64(b[24:32:len(b)])) + b = b[32:len(b):len(b)] + } + d.v1, d.v2, d.v3, d.v4 = v1, v2, v3, v4 + return n - len(b) +} diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go b/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go new file mode 100644 index 0000000000..fc9bea7a31 --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go @@ -0,0 +1,15 @@ +// +build appengine + +// This file contains the safe implementations of otherwise unsafe-using code. + +package xxhash + +// Sum64String computes the 64-bit xxHash digest of s. +func Sum64String(s string) uint64 { + return Sum64([]byte(s)) +} + +// WriteString adds more data to d. It always returns len(s), nil. +func (d *Digest) WriteString(s string) (n int, err error) { + return d.Write([]byte(s)) +} diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go b/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go new file mode 100644 index 0000000000..53bf76efbc --- /dev/null +++ b/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go @@ -0,0 +1,46 @@ +// +build !appengine + +// This file encapsulates usage of unsafe. +// xxhash_safe.go contains the safe implementations. + +package xxhash + +import ( + "reflect" + "unsafe" +) + +// Notes: +// +// See https://groups.google.com/d/msg/golang-nuts/dcjzJy-bSpw/tcZYBzQqAQAJ +// for some discussion about these unsafe conversions. +// +// In the future it's possible that compiler optimizations will make these +// unsafe operations unnecessary: https://golang.org/issue/2205. +// +// Both of these wrapper functions still incur function call overhead since they +// will not be inlined. We could write Go/asm copies of Sum64 and Digest.Write +// for strings to squeeze out a bit more speed. Mid-stack inlining should +// eventually fix this. + +// Sum64String computes the 64-bit xxHash digest of s. +// It may be faster than Sum64([]byte(s)) by avoiding a copy. +func Sum64String(s string) uint64 { + var b []byte + bh := (*reflect.SliceHeader)(unsafe.Pointer(&b)) + bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data + bh.Len = len(s) + bh.Cap = len(s) + return Sum64(b) +} + +// WriteString adds more data to d. 
It always returns len(s), nil. +// It may be faster than Write([]byte(s)) by avoiding a copy. +func (d *Digest) WriteString(s string) (n int, err error) { + var b []byte + bh := (*reflect.SliceHeader)(unsafe.Pointer(&b)) + bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data + bh.Len = len(s) + bh.Cap = len(s) + return d.Write(b) +} diff --git a/vendor/github.com/go-openapi/spec/.golangci.yml b/vendor/github.com/go-openapi/spec/.golangci.yml index 3e33f9f2e3..4e17ed4979 100644 --- a/vendor/github.com/go-openapi/spec/.golangci.yml +++ b/vendor/github.com/go-openapi/spec/.golangci.yml @@ -21,3 +21,8 @@ linters: - lll - gochecknoinits - gochecknoglobals + - funlen + - godox + - gocognit + - whitespace + - wsl diff --git a/vendor/github.com/go-openapi/spec/contact_info.go b/vendor/github.com/go-openapi/spec/contact_info.go index f285970aa1..f9bf42e8dd 100644 --- a/vendor/github.com/go-openapi/spec/contact_info.go +++ b/vendor/github.com/go-openapi/spec/contact_info.go @@ -14,11 +14,41 @@ package spec +import ( + "encoding/json" + + "github.com/go-openapi/swag" +) + // ContactInfo contact information for the exposed API. // // For more information: http://goo.gl/8us55a#contactObject type ContactInfo struct { + ContactInfoProps + VendorExtensible +} + +type ContactInfoProps struct { Name string `json:"name,omitempty"` URL string `json:"url,omitempty"` Email string `json:"email,omitempty"` } + +func (c *ContactInfo) UnmarshalJSON(data []byte) error { + if err := json.Unmarshal(data, &c.ContactInfoProps); err != nil { + return err + } + return json.Unmarshal(data, &c.VendorExtensible) +} + +func (c ContactInfo) MarshalJSON() ([]byte, error) { + b1, err := json.Marshal(c.ContactInfoProps) + if err != nil { + return nil, err + } + b2, err := json.Marshal(c.VendorExtensible) + if err != nil { + return nil, err + } + return swag.ConcatJSON(b1, b2), nil +} diff --git a/vendor/github.com/go-openapi/spec/expander.go b/vendor/github.com/go-openapi/spec/expander.go index 1e7fc8c490..043720d7d8 100644 --- a/vendor/github.com/go-openapi/spec/expander.go +++ b/vendor/github.com/go-openapi/spec/expander.go @@ -452,11 +452,12 @@ func expandPathItem(pathItem *PathItem, resolver *schemaLoader, basePath string) return err } if pathItem.Ref.String() != "" { - var err error - resolver, err = resolver.transitiveResolver(basePath, pathItem.Ref) - if resolver.shouldStopOnError(err) { + transitiveResolver, err := resolver.transitiveResolver(basePath, pathItem.Ref) + if transitiveResolver.shouldStopOnError(err) { return err } + basePath = transitiveResolver.updateBasePath(resolver, basePath) + resolver = transitiveResolver } pathItem.Ref = Ref{} diff --git a/vendor/github.com/go-openapi/spec/go.mod b/vendor/github.com/go-openapi/spec/go.mod index 02a142c03c..14e5f2dac3 100644 --- a/vendor/github.com/go-openapi/spec/go.mod +++ b/vendor/github.com/go-openapi/spec/go.mod @@ -4,14 +4,9 @@ require ( github.com/go-openapi/jsonpointer v0.19.3 github.com/go-openapi/jsonreference v0.19.2 github.com/go-openapi/swag v0.19.5 - github.com/kr/pty v1.1.5 // indirect - github.com/stretchr/objx v0.2.0 // indirect github.com/stretchr/testify v1.3.0 - golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8 // indirect golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 // indirect - golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f // indirect - golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59 // indirect - gopkg.in/yaml.v2 v2.2.2 + gopkg.in/yaml.v2 v2.2.4 ) go 1.13 diff --git a/vendor/github.com/go-openapi/spec/go.sum 
b/vendor/github.com/go-openapi/spec/go.sum index 86db601c97..c209ff9712 100644 --- a/vendor/github.com/go-openapi/spec/go.sum +++ b/vendor/github.com/go-openapi/spec/go.sum @@ -1,5 +1,3 @@ -github.com/PuerkitoBio/purell v1.1.0 h1:rmGxhojJlM0tuKtfdvliR84CFHljx9ag64t2xmVkjK4= -github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI= github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M= @@ -7,20 +5,12 @@ github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdko github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/go-openapi/jsonpointer v0.17.0 h1:nH6xp8XdXHx8dqveo0ZuJBluCO2qGrPbDNZ0dwoRHP0= -github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M= -github.com/go-openapi/jsonpointer v0.19.0 h1:FTUMcX77w5rQkClIzDtTxvn6Bsa894CcrzNj2MMfeg8= -github.com/go-openapi/jsonpointer v0.19.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M= github.com/go-openapi/jsonpointer v0.19.2 h1:A9+F4Dc/MCNB5jibxf6rRvOvR/iFgQdyNx9eIhnGqq0= github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= github.com/go-openapi/jsonpointer v0.19.3 h1:gihV7YNZK1iK6Tgwwsxo2rJbD1GTbdm72325Bq8FI3w= github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= -github.com/go-openapi/jsonreference v0.19.0 h1:BqWKpV1dFd+AuiKlgtddwVIFQsuMpxfBDBHGfM2yNpk= -github.com/go-openapi/jsonreference v0.19.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I= github.com/go-openapi/jsonreference v0.19.2 h1:o20suLFB4Ri0tuzpWtyHlh7E7HnkqTNLq6aR6WVNS1w= github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= -github.com/go-openapi/swag v0.17.0 h1:iqrgMg7Q7SvtbWLlltPrkMs0UBJI6oTSs79JFRUi880= -github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg= github.com/go-openapi/swag v0.19.2 h1:jvO6bCMBEilGwMfHhrd61zIID4oIFdwb76V17SM88dE= github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.5 h1:lTz6Ys4CmqqCQmZPBlbQENR1/GucA2bzYTE12Pw4tFY= @@ -28,11 +18,8 @@ github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329 h1:2gxZ0XQIU/5z3Z3bUBu+FXuk2pFbkN6tcwi/pjyaDic= -github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63 h1:nTT4s92Dgz2HlrB2NaMgvlfqHH39OgMhA7z3PK7PGD4= github.com/mailru/easyjson 
v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e h1:hB2xlXdHp/pmPZq0y3QnmWAArdw9PqbmotexnWx/FU8= @@ -40,35 +27,23 @@ github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= -github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/net v0.0.0-20181005035420-146acd28ed58 h1:otZG8yDCO4LVps5+9bxOeNiCvgmOyt96J3roHTYs7oE= -golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM= golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE= -gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2 
h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I= +gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/vendor/github.com/go-openapi/spec/license.go b/vendor/github.com/go-openapi/spec/license.go index f20961b4fd..e1529b401c 100644 --- a/vendor/github.com/go-openapi/spec/license.go +++ b/vendor/github.com/go-openapi/spec/license.go @@ -14,10 +14,40 @@ package spec +import ( + "encoding/json" + + "github.com/go-openapi/swag" +) + // License information for the exposed API. // // For more information: http://goo.gl/8us55a#licenseObject type License struct { + LicenseProps + VendorExtensible +} + +type LicenseProps struct { Name string `json:"name,omitempty"` URL string `json:"url,omitempty"` } + +func (l *License) UnmarshalJSON(data []byte) error { + if err := json.Unmarshal(data, &l.LicenseProps); err != nil { + return err + } + return json.Unmarshal(data, &l.VendorExtensible) +} + +func (l License) MarshalJSON() ([]byte, error) { + b1, err := json.Marshal(l.LicenseProps) + if err != nil { + return nil, err + } + b2, err := json.Marshal(l.VendorExtensible) + if err != nil { + return nil, err + } + return swag.ConcatJSON(b1, b2), nil +} diff --git a/vendor/github.com/go-openapi/spec/ref.go b/vendor/github.com/go-openapi/spec/ref.go index 813dfe71b4..1f31a9ead0 100644 --- a/vendor/github.com/go-openapi/spec/ref.go +++ b/vendor/github.com/go-openapi/spec/ref.go @@ -68,6 +68,7 @@ func (r *Ref) IsValidURI(basepaths ...string) bool { } if r.HasFullURL { + //#nosec rr, err := http.Get(v) if err != nil { return false diff --git a/vendor/github.com/go-openapi/spec/schema_loader.go b/vendor/github.com/go-openapi/spec/schema_loader.go index 9e20e96c2a..961d477571 100644 --- a/vendor/github.com/go-openapi/spec/schema_loader.go +++ b/vendor/github.com/go-openapi/spec/schema_loader.go @@ -86,12 +86,7 @@ func (r *schemaLoader) transitiveResolver(basePath string, ref Ref) (*schemaLoad newOptions := r.options newOptions.RelativeBase = rootURL.String() debugLog("setting new root: %s", newOptions.RelativeBase) - resolver, err := defaultSchemaLoader(root, newOptions, r.cache, r.context) - if err != nil { - return nil, err - } - - return resolver, nil + return defaultSchemaLoader(root, newOptions, r.cache, r.context) } func (r *schemaLoader) updateBasePath(transitive *schemaLoader, basePath string) string { diff --git a/vendor/github.com/go-openapi/swag/convert.go b/vendor/github.com/go-openapi/swag/convert.go index 7da35c316e..fc085aeb8e 100644 --- a/vendor/github.com/go-openapi/swag/convert.go +++ b/vendor/github.com/go-openapi/swag/convert.go @@ -88,7 +88,7 @@ func ConvertFloat64(str string) (float64, error) { return strconv.ParseFloat(str, 64) } -// ConvertInt8 turn a string into int8 boolean +// ConvertInt8 turn a string into an int8 func ConvertInt8(str string) (int8, error) { i, err := strconv.ParseInt(str, 10, 8) if err != nil { @@ -97,7 +97,7 @@ func ConvertInt8(str string) (int8, error) { return int8(i), nil } -// ConvertInt16 turn a string into a int16 +// ConvertInt16 turn a string into an int16 func ConvertInt16(str string) (int16, error) { i, err := strconv.ParseInt(str, 10, 16) if err != nil { @@ -106,7 +106,7 @@ func ConvertInt16(str string) (int16, error) { return int16(i), nil } -// ConvertInt32 turn a string into a int32 +// ConvertInt32 turn a string into an int32 func ConvertInt32(str string) (int32, 
error) { i, err := strconv.ParseInt(str, 10, 32) if err != nil { @@ -115,12 +115,12 @@ func ConvertInt32(str string) (int32, error) { return int32(i), nil } -// ConvertInt64 turn a string into a int64 +// ConvertInt64 turn a string into an int64 func ConvertInt64(str string) (int64, error) { return strconv.ParseInt(str, 10, 64) } -// ConvertUint8 turn a string into a uint8 +// ConvertUint8 turn a string into an uint8 func ConvertUint8(str string) (uint8, error) { i, err := strconv.ParseUint(str, 10, 8) if err != nil { @@ -129,7 +129,7 @@ func ConvertUint8(str string) (uint8, error) { return uint8(i), nil } -// ConvertUint16 turn a string into a uint16 +// ConvertUint16 turn a string into an uint16 func ConvertUint16(str string) (uint16, error) { i, err := strconv.ParseUint(str, 10, 16) if err != nil { @@ -138,7 +138,7 @@ func ConvertUint16(str string) (uint16, error) { return uint16(i), nil } -// ConvertUint32 turn a string into a uint32 +// ConvertUint32 turn a string into an uint32 func ConvertUint32(str string) (uint32, error) { i, err := strconv.ParseUint(str, 10, 32) if err != nil { @@ -147,7 +147,7 @@ func ConvertUint32(str string) (uint32, error) { return uint32(i), nil } -// ConvertUint64 turn a string into a uint64 +// ConvertUint64 turn a string into an uint64 func ConvertUint64(str string) (uint64, error) { return strconv.ParseUint(str, 10, 64) } diff --git a/vendor/github.com/go-openapi/swag/convert_types.go b/vendor/github.com/go-openapi/swag/convert_types.go index c95e4e78bd..bfba823462 100644 --- a/vendor/github.com/go-openapi/swag/convert_types.go +++ b/vendor/github.com/go-openapi/swag/convert_types.go @@ -181,12 +181,12 @@ func IntValueMap(src map[string]*int) map[string]int { return dst } -// Int32 returns a pointer to of the int64 value passed in. +// Int32 returns a pointer to of the int32 value passed in. func Int32(v int32) *int32 { return &v } -// Int32Value returns the value of the int64 pointer passed in or +// Int32Value returns the value of the int32 pointer passed in or // 0 if the pointer is nil. func Int32Value(v *int32) int32 { if v != nil { @@ -195,7 +195,7 @@ func Int32Value(v *int32) int32 { return 0 } -// Int32Slice converts a slice of int64 values into a slice of +// Int32Slice converts a slice of int32 values into a slice of // int32 pointers func Int32Slice(src []int32) []*int32 { dst := make([]*int32, len(src)) @@ -299,13 +299,13 @@ func Int64ValueMap(src map[string]*int64) map[string]int64 { return dst } -// Uint returns a pouinter to of the uint value passed in. +// Uint returns a pointer to of the uint value passed in. func Uint(v uint) *uint { return &v } -// UintValue returns the value of the uint pouinter passed in or -// 0 if the pouinter is nil. +// UintValue returns the value of the uint pointer passed in or +// 0 if the pointer is nil. 
func UintValue(v *uint) uint { if v != nil { return *v @@ -313,8 +313,8 @@ func UintValue(v *uint) uint { return 0 } -// UintSlice converts a slice of uint values uinto a slice of -// uint pouinters +// UintSlice converts a slice of uint values into a slice of +// uint pointers func UintSlice(src []uint) []*uint { dst := make([]*uint, len(src)) for i := 0; i < len(src); i++ { @@ -323,7 +323,7 @@ func UintSlice(src []uint) []*uint { return dst } -// UintValueSlice converts a slice of uint pouinters uinto a slice of +// UintValueSlice converts a slice of uint pointers into a slice of // uint values func UintValueSlice(src []*uint) []uint { dst := make([]uint, len(src)) @@ -335,8 +335,8 @@ func UintValueSlice(src []*uint) []uint { return dst } -// UintMap converts a string map of uint values uinto a string -// map of uint pouinters +// UintMap converts a string map of uint values into a string +// map of uint pointers func UintMap(src map[string]uint) map[string]*uint { dst := make(map[string]*uint) for k, val := range src { @@ -346,7 +346,7 @@ func UintMap(src map[string]uint) map[string]*uint { return dst } -// UintValueMap converts a string map of uint pouinters uinto a string +// UintValueMap converts a string map of uint pointers into a string // map of uint values func UintValueMap(src map[string]*uint) map[string]uint { dst := make(map[string]uint) @@ -358,13 +358,13 @@ func UintValueMap(src map[string]*uint) map[string]uint { return dst } -// Uint32 returns a pouinter to of the uint64 value passed in. +// Uint32 returns a pointer to of the uint32 value passed in. func Uint32(v uint32) *uint32 { return &v } -// Uint32Value returns the value of the uint64 pouinter passed in or -// 0 if the pouinter is nil. +// Uint32Value returns the value of the uint32 pointer passed in or +// 0 if the pointer is nil. func Uint32Value(v *uint32) uint32 { if v != nil { return *v @@ -372,8 +372,8 @@ func Uint32Value(v *uint32) uint32 { return 0 } -// Uint32Slice converts a slice of uint64 values uinto a slice of -// uint32 pouinters +// Uint32Slice converts a slice of uint32 values into a slice of +// uint32 pointers func Uint32Slice(src []uint32) []*uint32 { dst := make([]*uint32, len(src)) for i := 0; i < len(src); i++ { @@ -382,7 +382,7 @@ func Uint32Slice(src []uint32) []*uint32 { return dst } -// Uint32ValueSlice converts a slice of uint32 pouinters uinto a slice of +// Uint32ValueSlice converts a slice of uint32 pointers into a slice of // uint32 values func Uint32ValueSlice(src []*uint32) []uint32 { dst := make([]uint32, len(src)) @@ -394,8 +394,8 @@ func Uint32ValueSlice(src []*uint32) []uint32 { return dst } -// Uint32Map converts a string map of uint32 values uinto a string -// map of uint32 pouinters +// Uint32Map converts a string map of uint32 values into a string +// map of uint32 pointers func Uint32Map(src map[string]uint32) map[string]*uint32 { dst := make(map[string]*uint32) for k, val := range src { @@ -405,7 +405,7 @@ func Uint32Map(src map[string]uint32) map[string]*uint32 { return dst } -// Uint32ValueMap converts a string map of uint32 pouinters uinto a string +// Uint32ValueMap converts a string map of uint32 pointers into a string // map of uint32 values func Uint32ValueMap(src map[string]*uint32) map[string]uint32 { dst := make(map[string]uint32) @@ -417,13 +417,13 @@ func Uint32ValueMap(src map[string]*uint32) map[string]uint32 { return dst } -// Uint64 returns a pouinter to of the uint64 value passed in. +// Uint64 returns a pointer to of the uint64 value passed in. 
func Uint64(v uint64) *uint64 { return &v } -// Uint64Value returns the value of the uint64 pouinter passed in or -// 0 if the pouinter is nil. +// Uint64Value returns the value of the uint64 pointer passed in or +// 0 if the pointer is nil. func Uint64Value(v *uint64) uint64 { if v != nil { return *v @@ -431,8 +431,8 @@ func Uint64Value(v *uint64) uint64 { return 0 } -// Uint64Slice converts a slice of uint64 values uinto a slice of -// uint64 pouinters +// Uint64Slice converts a slice of uint64 values into a slice of +// uint64 pointers func Uint64Slice(src []uint64) []*uint64 { dst := make([]*uint64, len(src)) for i := 0; i < len(src); i++ { @@ -441,7 +441,7 @@ func Uint64Slice(src []uint64) []*uint64 { return dst } -// Uint64ValueSlice converts a slice of uint64 pouinters uinto a slice of +// Uint64ValueSlice converts a slice of uint64 pointers into a slice of // uint64 values func Uint64ValueSlice(src []*uint64) []uint64 { dst := make([]uint64, len(src)) @@ -453,8 +453,8 @@ func Uint64ValueSlice(src []*uint64) []uint64 { return dst } -// Uint64Map converts a string map of uint64 values uinto a string -// map of uint64 pouinters +// Uint64Map converts a string map of uint64 values into a string +// map of uint64 pointers func Uint64Map(src map[string]uint64) map[string]*uint64 { dst := make(map[string]*uint64) for k, val := range src { @@ -464,7 +464,7 @@ func Uint64Map(src map[string]uint64) map[string]*uint64 { return dst } -// Uint64ValueMap converts a string map of uint64 pouinters uinto a string +// Uint64ValueMap converts a string map of uint64 pointers into a string // map of uint64 values func Uint64ValueMap(src map[string]*uint64) map[string]uint64 { dst := make(map[string]uint64) diff --git a/vendor/github.com/go-openapi/swag/go.mod b/vendor/github.com/go-openapi/swag/go.mod index 15bbb08222..4aef463e42 100644 --- a/vendor/github.com/go-openapi/swag/go.mod +++ b/vendor/github.com/go-openapi/swag/go.mod @@ -6,9 +6,11 @@ require ( github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63 github.com/stretchr/testify v1.3.0 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect - gopkg.in/yaml.v2 v2.2.2 + gopkg.in/yaml.v2 v2.2.4 ) replace github.com/golang/lint => golang.org/x/lint v0.0.0-20190409202823-959b441ac422 replace sourcegraph.com/sourcegraph/go-diff => github.com/sourcegraph/go-diff v0.5.1 + +go 1.13 diff --git a/vendor/github.com/go-openapi/swag/go.sum b/vendor/github.com/go-openapi/swag/go.sum index 33469f54ac..e8a80bacf0 100644 --- a/vendor/github.com/go-openapi/swag/go.sum +++ b/vendor/github.com/go-openapi/swag/go.sum @@ -16,5 +16,5 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I= +gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/vendor/github.com/go-openapi/swag/json.go b/vendor/github.com/go-openapi/swag/json.go index edf93d84c6..7e9902ca31 100644 --- a/vendor/github.com/go-openapi/swag/json.go +++ 
b/vendor/github.com/go-openapi/swag/json.go @@ -51,7 +51,7 @@ type ejUnmarshaler interface { UnmarshalEasyJSON(w *jlexer.Lexer) } -// WriteJSON writes json data, prefers finding an appropriate interface to short-circuit the marshaller +// WriteJSON writes json data, prefers finding an appropriate interface to short-circuit the marshaler // so it takes the fastest option available. func WriteJSON(data interface{}) ([]byte, error) { if d, ok := data.(ejMarshaler); ok { @@ -65,8 +65,8 @@ func WriteJSON(data interface{}) ([]byte, error) { return json.Marshal(data) } -// ReadJSON reads json data, prefers finding an appropriate interface to short-circuit the unmarshaller -// so it takes the fastes option available +// ReadJSON reads json data, prefers finding an appropriate interface to short-circuit the unmarshaler +// so it takes the fastest option available func ReadJSON(data []byte, value interface{}) error { trimmedData := bytes.Trim(data, "\x00") if d, ok := value.(ejUnmarshaler); ok { @@ -189,7 +189,7 @@ func FromDynamicJSON(data, target interface{}) error { return json.Unmarshal(b, target) } -// NameProvider represents an object capabale of translating from go property names +// NameProvider represents an object capable of translating from go property names // to json property names // This type is thread-safe. type NameProvider struct { diff --git a/vendor/github.com/hashicorp/golang-lru/lru.go b/vendor/github.com/hashicorp/golang-lru/lru.go index 052a38b4c4..4e5e9d8fd0 100644 --- a/vendor/github.com/hashicorp/golang-lru/lru.go +++ b/vendor/github.com/hashicorp/golang-lru/lru.go @@ -37,7 +37,7 @@ func (c *Cache) Purge() { c.lock.Unlock() } -// Add adds a value to the cache. Returns true if an eviction occurred. +// Add adds a value to the cache. Returns true if an eviction occurred. func (c *Cache) Add(key, value interface{}) (evicted bool) { c.lock.Lock() evicted = c.lru.Add(key, value) @@ -71,8 +71,8 @@ func (c *Cache) Peek(key interface{}) (value interface{}, ok bool) { return value, ok } -// ContainsOrAdd checks if a key is in the cache without updating the -// recent-ness or deleting it for being stale, and if not, adds the value. +// ContainsOrAdd checks if a key is in the cache without updating the +// recent-ness or deleting it for being stale, and if not, adds the value. // Returns whether found and whether an eviction occurred. func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evicted bool) { c.lock.Lock() @@ -85,6 +85,22 @@ func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evicted bool) { return false, evicted } +// PeekOrAdd checks if a key is in the cache without updating the +// recent-ness or deleting it for being stale, and if not, adds the value. +// Returns whether found and whether an eviction occurred. +func (c *Cache) PeekOrAdd(key, value interface{}) (previous interface{}, ok, evicted bool) { + c.lock.Lock() + defer c.lock.Unlock() + + previous, ok = c.lru.Peek(key) + if ok { + return previous, true, false + } + + evicted = c.lru.Add(key, value) + return nil, false, evicted +} + // Remove removes the provided key from the cache. 
func (c *Cache) Remove(key interface{}) (present bool) { c.lock.Lock() diff --git a/vendor/github.com/prometheus/client_golang/prometheus/counter.go b/vendor/github.com/prometheus/client_golang/prometheus/counter.go index d463e36d3e..df72fcf364 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/counter.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/counter.go @@ -17,6 +17,7 @@ import ( "errors" "math" "sync/atomic" + "time" dto "github.com/prometheus/client_model/go" ) @@ -42,11 +43,27 @@ type Counter interface { Add(float64) } +// ExemplarAdder is implemented by Counters that offer the option of adding a +// value to the Counter together with an exemplar. Its AddWithExemplar method +// works like the Add method of the Counter interface but also replaces the +// currently saved exemplar (if any) with a new one, created from the provided +// value, the current time as timestamp, and the provided labels. Empty Labels +// will lead to a valid (label-less) exemplar. But if Labels is nil, the current +// exemplar is left in place. AddWithExemplar panics if the value is < 0, if any +// of the provided labels are invalid, or if the provided labels contain more +// than 64 runes in total. +type ExemplarAdder interface { + AddWithExemplar(value float64, exemplar Labels) +} + // CounterOpts is an alias for Opts. See there for doc comments. type CounterOpts Opts // NewCounter creates a new Counter based on the provided CounterOpts. // +// The returned implementation also implements ExemplarAdder. It is safe to +// perform the corresponding type assertion. +// // The returned implementation tracks the counter value in two separate // variables, a float64 and a uint64. The latter is used to track calls of the // Inc method and calls of the Add method with a value that can be represented @@ -61,7 +78,7 @@ func NewCounter(opts CounterOpts) Counter { nil, opts.ConstLabels, ) - result := &counter{desc: desc, labelPairs: desc.constLabelPairs} + result := &counter{desc: desc, labelPairs: desc.constLabelPairs, now: time.Now} result.init(result) // Init self-collection. return result } @@ -78,6 +95,9 @@ type counter struct { desc *Desc labelPairs []*dto.LabelPair + exemplar atomic.Value // Containing nil or a *dto.Exemplar. + + now func() time.Time // To mock out time.Now() for testing. 
} func (c *counter) Desc() *Desc { @@ -88,6 +108,7 @@ func (c *counter) Add(v float64) { if v < 0 { panic(errors.New("counter cannot decrease in value")) } + ival := uint64(v) if float64(ival) == v { atomic.AddUint64(&c.valInt, ival) @@ -103,6 +124,11 @@ func (c *counter) Add(v float64) { } } +func (c *counter) AddWithExemplar(v float64, e Labels) { + c.Add(v) + c.updateExemplar(v, e) +} + func (c *counter) Inc() { atomic.AddUint64(&c.valInt, 1) } @@ -112,7 +138,23 @@ func (c *counter) Write(out *dto.Metric) error { ival := atomic.LoadUint64(&c.valInt) val := fval + float64(ival) - return populateMetric(CounterValue, val, c.labelPairs, out) + var exemplar *dto.Exemplar + if e := c.exemplar.Load(); e != nil { + exemplar = e.(*dto.Exemplar) + } + + return populateMetric(CounterValue, val, c.labelPairs, exemplar, out) +} + +func (c *counter) updateExemplar(v float64, l Labels) { + if l == nil { + return + } + e, err := newExemplar(v, c.now(), l) + if err != nil { + panic(err) + } + c.exemplar.Store(e) } // CounterVec is a Collector that bundles a set of Counters that all share the @@ -138,7 +180,7 @@ func NewCounterVec(opts CounterOpts, labelNames []string) *CounterVec { if len(lvs) != len(desc.variableLabels) { panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels, lvs)) } - result := &counter{desc: desc, labelPairs: makeLabelPairs(desc, lvs)} + result := &counter{desc: desc, labelPairs: makeLabelPairs(desc, lvs), now: time.Now} result.init(result) // Init self-collection. return result }), diff --git a/vendor/github.com/prometheus/client_golang/prometheus/desc.go b/vendor/github.com/prometheus/client_golang/prometheus/desc.go index 1d034f871c..e3232d79f4 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/desc.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/desc.go @@ -19,6 +19,7 @@ import ( "sort" "strings" + "github.com/cespare/xxhash/v2" "github.com/golang/protobuf/proto" "github.com/prometheus/common/model" @@ -126,24 +127,24 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) * return d } - vh := hashNew() + xxh := xxhash.New() for _, val := range labelValues { - vh = hashAdd(vh, val) - vh = hashAddByte(vh, separatorByte) + xxh.WriteString(val) + xxh.Write(separatorByteSlice) } - d.id = vh + d.id = xxh.Sum64() // Sort labelNames so that order doesn't matter for the hash. sort.Strings(labelNames) // Now hash together (in this order) the help string and the sorted // label names. - lh := hashNew() - lh = hashAdd(lh, help) - lh = hashAddByte(lh, separatorByte) + xxh.Reset() + xxh.WriteString(help) + xxh.Write(separatorByteSlice) for _, labelName := range labelNames { - lh = hashAdd(lh, labelName) - lh = hashAddByte(lh, separatorByte) + xxh.WriteString(labelName) + xxh.Write(separatorByteSlice) } - d.dimHash = lh + d.dimHash = xxh.Sum64() d.constLabelPairs = make([]*dto.LabelPair, 0, len(constLabels)) for n, v := range constLabels { diff --git a/vendor/github.com/prometheus/client_golang/prometheus/doc.go b/vendor/github.com/prometheus/client_golang/prometheus/doc.go index 01977de661..98450125d6 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/doc.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/doc.go @@ -84,25 +84,21 @@ // of those four metric types can be found in the Prometheus docs: // https://prometheus.io/docs/concepts/metric_types/ // -// A fifth "type" of metric is Untyped. 
It behaves like a Gauge, but signals the -// Prometheus server not to assume anything about its type. -// -// In addition to the fundamental metric types Gauge, Counter, Summary, -// Histogram, and Untyped, a very important part of the Prometheus data model is -// the partitioning of samples along dimensions called labels, which results in +// In addition to the fundamental metric types Gauge, Counter, Summary, and +// Histogram, a very important part of the Prometheus data model is the +// partitioning of samples along dimensions called labels, which results in // metric vectors. The fundamental types are GaugeVec, CounterVec, SummaryVec, -// HistogramVec, and UntypedVec. +// and HistogramVec. // // While only the fundamental metric types implement the Metric interface, both // the metrics and their vector versions implement the Collector interface. A // Collector manages the collection of a number of Metrics, but for convenience, -// a Metric can also “collect itself”. Note that Gauge, Counter, Summary, -// Histogram, and Untyped are interfaces themselves while GaugeVec, CounterVec, -// SummaryVec, HistogramVec, and UntypedVec are not. +// a Metric can also “collect itself”. Note that Gauge, Counter, Summary, and +// Histogram are interfaces themselves while GaugeVec, CounterVec, SummaryVec, +// and HistogramVec are not. // // To create instances of Metrics and their vector versions, you need a suitable -// …Opts struct, i.e. GaugeOpts, CounterOpts, SummaryOpts, HistogramOpts, or -// UntypedOpts. +// …Opts struct, i.e. GaugeOpts, CounterOpts, SummaryOpts, or HistogramOpts. // // Custom Collectors and constant Metrics // @@ -118,13 +114,16 @@ // existing numbers into Prometheus Metrics during collection. An own // implementation of the Collector interface is perfect for that. You can create // Metric instances “on the fly” using NewConstMetric, NewConstHistogram, and -// NewConstSummary (and their respective Must… versions). That will happen in -// the Collect method. The Describe method has to return separate Desc -// instances, representative of the “throw-away” metrics to be created later. -// NewDesc comes in handy to create those Desc instances. Alternatively, you -// could return no Desc at all, which will mark the Collector “unchecked”. No -// checks are performed at registration time, but metric consistency will still -// be ensured at scrape time, i.e. any inconsistencies will lead to scrape +// NewConstSummary (and their respective Must… versions). NewConstMetric is used +// for all metric types with just a float64 as their value: Counter, Gauge, and +// a special “type” called Untyped. Use the latter if you are not sure if the +// mirrored metric is a Counter or a Gauge. Creation of the Metric instance +// happens in the Collect method. The Describe method has to return separate +// Desc instances, representative of the “throw-away” metrics to be created +// later. NewDesc comes in handy to create those Desc instances. Alternatively, +// you could return no Desc at all, which will mark the Collector “unchecked”. +// No checks are performed at registration time, but metric consistency will +// still be ensured at scrape time, i.e. any inconsistencies will lead to scrape // errors. Thus, with unchecked Collectors, the responsibility to not collect // metrics that lead to inconsistencies in the total scrape result lies with the // implementer of the Collector. 
While this is not a desirable state, it is diff --git a/vendor/github.com/prometheus/client_golang/prometheus/gauge.go b/vendor/github.com/prometheus/client_golang/prometheus/gauge.go index 71d406bd92..d67573f767 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/gauge.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/gauge.go @@ -123,7 +123,7 @@ func (g *gauge) Sub(val float64) { func (g *gauge) Write(out *dto.Metric) error { val := math.Float64frombits(atomic.LoadUint64(&g.valBits)) - return populateMetric(GaugeValue, val, g.labelPairs, out) + return populateMetric(GaugeValue, val, g.labelPairs, nil, out) } // GaugeVec is a Collector that bundles a set of Gauges that all share the same @@ -273,9 +273,12 @@ type GaugeFunc interface { // NewGaugeFunc creates a new GaugeFunc based on the provided GaugeOpts. The // value reported is determined by calling the given function from within the // Write method. Take into account that metric collection may happen -// concurrently. If that results in concurrent calls to Write, like in the case -// where a GaugeFunc is directly registered with Prometheus, the provided -// function must be concurrency-safe. +// concurrently. Therefore, it must be safe to call the provided function +// concurrently. +// +// NewGaugeFunc is a good way to create an “info” style metric with a constant +// value of 1. Example: +// https://github.com/prometheus/common/blob/8558a5b7db3c84fa38b4766966059a7bd5bfa2ee/version/info.go#L36-L56 func NewGaugeFunc(opts GaugeOpts, function func() float64) GaugeFunc { return newValueFunc(NewDesc( BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), diff --git a/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go b/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go index dc9247fed9..ea05cf429f 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/go_collector.go @@ -73,7 +73,7 @@ func NewGoCollector() Collector { nil, nil), gcDesc: NewDesc( "go_gc_duration_seconds", - "A summary of the GC invocation durations.", + "A summary of the pause duration of garbage collection cycles.", nil, nil), goInfoDesc: NewDesc( "go_info", diff --git a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go index d7ea67bd2b..4271f438ae 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go @@ -20,6 +20,7 @@ import ( "sort" "sync" "sync/atomic" + "time" "github.com/golang/protobuf/proto" @@ -138,7 +139,7 @@ type HistogramOpts struct { // better covered by target labels set by the scraping Prometheus // server, or by one specific metric (e.g. a build_info or a // machine_role metric). See also - // https://prometheus.io/docs/instrumenting/writing_exporters/#target-labels,-not-static-scraped-labels + // https://prometheus.io/docs/instrumenting/writing_exporters/#target-labels-not-static-scraped-labels ConstLabels Labels // Buckets defines the buckets into which observations are counted. Each @@ -151,6 +152,10 @@ type HistogramOpts struct { // NewHistogram creates a new Histogram based on the provided HistogramOpts. It // panics if the buckets in HistogramOpts are not in strictly increasing order. +// +// The returned implementation also implements ExemplarObserver. It is safe to +// perform the corresponding type assertion. 
Exemplars are tracked separately +// for each bucket. func NewHistogram(opts HistogramOpts) Histogram { return newHistogram( NewDesc( @@ -187,7 +192,8 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr desc: desc, upperBounds: opts.Buckets, labelPairs: makeLabelPairs(desc, labelValues), - counts: [2]*histogramCounts{&histogramCounts{}, &histogramCounts{}}, + counts: [2]*histogramCounts{{}, {}}, + now: time.Now, } for i, upperBound := range h.upperBounds { if i < len(h.upperBounds)-1 { @@ -205,9 +211,10 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr } } // Finally we know the final length of h.upperBounds and can make buckets - // for both counts: + // for both counts as well as exemplars: h.counts[0].buckets = make([]uint64, len(h.upperBounds)) h.counts[1].buckets = make([]uint64, len(h.upperBounds)) + h.exemplars = make([]atomic.Value, len(h.upperBounds)+1) h.init(h) // Init self-collection. return h @@ -254,6 +261,9 @@ type histogram struct { upperBounds []float64 labelPairs []*dto.LabelPair + exemplars []atomic.Value // One more than buckets (to include +Inf), each a *dto.Exemplar. + + now func() time.Time // To mock out time.Now() for testing. } func (h *histogram) Desc() *Desc { @@ -261,36 +271,13 @@ func (h *histogram) Desc() *Desc { } func (h *histogram) Observe(v float64) { - // TODO(beorn7): For small numbers of buckets (<30), a linear search is - // slightly faster than the binary search. If we really care, we could - // switch from one search strategy to the other depending on the number - // of buckets. - // - // Microbenchmarks (BenchmarkHistogramNoLabels): - // 11 buckets: 38.3 ns/op linear - binary 48.7 ns/op - // 100 buckets: 78.1 ns/op linear - binary 54.9 ns/op - // 300 buckets: 154 ns/op linear - binary 61.6 ns/op - i := sort.SearchFloat64s(h.upperBounds, v) - - // We increment h.countAndHotIdx so that the counter in the lower - // 63 bits gets incremented. At the same time, we get the new value - // back, which we can use to find the currently-hot counts. - n := atomic.AddUint64(&h.countAndHotIdx, 1) - hotCounts := h.counts[n>>63] + h.observe(v, h.findBucket(v)) +} - if i < len(h.upperBounds) { - atomic.AddUint64(&hotCounts.buckets[i], 1) - } - for { - oldBits := atomic.LoadUint64(&hotCounts.sumBits) - newBits := math.Float64bits(math.Float64frombits(oldBits) + v) - if atomic.CompareAndSwapUint64(&hotCounts.sumBits, oldBits, newBits) { - break - } - } - // Increment count last as we take it as a signal that the observation - // is complete. - atomic.AddUint64(&hotCounts.count, 1) +func (h *histogram) ObserveWithExemplar(v float64, e Labels) { + i := h.findBucket(v) + h.observe(v, i) + h.updateExemplar(v, i, e) } func (h *histogram) Write(out *dto.Metric) error { @@ -329,6 +316,18 @@ func (h *histogram) Write(out *dto.Metric) error { CumulativeCount: proto.Uint64(cumCount), UpperBound: proto.Float64(upperBound), } + if e := h.exemplars[i].Load(); e != nil { + his.Bucket[i].Exemplar = e.(*dto.Exemplar) + } + } + // If there is an exemplar for the +Inf bucket, we have to add that bucket explicitly. 
+ if e := h.exemplars[len(h.upperBounds)].Load(); e != nil { + b := &dto.Bucket{ + CumulativeCount: proto.Uint64(count), + UpperBound: proto.Float64(math.Inf(1)), + Exemplar: e.(*dto.Exemplar), + } + his.Bucket = append(his.Bucket, b) } out.Histogram = his @@ -352,6 +351,57 @@ func (h *histogram) Write(out *dto.Metric) error { return nil } +// findBucket returns the index of the bucket for the provided value, or +// len(h.upperBounds) for the +Inf bucket. +func (h *histogram) findBucket(v float64) int { + // TODO(beorn7): For small numbers of buckets (<30), a linear search is + // slightly faster than the binary search. If we really care, we could + // switch from one search strategy to the other depending on the number + // of buckets. + // + // Microbenchmarks (BenchmarkHistogramNoLabels): + // 11 buckets: 38.3 ns/op linear - binary 48.7 ns/op + // 100 buckets: 78.1 ns/op linear - binary 54.9 ns/op + // 300 buckets: 154 ns/op linear - binary 61.6 ns/op + return sort.SearchFloat64s(h.upperBounds, v) +} + +// observe is the implementation for Observe without the findBucket part. +func (h *histogram) observe(v float64, bucket int) { + // We increment h.countAndHotIdx so that the counter in the lower + // 63 bits gets incremented. At the same time, we get the new value + // back, which we can use to find the currently-hot counts. + n := atomic.AddUint64(&h.countAndHotIdx, 1) + hotCounts := h.counts[n>>63] + + if bucket < len(h.upperBounds) { + atomic.AddUint64(&hotCounts.buckets[bucket], 1) + } + for { + oldBits := atomic.LoadUint64(&hotCounts.sumBits) + newBits := math.Float64bits(math.Float64frombits(oldBits) + v) + if atomic.CompareAndSwapUint64(&hotCounts.sumBits, oldBits, newBits) { + break + } + } + // Increment count last as we take it as a signal that the observation + // is complete. + atomic.AddUint64(&hotCounts.count, 1) +} + +// updateExemplar replaces the exemplar for the provided bucket. With empty +// labels, it's a no-op. It panics if any of the labels is invalid. +func (h *histogram) updateExemplar(v float64, bucket int, l Labels) { + if l == nil { + return + } + e, err := newExemplar(v, h.now(), l) + if err != nil { + panic(err) + } + h.exemplars[bucket].Store(e) +} + // HistogramVec is a Collector that bundles a set of Histograms that all share the // same Desc, but have different values for their variable labels. This is used // if you want to count the same thing partitioned by various dimensions diff --git a/vendor/github.com/prometheus/client_golang/prometheus/metric.go b/vendor/github.com/prometheus/client_golang/prometheus/metric.go index 55e6d86d59..0df1eff881 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/metric.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/metric.go @@ -18,11 +18,12 @@ import ( "time" "github.com/golang/protobuf/proto" + "github.com/prometheus/common/model" dto "github.com/prometheus/client_model/go" ) -const separatorByte byte = 255 +var separatorByteSlice = []byte{model.SeparatorByte} // For convenient use with xxhash. // A Metric models a single sample value with its meta data being exported to // Prometheus. 
Implementations of Metric in this package are Gauge, Counter, diff --git a/vendor/github.com/prometheus/client_golang/prometheus/observer.go b/vendor/github.com/prometheus/client_golang/prometheus/observer.go index 5806cd09e3..44128016fd 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/observer.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/observer.go @@ -50,3 +50,15 @@ type ObserverVec interface { Collector } + +// ExemplarObserver is implemented by Observers that offer the option of +// observing a value together with an exemplar. Its ObserveWithExemplar method +// works like the Observe method of an Observer but also replaces the currently +// saved exemplar (if any) with a new one, created from the provided value, the +// current time as timestamp, and the provided Labels. Empty Labels will lead to +// a valid (label-less) exemplar. But if Labels is nil, the current exemplar is +// left in place. ObserveWithExemplar panics if any of the provided labels are +// invalid or if the provided labels contain more than 64 runes in total. +type ExemplarObserver interface { + ObserveWithExemplar(value float64, exemplar Labels) +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go index fa535684f9..d1354b1016 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go @@ -62,6 +62,8 @@ func (r *responseWriterDelegator) WriteHeader(code int) { } func (r *responseWriterDelegator) Write(b []byte) (int, error) { + // If applicable, call WriteHeader here so that observeWriteHeader is + // handled appropriately. if !r.wroteHeader { r.WriteHeader(http.StatusOK) } @@ -82,12 +84,19 @@ func (d closeNotifierDelegator) CloseNotify() <-chan bool { return d.ResponseWriter.(http.CloseNotifier).CloseNotify() } func (d flusherDelegator) Flush() { + // If applicable, call WriteHeader here so that observeWriteHeader is + // handled appropriately. + if !d.wroteHeader { + d.WriteHeader(http.StatusOK) + } d.ResponseWriter.(http.Flusher).Flush() } func (d hijackerDelegator) Hijack() (net.Conn, *bufio.ReadWriter, error) { return d.ResponseWriter.(http.Hijacker).Hijack() } func (d readerFromDelegator) ReadFrom(re io.Reader) (int64, error) { + // If applicable, call WriteHeader here so that observeWriteHeader is + // handled appropriately. 
if !d.wroteHeader { d.WriteHeader(http.StatusOK) } diff --git a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go index cea5a90fd9..b0ee4678e5 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go @@ -144,7 +144,12 @@ func HandlerFor(reg prometheus.Gatherer, opts HandlerOpts) http.Handler { } } - contentType := expfmt.Negotiate(req.Header) + var contentType expfmt.Format + if opts.EnableOpenMetrics { + contentType = expfmt.NegotiateIncludingOpenMetrics(req.Header) + } else { + contentType = expfmt.Negotiate(req.Header) + } header := rsp.Header() header.Set(contentTypeHeader, string(contentType)) @@ -163,22 +168,38 @@ func HandlerFor(reg prometheus.Gatherer, opts HandlerOpts) http.Handler { enc := expfmt.NewEncoder(w, contentType) var lastErr error + + // handleError handles the error according to opts.ErrorHandling + // and returns true if we have to abort after the handling. + handleError := func(err error) bool { + if err == nil { + return false + } + lastErr = err + if opts.ErrorLog != nil { + opts.ErrorLog.Println("error encoding and sending metric family:", err) + } + errCnt.WithLabelValues("encoding").Inc() + switch opts.ErrorHandling { + case PanicOnError: + panic(err) + case HTTPErrorOnError: + httpError(rsp, err) + return true + } + // Do nothing in all other cases, including ContinueOnError. + return false + } + for _, mf := range mfs { - if err := enc.Encode(mf); err != nil { - lastErr = err - if opts.ErrorLog != nil { - opts.ErrorLog.Println("error encoding and sending metric family:", err) - } - errCnt.WithLabelValues("encoding").Inc() - switch opts.ErrorHandling { - case PanicOnError: - panic(err) - case ContinueOnError: - // Handled later. - case HTTPErrorOnError: - httpError(rsp, err) - return - } + if handleError(enc.Encode(mf)) { + return + } + } + if closer, ok := enc.(expfmt.Closer); ok { + // This in particular takes care of the final "# EOF\n" line for OpenMetrics. + if handleError(closer.Close()) { + return } } @@ -318,6 +339,16 @@ type HandlerOpts struct { // away). Until the implementation is improved, it is recommended to // implement a separate timeout in potentially slow Collectors. Timeout time.Duration + // If true, the experimental OpenMetrics encoding is added to the + // possible options during content negotiation. Note that Prometheus + // 2.5.0+ will negotiate OpenMetrics as first priority. OpenMetrics is + // the only way to transmit exemplars. However, the move to OpenMetrics + // is not completely transparent. Most notably, the values of "quantile" + // labels of Summaries and "le" labels of Histograms are formatted with + // a trailing ".0" if they would otherwise look like integer numbers + // (which changes the identity of the resulting series on the Prometheus + // server). + EnableOpenMetrics bool } // gzipAccepted returns whether the client will accept gzip-encoded content. 
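The exemplar support and the `EnableOpenMetrics` handler option introduced by this client_golang bump work together: exemplars are attached through the new `ExemplarAdder`/`ExemplarObserver` type assertions, and, as the `HandlerOpts` comment above notes, they are only transmitted when the handler negotiates the experimental OpenMetrics encoding. A minimal sketch of that flow, assuming the vendored packages above; the metric names and the `trace_id` label are purely illustrative:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Illustrative metrics; any valid names behave the same way.
	reqs := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "demo_requests_total",
		Help: "Total demo requests handled.",
	})
	lat := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "demo_request_duration_seconds",
		Help:    "Demo request latency.",
		Buckets: prometheus.DefBuckets,
	})
	prometheus.MustRegister(reqs, lat)

	// Per the doc comments in this bump, the returned counter and histogram
	// implementations also satisfy ExemplarAdder and ExemplarObserver, so
	// these type assertions are safe. Exemplar labels are capped at 64 runes.
	reqs.(prometheus.ExemplarAdder).AddWithExemplar(1, prometheus.Labels{"trace_id": "abc123"})
	lat.(prometheus.ExemplarObserver).ObserveWithExemplar(0.042, prometheus.Labels{"trace_id": "abc123"})

	// Exemplars only appear in the OpenMetrics exposition format, which must
	// be opted into during content negotiation via EnableOpenMetrics.
	http.Handle("/metrics", promhttp.HandlerFor(prometheus.DefaultGatherer, promhttp.HandlerOpts{
		EnableOpenMetrics: true,
	}))
	_ = http.ListenAndServe(":8080", nil)
}
```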
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/registry.go b/vendor/github.com/prometheus/client_golang/prometheus/registry.go index 6c32516aa2..c05d6ee1b3 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/registry.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/registry.go @@ -25,6 +25,7 @@ import ( "sync" "unicode/utf8" + "github.com/cespare/xxhash/v2" "github.com/golang/protobuf/proto" "github.com/prometheus/common/expfmt" @@ -74,7 +75,7 @@ func NewRegistry() *Registry { // NewPedanticRegistry returns a registry that checks during collection if each // collected Metric is consistent with its reported Desc, and if the Desc has // actually been registered with the registry. Unchecked Collectors (those whose -// Describe methed does not yield any descriptors) are excluded from the check. +// Describe method does not yield any descriptors) are excluded from the check. // // Usually, a Registry will be happy as long as the union of all collected // Metrics is consistent and valid even if some metrics are not consistent with @@ -266,7 +267,7 @@ func (r *Registry) Register(c Collector) error { descChan = make(chan *Desc, capDescChan) newDescIDs = map[uint64]struct{}{} newDimHashesByName = map[string]uint64{} - collectorID uint64 // Just a sum of all desc IDs. + collectorID uint64 // All desc IDs XOR'd together. duplicateDescErr error ) go func() { @@ -293,12 +294,12 @@ func (r *Registry) Register(c Collector) error { if _, exists := r.descIDs[desc.id]; exists { duplicateDescErr = fmt.Errorf("descriptor %s already exists with the same fully-qualified name and const label values", desc) } - // If it is not a duplicate desc in this collector, add it to + // If it is not a duplicate desc in this collector, XOR it to // the collectorID. (We allow duplicate descs within the same // collector, but their existence must be a no-op.) if _, exists := newDescIDs[desc.id]; !exists { newDescIDs[desc.id] = struct{}{} - collectorID += desc.id + collectorID ^= desc.id } // Are all the label names and the help string consistent with @@ -360,7 +361,7 @@ func (r *Registry) Unregister(c Collector) bool { var ( descChan = make(chan *Desc, capDescChan) descIDs = map[uint64]struct{}{} - collectorID uint64 // Just a sum of the desc IDs. + collectorID uint64 // All desc IDs XOR'd together. ) go func() { c.Describe(descChan) @@ -368,7 +369,7 @@ func (r *Registry) Unregister(c Collector) bool { }() for desc := range descChan { if _, exists := descIDs[desc.id]; !exists { - collectorID += desc.id + collectorID ^= desc.id descIDs[desc.id] = struct{}{} } } @@ -875,9 +876,9 @@ func checkMetricConsistency( } // Is the metric unique (i.e. no other metric with the same name and the same labels)? - h := hashNew() - h = hashAdd(h, name) - h = hashAddByte(h, separatorByte) + h := xxhash.New() + h.WriteString(name) + h.Write(separatorByteSlice) // Make sure label pairs are sorted. We depend on it for the consistency // check. 
if !sort.IsSorted(labelPairSorter(dtoMetric.Label)) { @@ -888,18 +889,19 @@ func checkMetricConsistency( dtoMetric.Label = copiedLabels } for _, lp := range dtoMetric.Label { - h = hashAdd(h, lp.GetName()) - h = hashAddByte(h, separatorByte) - h = hashAdd(h, lp.GetValue()) - h = hashAddByte(h, separatorByte) + h.WriteString(lp.GetName()) + h.Write(separatorByteSlice) + h.WriteString(lp.GetValue()) + h.Write(separatorByteSlice) } - if _, exists := metricHashes[h]; exists { + hSum := h.Sum64() + if _, exists := metricHashes[hSum]; exists { return fmt.Errorf( "collected metric %q { %s} was collected before with the same name and label values", name, dtoMetric, ) } - metricHashes[h] = struct{}{} + metricHashes[hSum] = struct{}{} return nil } diff --git a/vendor/github.com/prometheus/client_golang/prometheus/summary.go b/vendor/github.com/prometheus/client_golang/prometheus/summary.go index c970fdee0e..ae42e761a1 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/summary.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/summary.go @@ -208,7 +208,7 @@ func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary { s := &noObjectivesSummary{ desc: desc, labelPairs: makeLabelPairs(desc, labelValues), - counts: [2]*summaryCounts{&summaryCounts{}, &summaryCounts{}}, + counts: [2]*summaryCounts{{}, {}}, } s.init(s) // Init self-collection. return s diff --git a/vendor/github.com/prometheus/client_golang/prometheus/value.go b/vendor/github.com/prometheus/client_golang/prometheus/value.go index eb248f1087..2be470ce15 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/value.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/value.go @@ -16,8 +16,11 @@ package prometheus import ( "fmt" "sort" + "time" + "unicode/utf8" "github.com/golang/protobuf/proto" + "github.com/golang/protobuf/ptypes" dto "github.com/prometheus/client_model/go" ) @@ -25,7 +28,8 @@ import ( // ValueType is an enumeration of metric types that represent a simple value. type ValueType int -// Possible values for the ValueType enum. +// Possible values for the ValueType enum. Use UntypedValue to mark a metric +// with an unknown type. const ( _ ValueType = iota CounterValue @@ -69,7 +73,7 @@ func (v *valueFunc) Desc() *Desc { } func (v *valueFunc) Write(out *dto.Metric) error { - return populateMetric(v.valType, v.function(), v.labelPairs, out) + return populateMetric(v.valType, v.function(), v.labelPairs, nil, out) } // NewConstMetric returns a metric with one fixed value that cannot be @@ -116,19 +120,20 @@ func (m *constMetric) Desc() *Desc { } func (m *constMetric) Write(out *dto.Metric) error { - return populateMetric(m.valType, m.val, m.labelPairs, out) + return populateMetric(m.valType, m.val, m.labelPairs, nil, out) } func populateMetric( t ValueType, v float64, labelPairs []*dto.LabelPair, + e *dto.Exemplar, m *dto.Metric, ) error { m.Label = labelPairs switch t { case CounterValue: - m.Counter = &dto.Counter{Value: proto.Float64(v)} + m.Counter = &dto.Counter{Value: proto.Float64(v), Exemplar: e} case GaugeValue: m.Gauge = &dto.Gauge{Value: proto.Float64(v)} case UntypedValue: @@ -160,3 +165,40 @@ func makeLabelPairs(desc *Desc, labelValues []string) []*dto.LabelPair { sort.Sort(labelPairSorter(labelPairs)) return labelPairs } + +// ExemplarMaxRunes is the max total number of runes allowed in exemplar labels. +const ExemplarMaxRunes = 64 + +// newExemplar creates a new dto.Exemplar from the provided values. 
An error is +// returned if any of the label names or values are invalid or if the total +// number of runes in the label names and values exceeds ExemplarMaxRunes. +func newExemplar(value float64, ts time.Time, l Labels) (*dto.Exemplar, error) { + e := &dto.Exemplar{} + e.Value = proto.Float64(value) + tsProto, err := ptypes.TimestampProto(ts) + if err != nil { + return nil, err + } + e.Timestamp = tsProto + labelPairs := make([]*dto.LabelPair, 0, len(l)) + var runes int + for name, value := range l { + if !checkLabelName(name) { + return nil, fmt.Errorf("exemplar label name %q is invalid", name) + } + runes += utf8.RuneCountInString(name) + if !utf8.ValidString(value) { + return nil, fmt.Errorf("exemplar label value %q is not valid UTF-8", value) + } + runes += utf8.RuneCountInString(value) + labelPairs = append(labelPairs, &dto.LabelPair{ + Name: proto.String(name), + Value: proto.String(value), + }) + } + if runes > ExemplarMaxRunes { + return nil, fmt.Errorf("exemplar labels have %d runes, exceeding the limit of %d", runes, ExemplarMaxRunes) + } + e.Label = labelPairs + return e, nil +} diff --git a/vendor/github.com/prometheus/client_golang/prometheus/vec.go b/vendor/github.com/prometheus/client_golang/prometheus/vec.go index 14ed9e856d..d53848dc48 100644 --- a/vendor/github.com/prometheus/client_golang/prometheus/vec.go +++ b/vendor/github.com/prometheus/client_golang/prometheus/vec.go @@ -24,7 +24,7 @@ import ( // their label values. metricVec is not used directly (and therefore // unexported). It is used as a building block for implementations of vectors of // a given metric type, like GaugeVec, CounterVec, SummaryVec, and HistogramVec. -// It also handles label currying. It uses basicMetricVec internally. +// It also handles label currying. type metricVec struct { *metricMap @@ -91,6 +91,18 @@ func (m *metricVec) Delete(labels Labels) bool { return m.metricMap.deleteByHashWithLabels(h, labels, m.curry) } +// Without explicit forwarding of Describe, Collect, Reset, those methods won't +// show up in GoDoc. + +// Describe implements Collector. +func (m *metricVec) Describe(ch chan<- *Desc) { m.metricMap.Describe(ch) } + +// Collect implements Collector. +func (m *metricVec) Collect(ch chan<- Metric) { m.metricMap.Collect(ch) } + +// Reset deletes all metrics in this vector. 
+func (m *metricVec) Reset() { m.metricMap.Reset() } + func (m *metricVec) curryWith(labels Labels) (*metricVec, error) { var ( newCurry []curriedLabelValue diff --git a/vendor/go.uber.org/atomic/.gitignore b/vendor/go.uber.org/atomic/.gitignore index 0a4504f110..c3fa253893 100644 --- a/vendor/go.uber.org/atomic/.gitignore +++ b/vendor/go.uber.org/atomic/.gitignore @@ -1,6 +1,7 @@ +/bin .DS_Store /vendor -/cover +cover.html cover.out lint.log diff --git a/vendor/go.uber.org/atomic/.travis.yml b/vendor/go.uber.org/atomic/.travis.yml index 0f3769e5fa..4e73268b60 100644 --- a/vendor/go.uber.org/atomic/.travis.yml +++ b/vendor/go.uber.org/atomic/.travis.yml @@ -2,26 +2,26 @@ sudo: false language: go go_import_path: go.uber.org/atomic -go: - - 1.11.x - - 1.12.x +env: + global: + - GO111MODULE=on matrix: include: - go: 1.12.x - env: NO_TEST=yes LINT=yes + - go: 1.13.x + env: LINT=1 cache: directories: - vendor -install: - - make install_ci +before_install: + - go version script: - - test -n "$NO_TEST" || make test_ci - - test -n "$NO_TEST" || scripts/test-ubergo.sh - - test -z "$LINT" || make install_lint lint + - test -z "$LINT" || make lint + - make cover after_success: - bash <(curl -s https://codecov.io/bash) diff --git a/vendor/go.uber.org/atomic/CHANGELOG.md b/vendor/go.uber.org/atomic/CHANGELOG.md new file mode 100644 index 0000000000..aef8b6ebc4 --- /dev/null +++ b/vendor/go.uber.org/atomic/CHANGELOG.md @@ -0,0 +1,64 @@ +# Changelog +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [1.6.0] - 2020-02-24 +### Changed +- Drop library dependency on `golang.org/x/{lint, tools}`. + +## [1.5.1] - 2019-11-19 +- Fix bug where `Bool.CAS` and `Bool.Toggle` do work correctly together + causing `CAS` to fail even though the old value matches. + +## [1.5.0] - 2019-10-29 +### Changed +- With Go modules, only the `go.uber.org/atomic` import path is supported now. + If you need to use the old import path, please add a `replace` directive to + your `go.mod`. + +## [1.4.0] - 2019-05-01 +### Added + - Add `atomic.Error` type for atomic operations on `error` values. + +## [1.3.2] - 2018-05-02 +### Added +- Add `atomic.Duration` type for atomic operations on `time.Duration` values. + +## [1.3.1] - 2017-11-14 +### Fixed +- Revert optimization for `atomic.String.Store("")` which caused data races. + +## [1.3.0] - 2017-11-13 +### Added +- Add `atomic.Bool.CAS` for compare-and-swap semantics on bools. + +### Changed +- Optimize `atomic.String.Store("")` by avoiding an allocation. + +## [1.2.0] - 2017-04-12 +### Added +- Shadow `atomic.Value` from `sync/atomic`. + +## [1.1.0] - 2017-03-10 +### Added +- Add atomic `Float64` type. + +### Changed +- Support new `go.uber.org/atomic` import path. + +## [1.0.0] - 2016-07-18 + +- Initial release. 
+ +[1.6.0]: https://github.com/uber-go/atomic/compare/v1.5.1...v1.6.0 +[1.5.1]: https://github.com/uber-go/atomic/compare/v1.5.0...v1.5.1 +[1.5.0]: https://github.com/uber-go/atomic/compare/v1.4.0...v1.5.0 +[1.4.0]: https://github.com/uber-go/atomic/compare/v1.3.2...v1.4.0 +[1.3.2]: https://github.com/uber-go/atomic/compare/v1.3.1...v1.3.2 +[1.3.1]: https://github.com/uber-go/atomic/compare/v1.3.0...v1.3.1 +[1.3.0]: https://github.com/uber-go/atomic/compare/v1.2.0...v1.3.0 +[1.2.0]: https://github.com/uber-go/atomic/compare/v1.1.0...v1.2.0 +[1.1.0]: https://github.com/uber-go/atomic/compare/v1.0.0...v1.1.0 +[1.0.0]: https://github.com/uber-go/atomic/releases/tag/v1.0.0 diff --git a/vendor/go.uber.org/atomic/Makefile b/vendor/go.uber.org/atomic/Makefile index 1ef263075d..39af0fb63f 100644 --- a/vendor/go.uber.org/atomic/Makefile +++ b/vendor/go.uber.org/atomic/Makefile @@ -1,51 +1,35 @@ -# Many Go tools take file globs or directories as arguments instead of packages. -PACKAGE_FILES ?= *.go +# Directory to place `go install`ed binaries into. +export GOBIN ?= $(shell pwd)/bin -# For pre go1.6 -export GO15VENDOREXPERIMENT=1 +GOLINT = $(GOBIN)/golint +GO_FILES ?= *.go .PHONY: build build: - go build -i ./... - - -.PHONY: install -install: - glide --version || go get github.com/Masterminds/glide - glide install - + go build ./... .PHONY: test test: - go test -cover -race ./... + go test -race ./... +.PHONY: gofmt +gofmt: + $(eval FMT_LOG := $(shell mktemp -t gofmt.XXXXX)) + gofmt -e -s -l $(GO_FILES) > $(FMT_LOG) || true + @[ ! -s "$(FMT_LOG)" ] || (echo "gofmt failed:" && cat $(FMT_LOG) && false) -.PHONY: install_ci -install_ci: install - go get github.com/wadey/gocovmerge - go get github.com/mattn/goveralls - go get golang.org/x/tools/cmd/cover - -.PHONY: install_lint -install_lint: - go get golang.org/x/lint/golint +$(GOLINT): + go install golang.org/x/lint/golint +.PHONY: golint +golint: $(GOLINT) + $(GOLINT) ./... .PHONY: lint -lint: - @rm -rf lint.log - @echo "Checking formatting..." - @gofmt -d -s $(PACKAGE_FILES) 2>&1 | tee lint.log - @echo "Checking vet..." - @go vet ./... 2>&1 | tee -a lint.log;) - @echo "Checking lint..." - @golint $$(go list ./...) 2>&1 | tee -a lint.log - @echo "Checking for unresolved FIXMEs..." - @git grep -i fixme | grep -v -e vendor -e Makefile | tee -a lint.log - @[ ! -s lint.log ] - - -.PHONY: test_ci -test_ci: install_ci build - ./scripts/cover.sh $(shell go list $(PACKAGES)) +lint: gofmt golint + +.PHONY: cover +cover: + go test -coverprofile=cover.out -coverpkg ./... -v ./... + go tool cover -html=cover.out -o cover.html diff --git a/vendor/go.uber.org/atomic/README.md b/vendor/go.uber.org/atomic/README.md index 62eb8e5760..ade0c20f16 100644 --- a/vendor/go.uber.org/atomic/README.md +++ b/vendor/go.uber.org/atomic/README.md @@ -3,9 +3,34 @@ Simple wrappers for primitive types to enforce atomic access. ## Installation -`go get -u go.uber.org/atomic` + +```shell +$ go get -u go.uber.org/atomic@v1 +``` + +### Legacy Import Path + +As of v1.5.0, the import path `go.uber.org/atomic` is the only supported way +of using this package. If you are using Go modules, this package will fail to +compile with the legacy import path path `github.com/uber-go/atomic`. + +We recommend migrating your code to the new import path but if you're unable +to do so, or if your dependencies are still using the old import path, you +will have to add a `replace` directive to your `go.mod` file downgrading the +legacy import path to an older version. 
+ +``` +replace github.com/uber-go/atomic => github.com/uber-go/atomic v1.4.0 +``` + +You can do so automatically by running the following command. + +```shell +$ go mod edit -replace github.com/uber-go/atomic=github.com/uber-go/atomic@v1.4.0 +``` ## Usage + The standard library's `sync/atomic` is powerful, but it's easy to forget which variables must be accessed atomically. `go.uber.org/atomic` preserves all the functionality of the standard library, but wraps the primitive types to @@ -21,9 +46,11 @@ atom.CAS(40, 11) See the [documentation][doc] for a complete API specification. ## Development Status + Stable. -___ +--- + Released under the [MIT License](LICENSE.txt). [doc-img]: https://godoc.org/github.com/uber-go/atomic?status.svg diff --git a/vendor/go.uber.org/atomic/atomic.go b/vendor/go.uber.org/atomic/atomic.go index 1db6849fca..ad5fa0980a 100644 --- a/vendor/go.uber.org/atomic/atomic.go +++ b/vendor/go.uber.org/atomic/atomic.go @@ -250,11 +250,16 @@ func (b *Bool) Swap(new bool) bool { // Toggle atomically negates the Boolean and returns the previous value. func (b *Bool) Toggle() bool { - return truthy(atomic.AddUint32(&b.v, 1) - 1) + for { + old := b.Load() + if b.CAS(old, !old) { + return old + } + } } func truthy(n uint32) bool { - return n&1 == 1 + return n == 1 } func boolToInt(b bool) uint32 { diff --git a/vendor/go.uber.org/atomic/glide.lock b/vendor/go.uber.org/atomic/glide.lock deleted file mode 100644 index 3c72c59976..0000000000 --- a/vendor/go.uber.org/atomic/glide.lock +++ /dev/null @@ -1,17 +0,0 @@ -hash: f14d51408e3e0e4f73b34e4039484c78059cd7fc5f4996fdd73db20dc8d24f53 -updated: 2016-10-27T00:10:51.16960137-07:00 -imports: [] -testImports: -- name: github.com/davecgh/go-spew - version: 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d - subpackages: - - spew -- name: github.com/pmezard/go-difflib - version: d8ed2627bdf02c080bf22230dbb337003b7aba2d - subpackages: - - difflib -- name: github.com/stretchr/testify - version: d77da356e56a7428ad25149ca77381849a6a5232 - subpackages: - - assert - - require diff --git a/vendor/go.uber.org/atomic/glide.yaml b/vendor/go.uber.org/atomic/glide.yaml deleted file mode 100644 index 4cf608ec0f..0000000000 --- a/vendor/go.uber.org/atomic/glide.yaml +++ /dev/null @@ -1,6 +0,0 @@ -package: go.uber.org/atomic -testImport: -- package: github.com/stretchr/testify - subpackages: - - assert - - require diff --git a/vendor/go.uber.org/atomic/go.mod b/vendor/go.uber.org/atomic/go.mod new file mode 100644 index 0000000000..a935daebb9 --- /dev/null +++ b/vendor/go.uber.org/atomic/go.mod @@ -0,0 +1,10 @@ +module go.uber.org/atomic + +require ( + github.com/davecgh/go-spew v1.1.1 // indirect + github.com/stretchr/testify v1.3.0 + golang.org/x/lint v0.0.0-20190930215403-16217165b5de + golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c // indirect +) + +go 1.13 diff --git a/vendor/go.uber.org/atomic/go.sum b/vendor/go.uber.org/atomic/go.sum new file mode 100644 index 0000000000..51b2b62afb --- /dev/null +++ b/vendor/go.uber.org/atomic/go.sum @@ -0,0 +1,22 @@ +github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod 
h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs= +golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd h1:/e+gpKk9r3dJobndpTytxS2gOy6m5uvpg+ISQoEcusQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c h1:IGkKhmfzcztjm6gYkykvu/NiS8kaqbCWAEWWAyf8J5U= +golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/vendor/go.uber.org/multierr/.gitignore b/vendor/go.uber.org/multierr/.gitignore index 61ead86667..b9a05e3da0 100644 --- a/vendor/go.uber.org/multierr/.gitignore +++ b/vendor/go.uber.org/multierr/.gitignore @@ -1 +1,4 @@ /vendor +cover.html +cover.out +/bin diff --git a/vendor/go.uber.org/multierr/.travis.yml b/vendor/go.uber.org/multierr/.travis.yml index a6412b7fed..786c917a39 100644 --- a/vendor/go.uber.org/multierr/.travis.yml +++ b/vendor/go.uber.org/multierr/.travis.yml @@ -5,6 +5,7 @@ go_import_path: go.uber.org/multierr env: global: - GO15VENDOREXPERIMENT=1 + - GO111MODULE=on go: - 1.11.x @@ -18,16 +19,11 @@ cache: before_install: - go version -install: -- | - set -e - make install_ci - script: - | set -e make lint - make test_ci + make cover after_success: - bash <(curl -s https://codecov.io/bash) diff --git a/vendor/go.uber.org/multierr/CHANGELOG.md b/vendor/go.uber.org/multierr/CHANGELOG.md index f1b852cf3f..3110c5af0b 100644 --- a/vendor/go.uber.org/multierr/CHANGELOG.md +++ b/vendor/go.uber.org/multierr/CHANGELOG.md @@ -1,6 +1,25 @@ Releases ======== +v1.5.0 (2020-02-24) +=================== + +- Drop library dependency on development-time tooling. + + +v1.4.0 (2019-11-04) +=================== + +- Add `AppendInto` function to more ergonomically build errors inside a + loop. + + +v1.3.0 (2019-10-29) +=================== + +- Switch to Go modules. + + v1.2.0 (2019-09-26) =================== diff --git a/vendor/go.uber.org/multierr/Makefile b/vendor/go.uber.org/multierr/Makefile index b4bf73d8c3..416018237e 100644 --- a/vendor/go.uber.org/multierr/Makefile +++ b/vendor/go.uber.org/multierr/Makefile @@ -1,23 +1,17 @@ -export GO15VENDOREXPERIMENT=1 - -PACKAGES := $(shell glide nv) +# Directory to put `go install`ed binaries in. +export GOBIN ?= $(shell pwd)/bin GO_FILES := $(shell \ find . 
'(' -path '*/.*' -o -path './vendor' ')' -prune \ -o -name '*.go' -print | cut -b3-) -.PHONY: install -install: - glide --version || go get github.com/Masterminds/glide - glide install - .PHONY: build build: - go build -i $(PACKAGES) + go build ./... .PHONY: test test: - go test -cover -race $(PACKAGES) + go test -race ./... .PHONY: gofmt gofmt: @@ -25,50 +19,24 @@ gofmt: @gofmt -e -s -l $(GO_FILES) > $(FMT_LOG) || true @[ ! -s "$(FMT_LOG)" ] || (echo "gofmt failed:" | cat - $(FMT_LOG) && false) -.PHONY: govet -govet: - $(eval VET_LOG := $(shell mktemp -t govet.XXXXX)) - @go vet $(PACKAGES) 2>&1 \ - | grep -v '^exit status' > $(VET_LOG) || true - @[ ! -s "$(VET_LOG)" ] || (echo "govet failed:" | cat - $(VET_LOG) && false) - .PHONY: golint golint: - @go get golang.org/x/lint/golint - $(eval LINT_LOG := $(shell mktemp -t golint.XXXXX)) - @cat /dev/null > $(LINT_LOG) - @$(foreach pkg, $(PACKAGES), golint $(pkg) >> $(LINT_LOG) || true;) - @[ ! -s "$(LINT_LOG)" ] || (echo "golint failed:" | cat - $(LINT_LOG) && false) + @go install golang.org/x/lint/golint + @$(GOBIN)/golint ./... .PHONY: staticcheck staticcheck: - @go get honnef.co/go/tools/cmd/staticcheck - $(eval STATICCHECK_LOG := $(shell mktemp -t staticcheck.XXXXX)) - @staticcheck $(PACKAGES) 2>&1 > $(STATICCHECK_LOG) || true - @[ ! -s "$(STATICCHECK_LOG)" ] || (echo "staticcheck failed:" | cat - $(STATICCHECK_LOG) && false) + @go install honnef.co/go/tools/cmd/staticcheck + @$(GOBIN)/staticcheck ./... .PHONY: lint -lint: gofmt govet golint staticcheck +lint: gofmt golint staticcheck .PHONY: cover cover: - ./scripts/cover.sh $(shell go list $(PACKAGES)) + go test -coverprofile=cover.out -coverpkg=./... -v ./... go tool cover -html=cover.out -o cover.html update-license: - @go get go.uber.org/tools/update-license - @update-license \ - $(shell go list -json $(PACKAGES) | \ - jq -r '.Dir + "/" + (.GoFiles | .[])') - -############################################################################## - -.PHONY: install_ci -install_ci: install - go get github.com/wadey/gocovmerge - go get github.com/mattn/goveralls - go get golang.org/x/tools/cmd/cover - -.PHONY: test_ci -test_ci: install_ci - ./scripts/cover.sh $(shell go list $(PACKAGES)) + @go install go.uber.org/tools/update-license + @$(GOBIN)/update-license $(GO_FILES) diff --git a/vendor/go.uber.org/multierr/error.go b/vendor/go.uber.org/multierr/error.go index d4be183448..04eb9618c1 100644 --- a/vendor/go.uber.org/multierr/error.go +++ b/vendor/go.uber.org/multierr/error.go @@ -1,4 +1,4 @@ -// Copyright (c) 2017 Uber Technologies, Inc. +// Copyright (c) 2019 Uber Technologies, Inc. // // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal @@ -130,7 +130,7 @@ type errorGroup interface { } // Errors returns a slice containing zero or more errors that the supplied -// error is composed of. If the error is nil, the returned slice is empty. +// error is composed of. If the error is nil, a nil slice is returned. // // err := multierr.Append(r.Close(), w.Close()) // errors := multierr.Errors(err) @@ -397,3 +397,53 @@ func Append(left error, right error) error { errors := [2]error{left, right} return fromSlice(errors[0:]) } + +// AppendInto appends an error into the destination of an error pointer and +// returns whether the error being appended was non-nil. 
+// +// var err error +// multierr.AppendInto(&err, r.Close()) +// multierr.AppendInto(&err, w.Close()) +// +// The above is equivalent to, +// +// err := multierr.Append(r.Close(), w.Close()) +// +// As AppendInto reports whether the provided error was non-nil, it may be +// used to build a multierr error in a loop more ergonomically. For example: +// +// var err error +// for line := range lines { +// var item Item +// if multierr.AppendInto(&err, parse(line, &item)) { +// continue +// } +// items = append(items, item) +// } +// +// Compare this with a verison that relies solely on Append: +// +// var err error +// for line := range lines { +// var item Item +// if parseErr := parse(line, &item); parseErr != nil { +// err = multierr.Append(err, parseErr) +// continue +// } +// items = append(items, item) +// } +func AppendInto(into *error, err error) (errored bool) { + if into == nil { + // We panic if 'into' is nil. This is not documented above + // because suggesting that the pointer must be non-nil may + // confuse users into thinking that the error that it points + // to must be non-nil. + panic("misuse of multierr.AppendInto: into pointer must not be nil") + } + + if err == nil { + return false + } + *into = Append(*into, err) + return true +} diff --git a/vendor/go.uber.org/multierr/glide.lock b/vendor/go.uber.org/multierr/glide.lock deleted file mode 100644 index f9ea94c334..0000000000 --- a/vendor/go.uber.org/multierr/glide.lock +++ /dev/null @@ -1,19 +0,0 @@ -hash: b53b5e9a84b9cb3cc4b2d0499e23da2feca1eec318ce9bb717ecf35bf24bf221 -updated: 2017-04-10T13:34:45.671678062-07:00 -imports: -- name: go.uber.org/atomic - version: 3b8db5e93c4c02efbc313e17b2e796b0914a01fb -testImports: -- name: github.com/davecgh/go-spew - version: 6d212800a42e8ab5c146b8ace3490ee17e5225f9 - subpackages: - - spew -- name: github.com/pmezard/go-difflib - version: d8ed2627bdf02c080bf22230dbb337003b7aba2d - subpackages: - - difflib -- name: github.com/stretchr/testify - version: 69483b4bd14f5845b5a1e55bca19e954e827f1d0 - subpackages: - - assert - - require diff --git a/vendor/go.uber.org/multierr/go.mod b/vendor/go.uber.org/multierr/go.mod new file mode 100644 index 0000000000..58d5f90bbd --- /dev/null +++ b/vendor/go.uber.org/multierr/go.mod @@ -0,0 +1,12 @@ +module go.uber.org/multierr + +go 1.12 + +require ( + github.com/stretchr/testify v1.3.0 + go.uber.org/atomic v1.6.0 + go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee + golang.org/x/lint v0.0.0-20190930215403-16217165b5de + golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 // indirect + honnef.co/go/tools v0.0.1-2019.2.3 +) diff --git a/vendor/go.uber.org/multierr/go.sum b/vendor/go.uber.org/multierr/go.sum new file mode 100644 index 0000000000..557fbba28f --- /dev/null +++ b/vendor/go.uber.org/multierr/go.sum @@ -0,0 +1,45 @@ +github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kr/pretty 
v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk= +go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ= +go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4= +go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs= +golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c h1:IGkKhmfzcztjm6gYkykvu/NiS8kaqbCWAEWWAyf8J5U= +golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs= +golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM= +honnef.co/go/tools 
v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/deepcopy.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/deepcopy.go index 3c7ac0060f..761e27cc42 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/deepcopy.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/deepcopy.go @@ -284,5 +284,11 @@ func (in *JSONSchemaProps) DeepCopy() *JSONSchemaProps { } } + if in.XMapType != nil { + in, out := &in.XMapType, &out.XMapType + *out = new(string) + **out = **in + } + return out } diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types_jsonschema.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types_jsonschema.go index 44169c767d..c0ac63e575 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types_jsonschema.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types_jsonschema.go @@ -103,12 +103,25 @@ type JSONSchemaProps struct { // may be used on any type of list (struct, scalar, ...). // 2) `set`: // Sets are lists that must not have multiple items with the same value. Each - // value must be a scalar or an array with x-kubernetes-list-type `atomic`. + // value must be a scalar, an object with x-kubernetes-map-type `atomic` or an + // array with x-kubernetes-list-type `atomic`. // 3) `map`: // These lists are like maps in that their elements have a non-index key // used to identify them. Order is preserved upon merge. The map tag // must only be used on a list with elements of type object. XListType *string + + // x-kubernetes-map-type annotates an object to further describe its topology. + // This extension must only be used when type is object and may have 2 possible values: + // + // 1) `granular`: + // These maps are actual maps (key-value pairs) and each fields are independent + // from each other (they can each be manipulated by separate actors). This is + // the default behaviour for all maps. + // 2) `atomic`: the list is treated as a single entity, like a scalar. + // Atomic maps will be entirely replaced when updated. + // +optional + XMapType *string } // JSON represents any valid JSON value. 
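
The hunk above introduces the `XMapType` field, which surfaces the `x-kubernetes-map-type` extension (`granular` vs. `atomic`) on object schemas. As a minimal sketch of how that field might be set when building a CRD schema programmatically: this assumes the public `k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1` types (which mirror the internal package changed above), and the `annotations`/`endpoint` property names are purely illustrative.

```go
package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	granular := "granular"
	atomic := "atomic"

	// Hypothetical spec fragment for a custom resource:
	//   - "annotations" is a plain string map whose keys may be owned by
	//     different controllers, so it stays granular (the default).
	//   - "endpoint" is an object that should always be replaced as a
	//     single unit, so it is marked atomic.
	schema := apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"annotations": {
				Type:     "object",
				XMapType: &granular, // default behaviour, spelled out for clarity
				AdditionalProperties: &apiextv1.JSONSchemaPropsOrBool{
					Schema: &apiextv1.JSONSchemaProps{Type: "string"},
				},
			},
			"endpoint": {
				Type:     "object",
				XMapType: &atomic, // whole-object replacement on update
				Properties: map[string]apiextv1.JSONSchemaProps{
					"host": {Type: "string"},
					"port": {Type: "integer"},
				},
			},
		},
	}

	fmt.Println(*schema.Properties["endpoint"].XMapType) // prints "atomic"
}
```

The distinction mainly matters for merge behaviour (e.g., server-side apply): granular maps can be co-owned key by key, while an atomic map is owned and replaced as one value.
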
diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/conversion.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/conversion.go index 70a2265c8c..c056dd91ff 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/conversion.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/conversion.go @@ -20,23 +20,9 @@ import ( "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions" apiequality "k8s.io/apimachinery/pkg/api/equality" "k8s.io/apimachinery/pkg/conversion" - "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/util/json" ) -func addConversionFuncs(scheme *runtime.Scheme) error { - // Add non-generated conversion functions - err := scheme.AddConversionFuncs( - Convert_apiextensions_JSONSchemaProps_To_v1_JSONSchemaProps, - Convert_apiextensions_JSON_To_v1_JSON, - Convert_v1_JSON_To_apiextensions_JSON, - ) - if err != nil { - return err - } - return nil -} - func Convert_apiextensions_JSONSchemaProps_To_v1_JSONSchemaProps(in *apiextensions.JSONSchemaProps, out *JSONSchemaProps, s conversion.Scope) error { if err := autoConvert_apiextensions_JSONSchemaProps_To_v1_JSONSchemaProps(in, out, s); err != nil { return err diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/deepcopy.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/deepcopy.go index b8c44c6966..84dda976b2 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/deepcopy.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/deepcopy.go @@ -244,5 +244,11 @@ func (in *JSONSchemaProps) DeepCopy() *JSONSchemaProps { } } + if in.XMapType != nil { + in, out := &in.XMapType, &out.XMapType + *out = new(string) + **out = **in + } + return out } diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.pb.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.pb.go index 1eba9e372e..5099b4144b 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.pb.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.pb.go @@ -785,190 +785,191 @@ func init() { } var fileDescriptor_f5a35c9667703937 = []byte{ - // 2919 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x5a, 0xcd, 0x6f, 0x24, 0x47, - 0x15, 0xdf, 0x1e, 0x7f, 0x8d, 0xcb, 0xf6, 0xda, 0xae, 0x5d, 0x9b, 0x5e, 0x67, 0xd7, 0xe3, 0x9d, - 0x90, 0xe0, 0x84, 0xcd, 0x38, 0xbb, 0x24, 0x24, 0xe4, 0x00, 0xf2, 0xd8, 0x4e, 0x70, 0xd6, 0x5e, - 0x5b, 0x35, 0xbb, 0x1b, 0x27, 0x41, 0x4a, 0xca, 0xdd, 0xe5, 0x71, 0xc7, 0xfd, 0xb5, 0x5d, 0xdd, - 0x63, 0x5b, 0x02, 0x29, 0x02, 0x45, 0x40, 0x24, 0x08, 0x07, 0x04, 0xe2, 0x80, 0x10, 0x42, 0x39, - 0xc0, 0x01, 0x6e, 0xf0, 0x2f, 0xe4, 0x82, 0x94, 0x03, 0x42, 0x91, 0x90, 0x46, 0x64, 0xf8, 0x13, - 0x00, 0x21, 0x7c, 0x40, 0xa8, 0x3e, 0xba, 0xba, 0xa6, 0x67, 0x66, 0x77, 0xb5, 0x1e, 0x27, 0xb7, - 0x99, 0xf7, 0xf5, 0x7b, 0xf5, 0xea, 0xd5, 0xab, 0xf7, 0x6a, 0x06, 0xe0, 0x83, 0x17, 0x69, 0xc5, - 0x09, 0x96, 0x0e, 0x92, 0x5d, 0x12, 0xf9, 0x24, 0x26, 0x74, 0xa9, 0x41, 0x7c, 0x3b, 0x88, 0x96, - 0x24, 0x03, 0x87, 0x0e, 0x39, 0x8a, 0x89, 0x4f, 0x9d, 0xc0, 0xa7, 0xcf, 0xe0, 0xd0, 0xa1, 0x24, - 0x6a, 0x90, 0x68, 0x29, 0x3c, 0xa8, 0x33, 0x1e, 0x6d, 0x17, 0x58, 0x6a, 0x5c, 0x5f, 0xaa, 0x13, - 0x9f, 0x44, 0x38, 0x26, 0x76, 0x25, 0x8c, 0x82, 0x38, 0x80, 0x2f, 0x0a, 0x4b, 0x95, 0x36, 0xc1, - 0xb7, 0x94, 0xa5, 0x4a, 0x78, 0x50, 0x67, 0x3c, 0xda, 0x2e, 0x50, 
0x69, 0x5c, 0x9f, 0x7b, 0xa6, - 0xee, 0xc4, 0xfb, 0xc9, 0x6e, 0xc5, 0x0a, 0xbc, 0xa5, 0x7a, 0x50, 0x0f, 0x96, 0xb8, 0xc1, 0xdd, - 0x64, 0x8f, 0x7f, 0xe3, 0x5f, 0xf8, 0x27, 0x01, 0x34, 0xf7, 0x5c, 0xe6, 0xb2, 0x87, 0xad, 0x7d, - 0xc7, 0x27, 0xd1, 0x71, 0xe6, 0xa7, 0x47, 0x62, 0xdc, 0xc5, 0xbd, 0xb9, 0xa5, 0x5e, 0x5a, 0x51, - 0xe2, 0xc7, 0x8e, 0x47, 0x3a, 0x14, 0xbe, 0xfa, 0x20, 0x05, 0x6a, 0xed, 0x13, 0x0f, 0xe7, 0xf5, - 0xca, 0x27, 0x06, 0x98, 0x5e, 0x09, 0xfc, 0x06, 0x89, 0xd8, 0x02, 0x11, 0xb9, 0x97, 0x10, 0x1a, - 0xc3, 0x2a, 0x18, 0x48, 0x1c, 0xdb, 0x34, 0x16, 0x8c, 0xc5, 0xd1, 0xea, 0xb3, 0x1f, 0x35, 0x4b, - 0xe7, 0x5a, 0xcd, 0xd2, 0xc0, 0x9d, 0xf5, 0xd5, 0x93, 0x66, 0xe9, 0x6a, 0x2f, 0xa4, 0xf8, 0x38, - 0x24, 0xb4, 0x72, 0x67, 0x7d, 0x15, 0x31, 0x65, 0xf8, 0x0a, 0x98, 0xb6, 0x09, 0x75, 0x22, 0x62, - 0x2f, 0x6f, 0xaf, 0xdf, 0x15, 0xf6, 0xcd, 0x02, 0xb7, 0x78, 0x49, 0x5a, 0x9c, 0x5e, 0xcd, 0x0b, - 0xa0, 0x4e, 0x1d, 0xb8, 0x03, 0x46, 0x82, 0xdd, 0x77, 0x88, 0x15, 0x53, 0x73, 0x60, 0x61, 0x60, - 0x71, 0xec, 0xc6, 0x33, 0x95, 0x6c, 0xf3, 0x94, 0x0b, 0x7c, 0xc7, 0xe4, 0x62, 0x2b, 0x08, 0x1f, - 0xae, 0xa5, 0x9b, 0x56, 0x9d, 0x94, 0x68, 0x23, 0x5b, 0xc2, 0x0a, 0x4a, 0xcd, 0x95, 0x7f, 0x53, - 0x00, 0x50, 0x5f, 0x3c, 0x0d, 0x03, 0x9f, 0x92, 0xbe, 0xac, 0x9e, 0x82, 0x29, 0x8b, 0x5b, 0x8e, - 0x89, 0x2d, 0x71, 0xcd, 0xc2, 0xa3, 0x78, 0x6f, 0x4a, 0xfc, 0xa9, 0x95, 0x9c, 0x39, 0xd4, 0x01, - 0x00, 0x6f, 0x83, 0xe1, 0x88, 0xd0, 0xc4, 0x8d, 0xcd, 0x81, 0x05, 0x63, 0x71, 0xec, 0xc6, 0xb5, - 0x9e, 0x50, 0x3c, 0xb5, 0x59, 0xf2, 0x55, 0x1a, 0xd7, 0x2b, 0xb5, 0x18, 0xc7, 0x09, 0xad, 0x9e, - 0x97, 0x48, 0xc3, 0x88, 0xdb, 0x40, 0xd2, 0x56, 0xf9, 0x7f, 0x06, 0x98, 0xd2, 0xa3, 0xd4, 0x70, - 0xc8, 0x21, 0x8c, 0xc0, 0x48, 0x24, 0x92, 0x85, 0xc7, 0x69, 0xec, 0xc6, 0xcd, 0xca, 0xa3, 0x9e, - 0xa8, 0x4a, 0x47, 0xfe, 0x55, 0xc7, 0xd8, 0x76, 0xc9, 0x2f, 0x28, 0x05, 0x82, 0x0d, 0x50, 0x8c, - 0xe4, 0x1e, 0xf1, 0x44, 0x1a, 0xbb, 0xb1, 0xd1, 0x1f, 0x50, 0x61, 0xb3, 0x3a, 0xde, 0x6a, 0x96, - 0x8a, 0xe9, 0x37, 0xa4, 0xb0, 0xca, 0xbf, 0x2a, 0x80, 0xf9, 0x95, 0x84, 0xc6, 0x81, 0x87, 0x08, - 0x0d, 0x92, 0xc8, 0x22, 0x2b, 0x81, 0x9b, 0x78, 0xfe, 0x2a, 0xd9, 0x73, 0x7c, 0x27, 0x66, 0x39, - 0xba, 0x00, 0x06, 0x7d, 0xec, 0x11, 0x99, 0x33, 0xe3, 0x32, 0x92, 0x83, 0xb7, 0xb0, 0x47, 0x10, - 0xe7, 0x30, 0x09, 0x96, 0x22, 0xf2, 0x04, 0x28, 0x89, 0xdb, 0xc7, 0x21, 0x41, 0x9c, 0x03, 0x9f, - 0x04, 0xc3, 0x7b, 0x41, 0xe4, 0x61, 0xb1, 0x7b, 0xa3, 0xd9, 0x7e, 0xbc, 0xcc, 0xa9, 0x48, 0x72, - 0xe1, 0xf3, 0x60, 0xcc, 0x26, 0xd4, 0x8a, 0x9c, 0x90, 0x41, 0x9b, 0x83, 0x5c, 0xf8, 0x82, 0x14, - 0x1e, 0x5b, 0xcd, 0x58, 0x48, 0x97, 0x83, 0xd7, 0x40, 0x31, 0x8c, 0x9c, 0x20, 0x72, 0xe2, 0x63, - 0x73, 0x68, 0xc1, 0x58, 0x1c, 0xaa, 0x4e, 0x49, 0x9d, 0xe2, 0xb6, 0xa4, 0x23, 0x25, 0xc1, 0xa4, - 0xdf, 0xa1, 0x81, 0xbf, 0x8d, 0xe3, 0x7d, 0x73, 0x98, 0x23, 0x28, 0xe9, 0x57, 0x6b, 0x5b, 0xb7, - 0x18, 0x1d, 0x29, 0x89, 0xf2, 0x5f, 0x0d, 0x60, 0xe6, 0x23, 0x94, 0x86, 0x17, 0xbe, 0x0c, 0x8a, - 0x34, 0x66, 0x35, 0xa7, 0x7e, 0x2c, 0xe3, 0xf3, 0x74, 0x6a, 0xaa, 0x26, 0xe9, 0x27, 0xcd, 0xd2, - 0x6c, 0xa6, 0x91, 0x52, 0x79, 0x6c, 0x94, 0x2e, 0x4b, 0xb9, 0x43, 0xb2, 0xbb, 0x1f, 0x04, 0x07, - 0x72, 0xf7, 0x4f, 0x91, 0x72, 0xaf, 0x09, 0x43, 0x19, 0xa6, 0x48, 0x39, 0x49, 0x46, 0x29, 0x50, - 0xf9, 0xbf, 0x85, 0xfc, 0xc2, 0xb4, 0x4d, 0x7f, 0x1b, 0x14, 0xd9, 0x11, 0xb2, 0x71, 0x8c, 0xe5, - 0x21, 0x78, 0xf6, 0xe1, 0x0e, 0x9c, 0x38, 0xaf, 0x9b, 0x24, 0xc6, 0x55, 0x28, 0x43, 0x01, 0x32, - 0x1a, 0x52, 0x56, 0xe1, 0x11, 0x18, 0xa4, 0x21, 0xb1, 0xe4, 0x7a, 0xef, 0x9e, 0x22, 0xdb, 
0x7b, - 0xac, 0xa1, 0x16, 0x12, 0x2b, 0x4b, 0x46, 0xf6, 0x0d, 0x71, 0x44, 0xf8, 0xae, 0x01, 0x86, 0x29, - 0xaf, 0x0b, 0xb2, 0x96, 0xec, 0x9c, 0x01, 0x78, 0xae, 0xee, 0x88, 0xef, 0x48, 0xe2, 0x96, 0xff, - 0x55, 0x00, 0x57, 0x7b, 0xa9, 0xae, 0x04, 0xbe, 0x2d, 0x36, 0x61, 0x5d, 0x9e, 0x2b, 0x91, 0x59, - 0xcf, 0xeb, 0xe7, 0xea, 0xa4, 0x59, 0x7a, 0xe2, 0x81, 0x06, 0xb4, 0x03, 0xf8, 0x35, 0xb5, 0x64, - 0x71, 0x48, 0xaf, 0xb6, 0x3b, 0x76, 0xd2, 0x2c, 0x4d, 0x2a, 0xb5, 0x76, 0x5f, 0x61, 0x03, 0x40, - 0x17, 0xd3, 0xf8, 0x76, 0x84, 0x7d, 0x2a, 0xcc, 0x3a, 0x1e, 0x91, 0x91, 0x7b, 0xfa, 0xe1, 0x92, - 0x82, 0x69, 0x54, 0xe7, 0x24, 0x24, 0xdc, 0xe8, 0xb0, 0x86, 0xba, 0x20, 0xb0, 0x9a, 0x11, 0x11, - 0x4c, 0x55, 0x19, 0xd0, 0x6a, 0x38, 0xa3, 0x22, 0xc9, 0x85, 0x4f, 0x81, 0x11, 0x8f, 0x50, 0x8a, - 0xeb, 0x84, 0x9f, 0xfd, 0xd1, 0xec, 0x52, 0xdc, 0x14, 0x64, 0x94, 0xf2, 0xcb, 0xff, 0x36, 0xc0, - 0xe5, 0x5e, 0x51, 0xdb, 0x70, 0x68, 0x0c, 0xbf, 0xd5, 0x91, 0xf6, 0x95, 0x87, 0x5b, 0x21, 0xd3, - 0xe6, 0x49, 0xaf, 0x4a, 0x49, 0x4a, 0xd1, 0x52, 0xfe, 0x10, 0x0c, 0x39, 0x31, 0xf1, 0xd2, 0xdb, - 0x12, 0xf5, 0x3f, 0xed, 0xaa, 0x13, 0x12, 0x7e, 0x68, 0x9d, 0x01, 0x21, 0x81, 0x57, 0xfe, 0xb0, - 0x00, 0xae, 0xf4, 0x52, 0x61, 0x75, 0x9c, 0xb2, 0x60, 0x87, 0x6e, 0x12, 0x61, 0x57, 0x26, 0x9b, - 0x0a, 0xf6, 0x36, 0xa7, 0x22, 0xc9, 0x65, 0xb5, 0x93, 0x3a, 0x7e, 0x3d, 0x71, 0x71, 0x24, 0x33, - 0x49, 0x2d, 0xb8, 0x26, 0xe9, 0x48, 0x49, 0xc0, 0x0a, 0x00, 0x74, 0x3f, 0x88, 0x62, 0x8e, 0xc1, - 0x3b, 0x9c, 0xd1, 0xea, 0x79, 0x56, 0x11, 0x6a, 0x8a, 0x8a, 0x34, 0x09, 0x76, 0x91, 0x1c, 0x38, - 0xbe, 0x2d, 0x37, 0x5c, 0x9d, 0xdd, 0x9b, 0x8e, 0x6f, 0x23, 0xce, 0x61, 0xf8, 0xae, 0x43, 0x63, - 0x46, 0x91, 0xbb, 0xdd, 0x16, 0x70, 0x2e, 0xa9, 0x24, 0x18, 0xbe, 0xc5, 0x0a, 0x6c, 0x10, 0x39, - 0x84, 0x9a, 0xc3, 0x19, 0xfe, 0x8a, 0xa2, 0x22, 0x4d, 0xa2, 0xfc, 0xb7, 0xc1, 0xde, 0xf9, 0xc1, - 0x0a, 0x08, 0x7c, 0x1c, 0x0c, 0xd5, 0xa3, 0x20, 0x09, 0x65, 0x94, 0x54, 0xb4, 0x5f, 0x61, 0x44, - 0x24, 0x78, 0xf0, 0xdb, 0x60, 0xc8, 0x97, 0x0b, 0x66, 0x19, 0xf4, 0x5a, 0xff, 0xb7, 0x99, 0x47, - 0x2b, 0x43, 0x17, 0x81, 0x14, 0xa0, 0xf0, 0x39, 0x30, 0x44, 0xad, 0x20, 0x24, 0x32, 0x88, 0xf3, - 0xa9, 0x50, 0x8d, 0x11, 0x4f, 0x9a, 0xa5, 0x89, 0xd4, 0x1c, 0x27, 0x20, 0x21, 0x0c, 0xbf, 0x6f, - 0x80, 0xa2, 0xbc, 0x2e, 0xa8, 0x39, 0xc2, 0xd3, 0xf3, 0xf5, 0xfe, 0xfb, 0x2d, 0xdb, 0xde, 0x6c, - 0xcf, 0x24, 0x81, 0x22, 0x05, 0x0e, 0xbf, 0x6b, 0x00, 0x60, 0xa9, 0xbb, 0xcb, 0x1c, 0xe5, 0x31, - 0xec, 0xdb, 0x51, 0xd1, 0x6e, 0x45, 0x91, 0x08, 0x59, 0xab, 0xa4, 0xa1, 0xc2, 0x1a, 0x98, 0x09, - 0x23, 0xc2, 0x6d, 0xdf, 0xf1, 0x0f, 0xfc, 0xe0, 0xd0, 0x7f, 0xd9, 0x21, 0xae, 0x4d, 0x4d, 0xb0, - 0x60, 0x2c, 0x16, 0xab, 0x57, 0xa4, 0xff, 0x33, 0xdb, 0xdd, 0x84, 0x50, 0x77, 0xdd, 0xf2, 0x7b, - 0x03, 0xf9, 0x5e, 0x2b, 0x7f, 0x5f, 0xc0, 0x0f, 0xc4, 0xe2, 0x45, 0x1d, 0xa6, 0xa6, 0xc1, 0x37, - 0xe2, 0xcd, 0xfe, 0x6f, 0x84, 0xaa, 0xf5, 0xd9, 0x25, 0xad, 0x48, 0x14, 0x69, 0x2e, 0xc0, 0x9f, - 0x1a, 0x60, 0x02, 0x5b, 0x16, 0x09, 0x63, 0x62, 0x8b, 0x63, 0x5c, 0x38, 0xdb, 0xac, 0x9e, 0x91, - 0x0e, 0x4d, 0x2c, 0xeb, 0xa8, 0xa8, 0xdd, 0x09, 0xf8, 0x12, 0x38, 0x4f, 0xe3, 0x20, 0x22, 0x76, - 0x9a, 0x41, 0xb2, 0xba, 0xc0, 0x56, 0xb3, 0x74, 0xbe, 0xd6, 0xc6, 0x41, 0x39, 0xc9, 0xf2, 0x5f, - 0x06, 0x41, 0xe9, 0x01, 0x19, 0xfa, 0x10, 0x4d, 0xef, 0x93, 0x60, 0x98, 0xaf, 0xd4, 0xe6, 0x01, - 0x29, 0x6a, 0x57, 0x3d, 0xa7, 0x22, 0xc9, 0x65, 0xd7, 0x13, 0xc3, 0x67, 0xd7, 0xd3, 0x00, 0x17, - 0x54, 0xd7, 0x53, 0x4d, 0x90, 0x51, 0xca, 0x87, 0x0d, 0x30, 0x2c, 0x46, 0x59, 0x7e, 0x76, 0xfb, - 0x98, 0xf5, 0x77, 
0xb1, 0xeb, 0xd8, 0x98, 0xef, 0x37, 0xe0, 0x2e, 0x72, 0x14, 0x24, 0xd1, 0xe0, - 0xfb, 0x06, 0x18, 0xa7, 0xc9, 0x6e, 0x24, 0xa5, 0x29, 0xaf, 0xac, 0x63, 0x37, 0x6e, 0xf7, 0x0b, - 0xbe, 0xa6, 0xd9, 0xae, 0x4e, 0xb5, 0x9a, 0xa5, 0x71, 0x9d, 0x82, 0xda, 0xb0, 0xe1, 0x1f, 0x0d, - 0x60, 0x62, 0x5b, 0xa4, 0x1f, 0x76, 0xb7, 0x23, 0xc7, 0x8f, 0x49, 0x24, 0x86, 0x12, 0x51, 0xc2, - 0xfb, 0xd8, 0xaf, 0xe5, 0x67, 0x9d, 0xea, 0x82, 0xdc, 0x1b, 0x73, 0xb9, 0x87, 0x07, 0xa8, 0xa7, - 0x6f, 0xe5, 0xff, 0x18, 0xf9, 0xe3, 0xad, 0xad, 0xb2, 0x66, 0x61, 0x97, 0xc0, 0x55, 0x30, 0xc5, - 0x3a, 0x50, 0x44, 0x42, 0xd7, 0xb1, 0x30, 0xe5, 0x13, 0x88, 0xc8, 0x30, 0x35, 0x0a, 0xd7, 0x72, - 0x7c, 0xd4, 0xa1, 0x01, 0x5f, 0x05, 0x50, 0xb4, 0x66, 0x6d, 0x76, 0xc4, 0x6d, 0xac, 0x9a, 0xac, - 0x5a, 0x87, 0x04, 0xea, 0xa2, 0x05, 0x57, 0xc0, 0xb4, 0x8b, 0x77, 0x89, 0x5b, 0x23, 0x2e, 0xb1, - 0xe2, 0x20, 0xe2, 0xa6, 0xc4, 0x8c, 0x36, 0xd3, 0x6a, 0x96, 0xa6, 0x37, 0xf2, 0x4c, 0xd4, 0x29, - 0x5f, 0xbe, 0x9a, 0x3f, 0x4f, 0xfa, 0xc2, 0x45, 0xc3, 0xfb, 0xb3, 0x02, 0x98, 0xeb, 0x9d, 0x14, - 0xf0, 0x3b, 0xaa, 0x3d, 0x15, 0x5d, 0xd7, 0xeb, 0x67, 0x90, 0x7a, 0xb2, 0x25, 0x07, 0x9d, 0xed, - 0x38, 0x3c, 0x66, 0x77, 0x26, 0x76, 0xd3, 0xd1, 0x7b, 0xe7, 0x2c, 0xd0, 0x99, 0xfd, 0xea, 0xa8, - 0xb8, 0x89, 0xb1, 0xcb, 0x2f, 0x5e, 0xec, 0x92, 0xf2, 0x87, 0x1d, 0xe3, 0x65, 0x76, 0x58, 0xe1, - 0x0f, 0x0c, 0x30, 0x19, 0x84, 0xc4, 0x5f, 0xde, 0x5e, 0xbf, 0xfb, 0x15, 0x71, 0x68, 0x65, 0x80, - 0xd6, 0x1f, 0xdd, 0x45, 0x36, 0xe3, 0x0a, 0x5b, 0xdb, 0x51, 0x10, 0xd2, 0xea, 0x85, 0x56, 0xb3, - 0x34, 0xb9, 0xd5, 0x8e, 0x82, 0xf2, 0xb0, 0x65, 0x0f, 0xcc, 0xac, 0x1d, 0xc5, 0x24, 0xf2, 0xb1, - 0xbb, 0x1a, 0x58, 0x89, 0x47, 0xfc, 0x58, 0xf8, 0x98, 0x1b, 0xd9, 0x8d, 0x87, 0x1c, 0xd9, 0xaf, - 0x80, 0x81, 0x24, 0x72, 0x65, 0xd6, 0x8e, 0xa9, 0x87, 0x28, 0xb4, 0x81, 0x18, 0xbd, 0x7c, 0x15, - 0x0c, 0x32, 0x3f, 0xe1, 0x25, 0x30, 0x10, 0xe1, 0x43, 0x6e, 0x75, 0xbc, 0x3a, 0xc2, 0x44, 0x10, - 0x3e, 0x44, 0x8c, 0x56, 0xfe, 0x45, 0x09, 0x4c, 0xe6, 0xd6, 0x02, 0xe7, 0x40, 0x41, 0xbd, 0x6e, - 0x01, 0x69, 0xb4, 0xb0, 0xbe, 0x8a, 0x0a, 0x8e, 0x0d, 0x5f, 0x50, 0xd5, 0x55, 0x80, 0x96, 0x54, - 0xc1, 0xe6, 0x54, 0xd6, 0x1a, 0x65, 0xe6, 0x98, 0x23, 0x69, 0x79, 0x64, 0x3e, 0x90, 0x3d, 0x79, - 0x2a, 0x84, 0x0f, 0x64, 0x0f, 0x31, 0xda, 0xa3, 0xbe, 0x57, 0xa4, 0x0f, 0x26, 0x43, 0x0f, 0xf1, - 0x60, 0x32, 0x7c, 0xdf, 0x07, 0x93, 0xc7, 0xc1, 0x50, 0xec, 0xc4, 0x2e, 0x31, 0x47, 0xda, 0x1b, - 0xd2, 0xdb, 0x8c, 0x88, 0x04, 0x0f, 0x12, 0x30, 0x62, 0x93, 0x3d, 0x9c, 0xb8, 0xb1, 0x59, 0xe4, - 0xd9, 0xf3, 0xf5, 0xd3, 0x65, 0x8f, 0x78, 0x50, 0x58, 0x15, 0x26, 0x51, 0x6a, 0x1b, 0x3e, 0x01, - 0x46, 0x3c, 0x7c, 0xe4, 0x78, 0x89, 0xc7, 0xbb, 0x36, 0x43, 0x88, 0x6d, 0x0a, 0x12, 0x4a, 0x79, - 0xac, 0x08, 0x92, 0x23, 0xcb, 0x4d, 0xa8, 0xd3, 0x20, 0x92, 0x29, 0xdb, 0x2a, 0x55, 0x04, 0xd7, - 0x72, 0x7c, 0xd4, 0xa1, 0xc1, 0xc1, 0x1c, 0x9f, 0x2b, 0x8f, 0x69, 0x60, 0x82, 0x84, 0x52, 0x5e, - 0x3b, 0x98, 0x94, 0x1f, 0xef, 0x05, 0x26, 0x95, 0x3b, 0x34, 0xe0, 0x97, 0xc1, 0xa8, 0x87, 0x8f, - 0x36, 0x88, 0x5f, 0x8f, 0xf7, 0xcd, 0x89, 0x05, 0x63, 0x71, 0xa0, 0x3a, 0xd1, 0x6a, 0x96, 0x46, - 0x37, 0x53, 0x22, 0xca, 0xf8, 0x5c, 0xd8, 0xf1, 0xa5, 0xf0, 0x79, 0x4d, 0x38, 0x25, 0xa2, 0x8c, - 0xcf, 0xba, 0x83, 0x10, 0xc7, 0xec, 0x5c, 0x99, 0x93, 0xed, 0xc3, 0xeb, 0xb6, 0x20, 0xa3, 0x94, - 0x0f, 0x17, 0x41, 0xd1, 0xc3, 0x47, 0x7c, 0xae, 0x33, 0xa7, 0xb8, 0x59, 0xfe, 0xa8, 0xb7, 0x29, - 0x69, 0x48, 0x71, 0xb9, 0xa4, 0xe3, 0x0b, 0xc9, 0x69, 0x4d, 0x52, 0xd2, 0x90, 0xe2, 0xb2, 0xfc, - 0x4d, 0x7c, 0xe7, 0x5e, 0x42, 0x84, 0x30, 
0xe4, 0x91, 0x51, 0xf9, 0x7b, 0x27, 0x63, 0x21, 0x5d, - 0x8e, 0xcd, 0x55, 0x5e, 0xe2, 0xc6, 0x4e, 0xe8, 0x92, 0xad, 0x3d, 0xf3, 0x02, 0x8f, 0x3f, 0x6f, - 0xa7, 0x37, 0x15, 0x15, 0x69, 0x12, 0xf0, 0x6d, 0x30, 0x48, 0xfc, 0xc4, 0x33, 0x2f, 0xf2, 0xeb, - 0xfb, 0xb4, 0xd9, 0xa7, 0xce, 0xcb, 0x9a, 0x9f, 0x78, 0x88, 0x5b, 0x86, 0x2f, 0x80, 0x09, 0x0f, - 0x1f, 0xb1, 0x22, 0x40, 0xa2, 0x98, 0x0d, 0x7b, 0x33, 0x7c, 0xdd, 0xd3, 0xac, 0x91, 0xdc, 0xd4, - 0x19, 0xa8, 0x5d, 0x8e, 0x2b, 0x3a, 0xbe, 0xa6, 0x38, 0xab, 0x29, 0xea, 0x0c, 0xd4, 0x2e, 0xc7, - 0x82, 0x1c, 0x91, 0x7b, 0x89, 0x13, 0x11, 0xdb, 0xfc, 0x02, 0xef, 0x3d, 0xe5, 0x1b, 0xab, 0xa0, - 0x21, 0xc5, 0x85, 0xf7, 0xd2, 0xb1, 0xdf, 0xe4, 0x87, 0x6f, 0xbb, 0x6f, 0xa5, 0x7b, 0x2b, 0x5a, - 0x8e, 0x22, 0x7c, 0x2c, 0x6e, 0x15, 0x7d, 0xe0, 0x87, 0x3e, 0x18, 0xc2, 0xae, 0xbb, 0xb5, 0x67, - 0x5e, 0xe2, 0x11, 0xef, 0xe3, 0x6d, 0xa1, 0x2a, 0xcc, 0x32, 0xb3, 0x8f, 0x04, 0x0c, 0xc3, 0x0b, - 0x7c, 0x96, 0x0b, 0x73, 0x67, 0x86, 0xb7, 0xc5, 0xec, 0x23, 0x01, 0xc3, 0xd7, 0xe7, 0x1f, 0x6f, - 0xed, 0x99, 0x8f, 0x9d, 0xdd, 0xfa, 0x98, 0x7d, 0x24, 0x60, 0xa0, 0x0d, 0x06, 0xfc, 0x20, 0x36, - 0x2f, 0xf7, 0xfb, 0xee, 0xe5, 0xb7, 0xc9, 0xad, 0x20, 0x46, 0xcc, 0x3c, 0xfc, 0x91, 0x01, 0x40, - 0x98, 0x65, 0xe2, 0x95, 0xd3, 0x8e, 0xe1, 0x39, 0xb4, 0x4a, 0x96, 0xbd, 0x6b, 0x7e, 0x1c, 0x1d, - 0x67, 0xb3, 0x9f, 0x96, 0xe5, 0x9a, 0x03, 0xf0, 0x97, 0x06, 0xb8, 0xa8, 0xb7, 0xbb, 0xca, 0xb3, - 0x79, 0x1e, 0x87, 0xad, 0x3e, 0x26, 0x72, 0x35, 0x08, 0xdc, 0xaa, 0xd9, 0x6a, 0x96, 0x2e, 0x2e, - 0x77, 0x01, 0x44, 0x5d, 0xdd, 0x80, 0xbf, 0x35, 0xc0, 0xb4, 0xac, 0x8e, 0x9a, 0x73, 0x25, 0x1e, - 0xb6, 0xb7, 0xfb, 0x18, 0xb6, 0x3c, 0x84, 0x88, 0x9e, 0xfa, 0xa5, 0xaf, 0x83, 0x8f, 0x3a, 0xbd, - 0x82, 0x7f, 0x30, 0xc0, 0xb8, 0x4d, 0x42, 0xe2, 0xdb, 0xc4, 0xb7, 0x98, 0x9b, 0x0b, 0xa7, 0x9d, - 0xed, 0xf3, 0x6e, 0xae, 0x6a, 0xd6, 0x85, 0x87, 0x15, 0xe9, 0xe1, 0xb8, 0xce, 0x3a, 0x69, 0x96, - 0x66, 0x33, 0x55, 0x9d, 0x83, 0xda, 0x1c, 0x84, 0x3f, 0x36, 0xc0, 0x64, 0x16, 0x76, 0x71, 0x41, - 0x5c, 0x3d, 0x9b, 0x8d, 0xe7, 0x2d, 0xe8, 0x72, 0x3b, 0x16, 0xca, 0x83, 0xc3, 0xdf, 0x19, 0xac, - 0xdb, 0x4a, 0x67, 0x35, 0x6a, 0x96, 0x79, 0x04, 0xdf, 0xe8, 0x67, 0x04, 0x95, 0x71, 0x11, 0xc0, - 0x6b, 0x59, 0x27, 0xa7, 0x38, 0x27, 0xcd, 0xd2, 0x8c, 0x1e, 0x3f, 0xc5, 0x40, 0xba, 0x73, 0xf0, - 0x3d, 0x03, 0x8c, 0x93, 0xac, 0x61, 0xa6, 0xe6, 0xe3, 0xa7, 0x0d, 0x5d, 0xd7, 0xf6, 0x5b, 0x8c, - 0xd3, 0x1a, 0x8b, 0xa2, 0x36, 0x58, 0xd6, 0xfb, 0x91, 0x23, 0xec, 0x85, 0x2e, 0x31, 0xbf, 0xd8, - 0xbf, 0xde, 0x6f, 0x4d, 0x98, 0x44, 0xa9, 0x6d, 0x78, 0x0d, 0x14, 0xfd, 0xc4, 0x75, 0xf1, 0xae, - 0x4b, 0xcc, 0x27, 0x78, 0x17, 0xa1, 0xde, 0xf8, 0x6e, 0x49, 0x3a, 0x52, 0x12, 0x70, 0x0f, 0x2c, - 0x1c, 0xdd, 0x54, 0x7f, 0x80, 0xe8, 0xfa, 0x88, 0x66, 0x3e, 0xc9, 0xad, 0xcc, 0xb5, 0x9a, 0xa5, - 0xd9, 0x9d, 0xee, 0xcf, 0x6c, 0x0f, 0xb4, 0x01, 0xdf, 0x04, 0x8f, 0x69, 0x32, 0x6b, 0xde, 0x2e, - 0xb1, 0x6d, 0x62, 0xa7, 0x83, 0x96, 0xf9, 0x25, 0x0e, 0xa1, 0xce, 0xf1, 0x4e, 0x5e, 0x00, 0xdd, - 0x4f, 0x1b, 0x6e, 0x80, 0x59, 0x8d, 0xbd, 0xee, 0xc7, 0x5b, 0x51, 0x2d, 0x8e, 0x1c, 0xbf, 0x6e, - 0x2e, 0x72, 0xbb, 0x17, 0xd3, 0xd3, 0xb7, 0xa3, 0xf1, 0x50, 0x0f, 0x1d, 0xf8, 0xcd, 0x36, 0x6b, - 0xfc, 0xc7, 0x03, 0x1c, 0xde, 0x24, 0xc7, 0xd4, 0x7c, 0x8a, 0x37, 0x17, 0x7c, 0x9f, 0x77, 0x34, - 0x3a, 0xea, 0x21, 0x0f, 0xbf, 0x01, 0x2e, 0xe4, 0x38, 0x6c, 0xae, 0x30, 0x9f, 0x16, 0x03, 0x02, - 0xeb, 0x44, 0x77, 0x52, 0x22, 0xea, 0x26, 0x39, 0xc7, 0xa6, 0xce, 0x5c, 0xb1, 0x83, 0x53, 0x60, - 0xe0, 0x80, 0xc8, 0xdf, 0x38, 0x11, 0xfb, 0x08, 0xdf, 0x02, 0x43, 
0x0d, 0xec, 0x26, 0xe9, 0xcc, - 0xdc, 0xbf, 0x4b, 0x11, 0x09, 0xbb, 0x2f, 0x15, 0x5e, 0x34, 0xe6, 0x3e, 0x30, 0xc0, 0x6c, 0xf7, - 0xf2, 0xfb, 0x79, 0x79, 0xf4, 0x73, 0x03, 0x4c, 0x77, 0x54, 0xda, 0x2e, 0xce, 0xb8, 0xed, 0xce, - 0xdc, 0xed, 0x63, 0xc9, 0x14, 0x19, 0xc3, 0x5b, 0x3f, 0xdd, 0xb3, 0x1f, 0x1a, 0x60, 0x2a, 0x5f, - 0xc1, 0x3e, 0xa7, 0x28, 0x95, 0xdf, 0x2f, 0x80, 0xd9, 0xee, 0xcd, 0x2a, 0xf4, 0xd4, 0x18, 0xde, - 0xf7, 0x97, 0x8c, 0x6e, 0x6f, 0x9b, 0xef, 0x1a, 0x60, 0xec, 0x1d, 0x25, 0x97, 0xfe, 0xf4, 0xd6, - 0xcf, 0xe7, 0x93, 0xf4, 0x8e, 0xc8, 0x18, 0x14, 0xe9, 0x90, 0xe5, 0xdf, 0x1b, 0x60, 0xa6, 0xeb, - 0xbd, 0xc7, 0xa6, 0x7c, 0xec, 0xba, 0xc1, 0xa1, 0x78, 0xf6, 0xd2, 0xde, 0x90, 0x97, 0x39, 0x15, - 0x49, 0xae, 0x16, 0xb3, 0xc2, 0x67, 0x10, 0xb3, 0xf2, 0x9f, 0x0c, 0x70, 0xf9, 0x7e, 0x59, 0xf7, - 0x59, 0xef, 0xe1, 0x22, 0x28, 0xca, 0xae, 0xf4, 0x98, 0xef, 0x9f, 0x1c, 0xb5, 0x64, 0x45, 0xe0, - 0x7f, 0xed, 0x10, 0x9f, 0xca, 0xbf, 0x36, 0xc0, 0x54, 0x8d, 0x44, 0x0d, 0xc7, 0x22, 0x88, 0xec, - 0x91, 0x88, 0xf8, 0x16, 0x81, 0x4b, 0x60, 0x94, 0xff, 0x34, 0x16, 0x62, 0x2b, 0x7d, 0xd0, 0x9f, - 0x96, 0x81, 0x1e, 0xbd, 0x95, 0x32, 0x50, 0x26, 0xa3, 0x1e, 0xff, 0x0b, 0x3d, 0x1f, 0xff, 0x2f, - 0x83, 0xc1, 0x30, 0x7b, 0x29, 0x2d, 0x32, 0x2e, 0x7f, 0x1c, 0xe5, 0x54, 0xce, 0x0d, 0xa2, 0x98, - 0x3f, 0x07, 0x0d, 0x49, 0x6e, 0x10, 0xc5, 0x88, 0x53, 0xcb, 0x7f, 0x36, 0xc0, 0x85, 0xf4, 0x3f, - 0x1a, 0xae, 0x43, 0xfc, 0x78, 0x25, 0xf0, 0xf7, 0x9c, 0x3a, 0xbc, 0x24, 0x5e, 0xc4, 0xb4, 0x67, - 0xa6, 0xf4, 0x35, 0x0c, 0xde, 0x03, 0x23, 0x54, 0xac, 0x4a, 0x06, 0xfc, 0xd5, 0x47, 0x0f, 0x78, - 0x3e, 0x3c, 0xe2, 0x42, 0x4f, 0xa9, 0x29, 0x0e, 0x8b, 0xb9, 0x85, 0xab, 0x89, 0x6f, 0xcb, 0x57, - 0xd1, 0x71, 0x11, 0xf3, 0x95, 0x65, 0x41, 0x43, 0x8a, 0x5b, 0xfe, 0xa7, 0x01, 0xa6, 0x3b, 0xfe, - 0x73, 0x02, 0xbf, 0x67, 0x80, 0x71, 0x4b, 0x5b, 0x9e, 0xcc, 0xdc, 0xcd, 0xd3, 0xff, 0xaf, 0x45, - 0x33, 0x2a, 0x6e, 0x45, 0x9d, 0x82, 0xda, 0x40, 0xe1, 0x0e, 0x30, 0xad, 0xdc, 0xdf, 0xbb, 0x72, - 0x3f, 0x18, 0x5d, 0x6e, 0x35, 0x4b, 0xe6, 0x4a, 0x0f, 0x19, 0xd4, 0x53, 0xbb, 0xba, 0xf8, 0xd1, - 0xa7, 0xf3, 0xe7, 0x3e, 0xfe, 0x74, 0xfe, 0xdc, 0x27, 0x9f, 0xce, 0x9f, 0x7b, 0xb7, 0x35, 0x6f, - 0x7c, 0xd4, 0x9a, 0x37, 0x3e, 0x6e, 0xcd, 0x1b, 0x9f, 0xb4, 0xe6, 0x8d, 0xbf, 0xb7, 0xe6, 0x8d, - 0x9f, 0xfc, 0x63, 0xfe, 0xdc, 0x1b, 0x85, 0xc6, 0xf5, 0xff, 0x07, 0x00, 0x00, 0xff, 0xff, 0xcd, - 0xae, 0x89, 0xe9, 0xf2, 0x29, 0x00, 0x00, + // 2943 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xc4, 0x5a, 0xdb, 0x6f, 0x24, 0x47, + 0xd5, 0xdf, 0x1e, 0xdf, 0xc6, 0x65, 0x7b, 0x6d, 0xd7, 0xae, 0xfd, 0xf5, 0x3a, 0xbb, 0x1e, 0xef, + 0xe4, 0xcb, 0x7e, 0x4e, 0xb2, 0x19, 0x67, 0xf7, 0x4b, 0x48, 0x88, 0x10, 0xc8, 0x63, 0x3b, 0xc1, + 0x59, 0x7b, 0x6d, 0xd5, 0xec, 0x6e, 0x9c, 0x04, 0x29, 0x29, 0x77, 0x97, 0xc7, 0x1d, 0xf7, 0x6d, + 0xbb, 0xba, 0xc7, 0xb6, 0x04, 0x52, 0x04, 0x8a, 0x80, 0x48, 0x10, 0x1e, 0x10, 0x3c, 0x21, 0x84, + 0x50, 0x1e, 0xe0, 0x01, 0xde, 0xe0, 0x5f, 0xc8, 0x0b, 0x52, 0x1e, 0x10, 0x44, 0x42, 0x1a, 0x91, + 0xe1, 0x4f, 0x00, 0x84, 0xf0, 0x03, 0x42, 0x75, 0xe9, 0xea, 0x9a, 0x9e, 0x99, 0xec, 0x6a, 0x3d, + 0x4e, 0xde, 0xec, 0x73, 0xfb, 0x9d, 0x3a, 0x75, 0xea, 0xd4, 0x39, 0xd5, 0x03, 0xf0, 0xc1, 0x8b, + 0xb4, 0xe2, 0x04, 0x4b, 0x07, 0xc9, 0x2e, 0x89, 0x7c, 0x12, 0x13, 0xba, 0xd4, 0x20, 0xbe, 0x1d, + 0x44, 0x4b, 0x92, 0x81, 0x43, 0x87, 0x1c, 0xc5, 0xc4, 0xa7, 0x4e, 0xe0, 0xd3, 0x67, 0x70, 0xe8, + 0x50, 0x12, 0x35, 0x48, 0xb4, 0x14, 0x1e, 0xd4, 0x19, 0x8f, 0xb6, 0x0b, 0x2c, 0x35, 0x6e, 0x2c, + 
0xd5, 0x89, 0x4f, 0x22, 0x1c, 0x13, 0xbb, 0x12, 0x46, 0x41, 0x1c, 0xc0, 0x17, 0x85, 0xa5, 0x4a, + 0x9b, 0xe0, 0x5b, 0xca, 0x52, 0x25, 0x3c, 0xa8, 0x33, 0x1e, 0x6d, 0x17, 0xa8, 0x34, 0x6e, 0xcc, + 0x3d, 0x53, 0x77, 0xe2, 0xfd, 0x64, 0xb7, 0x62, 0x05, 0xde, 0x52, 0x3d, 0xa8, 0x07, 0x4b, 0xdc, + 0xe0, 0x6e, 0xb2, 0xc7, 0xff, 0xe3, 0xff, 0xf0, 0xbf, 0x04, 0xd0, 0xdc, 0x73, 0x99, 0xcb, 0x1e, + 0xb6, 0xf6, 0x1d, 0x9f, 0x44, 0xc7, 0x99, 0x9f, 0x1e, 0x89, 0x71, 0x17, 0xf7, 0xe6, 0x96, 0x7a, + 0x69, 0x45, 0x89, 0x1f, 0x3b, 0x1e, 0xe9, 0x50, 0xf8, 0xd2, 0x83, 0x14, 0xa8, 0xb5, 0x4f, 0x3c, + 0x9c, 0xd7, 0x2b, 0x9f, 0x18, 0x60, 0x7a, 0x25, 0xf0, 0x1b, 0x24, 0x62, 0x0b, 0x44, 0xe4, 0x7e, + 0x42, 0x68, 0x0c, 0xab, 0x60, 0x20, 0x71, 0x6c, 0xd3, 0x58, 0x30, 0x16, 0x47, 0xab, 0xcf, 0x7e, + 0xd4, 0x2c, 0x9d, 0x6b, 0x35, 0x4b, 0x03, 0x77, 0xd7, 0x57, 0x4f, 0x9a, 0xa5, 0xab, 0xbd, 0x90, + 0xe2, 0xe3, 0x90, 0xd0, 0xca, 0xdd, 0xf5, 0x55, 0xc4, 0x94, 0xe1, 0x2b, 0x60, 0xda, 0x26, 0xd4, + 0x89, 0x88, 0xbd, 0xbc, 0xbd, 0x7e, 0x4f, 0xd8, 0x37, 0x0b, 0xdc, 0xe2, 0x25, 0x69, 0x71, 0x7a, + 0x35, 0x2f, 0x80, 0x3a, 0x75, 0xe0, 0x0e, 0x18, 0x09, 0x76, 0xdf, 0x21, 0x56, 0x4c, 0xcd, 0x81, + 0x85, 0x81, 0xc5, 0xb1, 0x9b, 0xcf, 0x54, 0xb2, 0xcd, 0x53, 0x2e, 0xf0, 0x1d, 0x93, 0x8b, 0xad, + 0x20, 0x7c, 0xb8, 0x96, 0x6e, 0x5a, 0x75, 0x52, 0xa2, 0x8d, 0x6c, 0x09, 0x2b, 0x28, 0x35, 0x57, + 0xfe, 0x65, 0x01, 0x40, 0x7d, 0xf1, 0x34, 0x0c, 0x7c, 0x4a, 0xfa, 0xb2, 0x7a, 0x0a, 0xa6, 0x2c, + 0x6e, 0x39, 0x26, 0xb6, 0xc4, 0x35, 0x0b, 0x8f, 0xe2, 0xbd, 0x29, 0xf1, 0xa7, 0x56, 0x72, 0xe6, + 0x50, 0x07, 0x00, 0xbc, 0x03, 0x86, 0x23, 0x42, 0x13, 0x37, 0x36, 0x07, 0x16, 0x8c, 0xc5, 0xb1, + 0x9b, 0xd7, 0x7b, 0x42, 0xf1, 0xd4, 0x66, 0xc9, 0x57, 0x69, 0xdc, 0xa8, 0xd4, 0x62, 0x1c, 0x27, + 0xb4, 0x7a, 0x5e, 0x22, 0x0d, 0x23, 0x6e, 0x03, 0x49, 0x5b, 0xe5, 0xff, 0x18, 0x60, 0x4a, 0x8f, + 0x52, 0xc3, 0x21, 0x87, 0x30, 0x02, 0x23, 0x91, 0x48, 0x16, 0x1e, 0xa7, 0xb1, 0x9b, 0xb7, 0x2a, + 0x8f, 0x7a, 0xa2, 0x2a, 0x1d, 0xf9, 0x57, 0x1d, 0x63, 0xdb, 0x25, 0xff, 0x41, 0x29, 0x10, 0x6c, + 0x80, 0x62, 0x24, 0xf7, 0x88, 0x27, 0xd2, 0xd8, 0xcd, 0x8d, 0xfe, 0x80, 0x0a, 0x9b, 0xd5, 0xf1, + 0x56, 0xb3, 0x54, 0x4c, 0xff, 0x43, 0x0a, 0xab, 0xfc, 0xf3, 0x02, 0x98, 0x5f, 0x49, 0x68, 0x1c, + 0x78, 0x88, 0xd0, 0x20, 0x89, 0x2c, 0xb2, 0x12, 0xb8, 0x89, 0xe7, 0xaf, 0x92, 0x3d, 0xc7, 0x77, + 0x62, 0x96, 0xa3, 0x0b, 0x60, 0xd0, 0xc7, 0x1e, 0x91, 0x39, 0x33, 0x2e, 0x23, 0x39, 0x78, 0x1b, + 0x7b, 0x04, 0x71, 0x0e, 0x93, 0x60, 0x29, 0x22, 0x4f, 0x80, 0x92, 0xb8, 0x73, 0x1c, 0x12, 0xc4, + 0x39, 0xf0, 0x1a, 0x18, 0xde, 0x0b, 0x22, 0x0f, 0x8b, 0xdd, 0x1b, 0xcd, 0xf6, 0xe3, 0x65, 0x4e, + 0x45, 0x92, 0x0b, 0x9f, 0x07, 0x63, 0x36, 0xa1, 0x56, 0xe4, 0x84, 0x0c, 0xda, 0x1c, 0xe4, 0xc2, + 0x17, 0xa4, 0xf0, 0xd8, 0x6a, 0xc6, 0x42, 0xba, 0x1c, 0xbc, 0x0e, 0x8a, 0x61, 0xe4, 0x04, 0x91, + 0x13, 0x1f, 0x9b, 0x43, 0x0b, 0xc6, 0xe2, 0x50, 0x75, 0x4a, 0xea, 0x14, 0xb7, 0x25, 0x1d, 0x29, + 0x09, 0x26, 0xfd, 0x0e, 0x0d, 0xfc, 0x6d, 0x1c, 0xef, 0x9b, 0xc3, 0x1c, 0x41, 0x49, 0xbf, 0x5a, + 0xdb, 0xba, 0xcd, 0xe8, 0x48, 0x49, 0x94, 0xff, 0x64, 0x00, 0x33, 0x1f, 0xa1, 0x34, 0xbc, 0xf0, + 0x65, 0x50, 0xa4, 0x31, 0xab, 0x39, 0xf5, 0x63, 0x19, 0x9f, 0xa7, 0x52, 0x53, 0x35, 0x49, 0x3f, + 0x69, 0x96, 0x66, 0x33, 0x8d, 0x94, 0xca, 0x63, 0xa3, 0x74, 0x59, 0xca, 0x1d, 0x92, 0xdd, 0xfd, + 0x20, 0x38, 0x90, 0xbb, 0x7f, 0x8a, 0x94, 0x7b, 0x4d, 0x18, 0xca, 0x30, 0x45, 0xca, 0x49, 0x32, + 0x4a, 0x81, 0xca, 0xff, 0x2e, 0xe4, 0x17, 0xa6, 0x6d, 0xfa, 0xdb, 0xa0, 0xc8, 0x8e, 0x90, 0x8d, + 0x63, 0x2c, 0x0f, 0xc1, 
0xb3, 0x0f, 0x77, 0xe0, 0xc4, 0x79, 0xdd, 0x24, 0x31, 0xae, 0x42, 0x19, + 0x0a, 0x90, 0xd1, 0x90, 0xb2, 0x0a, 0x8f, 0xc0, 0x20, 0x0d, 0x89, 0x25, 0xd7, 0x7b, 0xef, 0x14, + 0xd9, 0xde, 0x63, 0x0d, 0xb5, 0x90, 0x58, 0x59, 0x32, 0xb2, 0xff, 0x10, 0x47, 0x84, 0xef, 0x1a, + 0x60, 0x98, 0xf2, 0xba, 0x20, 0x6b, 0xc9, 0xce, 0x19, 0x80, 0xe7, 0xea, 0x8e, 0xf8, 0x1f, 0x49, + 0xdc, 0xf2, 0x3f, 0x0a, 0xe0, 0x6a, 0x2f, 0xd5, 0x95, 0xc0, 0xb7, 0xc5, 0x26, 0xac, 0xcb, 0x73, + 0x25, 0x32, 0xeb, 0x79, 0xfd, 0x5c, 0x9d, 0x34, 0x4b, 0x4f, 0x3c, 0xd0, 0x80, 0x76, 0x00, 0xbf, + 0xac, 0x96, 0x2c, 0x0e, 0xe9, 0xd5, 0x76, 0xc7, 0x4e, 0x9a, 0xa5, 0x49, 0xa5, 0xd6, 0xee, 0x2b, + 0x6c, 0x00, 0xe8, 0x62, 0x1a, 0xdf, 0x89, 0xb0, 0x4f, 0x85, 0x59, 0xc7, 0x23, 0x32, 0x72, 0x4f, + 0x3d, 0x5c, 0x52, 0x30, 0x8d, 0xea, 0x9c, 0x84, 0x84, 0x1b, 0x1d, 0xd6, 0x50, 0x17, 0x04, 0x56, + 0x33, 0x22, 0x82, 0xa9, 0x2a, 0x03, 0x5a, 0x0d, 0x67, 0x54, 0x24, 0xb9, 0xf0, 0x49, 0x30, 0xe2, + 0x11, 0x4a, 0x71, 0x9d, 0xf0, 0xb3, 0x3f, 0x9a, 0x5d, 0x8a, 0x9b, 0x82, 0x8c, 0x52, 0x7e, 0xf9, + 0x9f, 0x06, 0xb8, 0xdc, 0x2b, 0x6a, 0x1b, 0x0e, 0x8d, 0xe1, 0x37, 0x3a, 0xd2, 0xbe, 0xf2, 0x70, + 0x2b, 0x64, 0xda, 0x3c, 0xe9, 0x55, 0x29, 0x49, 0x29, 0x5a, 0xca, 0x1f, 0x82, 0x21, 0x27, 0x26, + 0x5e, 0x7a, 0x5b, 0xa2, 0xfe, 0xa7, 0x5d, 0x75, 0x42, 0xc2, 0x0f, 0xad, 0x33, 0x20, 0x24, 0xf0, + 0xca, 0x1f, 0x16, 0xc0, 0x95, 0x5e, 0x2a, 0xac, 0x8e, 0x53, 0x16, 0xec, 0xd0, 0x4d, 0x22, 0xec, + 0xca, 0x64, 0x53, 0xc1, 0xde, 0xe6, 0x54, 0x24, 0xb9, 0xac, 0x76, 0x52, 0xc7, 0xaf, 0x27, 0x2e, + 0x8e, 0x64, 0x26, 0xa9, 0x05, 0xd7, 0x24, 0x1d, 0x29, 0x09, 0x58, 0x01, 0x80, 0xee, 0x07, 0x51, + 0xcc, 0x31, 0x78, 0x87, 0x33, 0x5a, 0x3d, 0xcf, 0x2a, 0x42, 0x4d, 0x51, 0x91, 0x26, 0xc1, 0x2e, + 0x92, 0x03, 0xc7, 0xb7, 0xe5, 0x86, 0xab, 0xb3, 0x7b, 0xcb, 0xf1, 0x6d, 0xc4, 0x39, 0x0c, 0xdf, + 0x75, 0x68, 0xcc, 0x28, 0x72, 0xb7, 0xdb, 0x02, 0xce, 0x25, 0x95, 0x04, 0xc3, 0xb7, 0x58, 0x81, + 0x0d, 0x22, 0x87, 0x50, 0x73, 0x38, 0xc3, 0x5f, 0x51, 0x54, 0xa4, 0x49, 0x94, 0xff, 0x32, 0xd8, + 0x3b, 0x3f, 0x58, 0x01, 0x81, 0x8f, 0x83, 0xa1, 0x7a, 0x14, 0x24, 0xa1, 0x8c, 0x92, 0x8a, 0xf6, + 0x2b, 0x8c, 0x88, 0x04, 0x0f, 0x7e, 0x13, 0x0c, 0xf9, 0x72, 0xc1, 0x2c, 0x83, 0x5e, 0xeb, 0xff, + 0x36, 0xf3, 0x68, 0x65, 0xe8, 0x22, 0x90, 0x02, 0x14, 0x3e, 0x07, 0x86, 0xa8, 0x15, 0x84, 0x44, + 0x06, 0x71, 0x3e, 0x15, 0xaa, 0x31, 0xe2, 0x49, 0xb3, 0x34, 0x91, 0x9a, 0xe3, 0x04, 0x24, 0x84, + 0xe1, 0x77, 0x0d, 0x50, 0x94, 0xd7, 0x05, 0x35, 0x47, 0x78, 0x7a, 0xbe, 0xde, 0x7f, 0xbf, 0x65, + 0xdb, 0x9b, 0xed, 0x99, 0x24, 0x50, 0xa4, 0xc0, 0xe1, 0xb7, 0x0d, 0x00, 0x2c, 0x75, 0x77, 0x99, + 0xa3, 0x3c, 0x86, 0x7d, 0x3b, 0x2a, 0xda, 0xad, 0x28, 0x12, 0x21, 0x6b, 0x95, 0x34, 0x54, 0x58, + 0x03, 0x33, 0x61, 0x44, 0xb8, 0xed, 0xbb, 0xfe, 0x81, 0x1f, 0x1c, 0xfa, 0x2f, 0x3b, 0xc4, 0xb5, + 0xa9, 0x09, 0x16, 0x8c, 0xc5, 0x62, 0xf5, 0x8a, 0xf4, 0x7f, 0x66, 0xbb, 0x9b, 0x10, 0xea, 0xae, + 0x5b, 0x7e, 0x6f, 0x20, 0xdf, 0x6b, 0xe5, 0xef, 0x0b, 0xf8, 0x81, 0x58, 0xbc, 0xa8, 0xc3, 0xd4, + 0x34, 0xf8, 0x46, 0xbc, 0xd9, 0xff, 0x8d, 0x50, 0xb5, 0x3e, 0xbb, 0xa4, 0x15, 0x89, 0x22, 0xcd, + 0x05, 0xf8, 0x63, 0x03, 0x4c, 0x60, 0xcb, 0x22, 0x61, 0x4c, 0x6c, 0x71, 0x8c, 0x0b, 0x67, 0x9b, + 0xd5, 0x33, 0xd2, 0xa1, 0x89, 0x65, 0x1d, 0x15, 0xb5, 0x3b, 0x01, 0x5f, 0x02, 0xe7, 0x69, 0x1c, + 0x44, 0xc4, 0x4e, 0x33, 0x48, 0x56, 0x17, 0xd8, 0x6a, 0x96, 0xce, 0xd7, 0xda, 0x38, 0x28, 0x27, + 0x59, 0xfe, 0xe3, 0x20, 0x28, 0x3d, 0x20, 0x43, 0x1f, 0xa2, 0xe9, 0xbd, 0x06, 0x86, 0xf9, 0x4a, + 0x6d, 0x1e, 0x90, 0xa2, 0x76, 0xd5, 0x73, 0x2a, 
0x92, 0x5c, 0x76, 0x3d, 0x31, 0x7c, 0x76, 0x3d, + 0x0d, 0x70, 0x41, 0x75, 0x3d, 0xd5, 0x04, 0x19, 0xa5, 0x7c, 0xd8, 0x00, 0xc3, 0x62, 0x94, 0xe5, + 0x67, 0xb7, 0x8f, 0x59, 0x7f, 0x0f, 0xbb, 0x8e, 0x8d, 0xf9, 0x7e, 0x03, 0xee, 0x22, 0x47, 0x41, + 0x12, 0x0d, 0xbe, 0x6f, 0x80, 0x71, 0x9a, 0xec, 0x46, 0x52, 0x9a, 0xf2, 0xca, 0x3a, 0x76, 0xf3, + 0x4e, 0xbf, 0xe0, 0x6b, 0x9a, 0xed, 0xea, 0x54, 0xab, 0x59, 0x1a, 0xd7, 0x29, 0xa8, 0x0d, 0x1b, + 0xfe, 0xce, 0x00, 0x26, 0xb6, 0x45, 0xfa, 0x61, 0x77, 0x3b, 0x72, 0xfc, 0x98, 0x44, 0x62, 0x28, + 0x11, 0x25, 0xbc, 0x8f, 0xfd, 0x5a, 0x7e, 0xd6, 0xa9, 0x2e, 0xc8, 0xbd, 0x31, 0x97, 0x7b, 0x78, + 0x80, 0x7a, 0xfa, 0x56, 0xfe, 0x97, 0x91, 0x3f, 0xde, 0xda, 0x2a, 0x6b, 0x16, 0x76, 0x09, 0x5c, + 0x05, 0x53, 0xac, 0x03, 0x45, 0x24, 0x74, 0x1d, 0x0b, 0x53, 0x3e, 0x81, 0x88, 0x0c, 0x53, 0xa3, + 0x70, 0x2d, 0xc7, 0x47, 0x1d, 0x1a, 0xf0, 0x55, 0x00, 0x45, 0x6b, 0xd6, 0x66, 0x47, 0xdc, 0xc6, + 0xaa, 0xc9, 0xaa, 0x75, 0x48, 0xa0, 0x2e, 0x5a, 0x70, 0x05, 0x4c, 0xbb, 0x78, 0x97, 0xb8, 0x35, + 0xe2, 0x12, 0x2b, 0x0e, 0x22, 0x6e, 0x4a, 0xcc, 0x68, 0x33, 0xad, 0x66, 0x69, 0x7a, 0x23, 0xcf, + 0x44, 0x9d, 0xf2, 0xe5, 0xab, 0xf9, 0xf3, 0xa4, 0x2f, 0x5c, 0x34, 0xbc, 0x3f, 0x29, 0x80, 0xb9, + 0xde, 0x49, 0x01, 0xbf, 0xa5, 0xda, 0x53, 0xd1, 0x75, 0xbd, 0x7e, 0x06, 0xa9, 0x27, 0x5b, 0x72, + 0xd0, 0xd9, 0x8e, 0xc3, 0x63, 0x76, 0x67, 0x62, 0x37, 0x1d, 0xbd, 0x77, 0xce, 0x02, 0x9d, 0xd9, + 0xaf, 0x8e, 0x8a, 0x9b, 0x18, 0xbb, 0xfc, 0xe2, 0xc5, 0x2e, 0x29, 0x7f, 0xd8, 0x31, 0x5e, 0x66, + 0x87, 0x15, 0x7e, 0xcf, 0x00, 0x93, 0x41, 0x48, 0xfc, 0xe5, 0xed, 0xf5, 0x7b, 0xff, 0x2f, 0x0e, + 0xad, 0x0c, 0xd0, 0xfa, 0xa3, 0xbb, 0xc8, 0x66, 0x5c, 0x61, 0x6b, 0x3b, 0x0a, 0x42, 0x5a, 0xbd, + 0xd0, 0x6a, 0x96, 0x26, 0xb7, 0xda, 0x51, 0x50, 0x1e, 0xb6, 0xec, 0x81, 0x99, 0xb5, 0xa3, 0x98, + 0x44, 0x3e, 0x76, 0x57, 0x03, 0x2b, 0xf1, 0x88, 0x1f, 0x0b, 0x1f, 0x73, 0x23, 0xbb, 0xf1, 0x90, + 0x23, 0xfb, 0x15, 0x30, 0x90, 0x44, 0xae, 0xcc, 0xda, 0x31, 0xf5, 0x10, 0x85, 0x36, 0x10, 0xa3, + 0x97, 0xaf, 0x82, 0x41, 0xe6, 0x27, 0xbc, 0x04, 0x06, 0x22, 0x7c, 0xc8, 0xad, 0x8e, 0x57, 0x47, + 0x98, 0x08, 0xc2, 0x87, 0x88, 0xd1, 0xca, 0x7f, 0x2e, 0x81, 0xc9, 0xdc, 0x5a, 0xe0, 0x1c, 0x28, + 0xa8, 0xd7, 0x2d, 0x20, 0x8d, 0x16, 0xd6, 0x57, 0x51, 0xc1, 0xb1, 0xe1, 0x0b, 0xaa, 0xba, 0x0a, + 0xd0, 0x92, 0x2a, 0xd8, 0x9c, 0xca, 0x5a, 0xa3, 0xcc, 0x1c, 0x73, 0x24, 0x2d, 0x8f, 0xcc, 0x07, + 0xb2, 0x27, 0x4f, 0x85, 0xf0, 0x81, 0xec, 0x21, 0x46, 0x7b, 0xd4, 0xf7, 0x8a, 0xf4, 0xc1, 0x64, + 0xe8, 0x21, 0x1e, 0x4c, 0x86, 0x3f, 0xf3, 0xc1, 0xe4, 0x71, 0x30, 0x14, 0x3b, 0xb1, 0x4b, 0xcc, + 0x91, 0xf6, 0x86, 0xf4, 0x0e, 0x23, 0x22, 0xc1, 0x83, 0x04, 0x8c, 0xd8, 0x64, 0x0f, 0x27, 0x6e, + 0x6c, 0x16, 0x79, 0xf6, 0x7c, 0xf5, 0x74, 0xd9, 0x23, 0x1e, 0x14, 0x56, 0x85, 0x49, 0x94, 0xda, + 0x86, 0x4f, 0x80, 0x11, 0x0f, 0x1f, 0x39, 0x5e, 0xe2, 0xf1, 0xae, 0xcd, 0x10, 0x62, 0x9b, 0x82, + 0x84, 0x52, 0x1e, 0x2b, 0x82, 0xe4, 0xc8, 0x72, 0x13, 0xea, 0x34, 0x88, 0x64, 0xca, 0xb6, 0x4a, + 0x15, 0xc1, 0xb5, 0x1c, 0x1f, 0x75, 0x68, 0x70, 0x30, 0xc7, 0xe7, 0xca, 0x63, 0x1a, 0x98, 0x20, + 0xa1, 0x94, 0xd7, 0x0e, 0x26, 0xe5, 0xc7, 0x7b, 0x81, 0x49, 0xe5, 0x0e, 0x0d, 0xf8, 0x34, 0x18, + 0xf5, 0xf0, 0xd1, 0x06, 0xf1, 0xeb, 0xf1, 0xbe, 0x39, 0xb1, 0x60, 0x2c, 0x0e, 0x54, 0x27, 0x5a, + 0xcd, 0xd2, 0xe8, 0x66, 0x4a, 0x44, 0x19, 0x9f, 0x0b, 0x3b, 0xbe, 0x14, 0x3e, 0xaf, 0x09, 0xa7, + 0x44, 0x94, 0xf1, 0x59, 0x77, 0x10, 0xe2, 0x98, 0x9d, 0x2b, 0x73, 0xb2, 0x7d, 0x78, 0xdd, 0x16, + 0x64, 0x94, 0xf2, 0xe1, 0x22, 0x28, 0x7a, 0xf8, 0x88, 0xcf, 0x75, 0xe6, 
0x14, 0x37, 0xcb, 0x1f, + 0xf5, 0x36, 0x25, 0x0d, 0x29, 0x2e, 0x97, 0x74, 0x7c, 0x21, 0x39, 0xad, 0x49, 0x4a, 0x1a, 0x52, + 0x5c, 0x96, 0xbf, 0x89, 0xef, 0xdc, 0x4f, 0x88, 0x10, 0x86, 0x3c, 0x32, 0x2a, 0x7f, 0xef, 0x66, + 0x2c, 0xa4, 0xcb, 0xb1, 0xb9, 0xca, 0x4b, 0xdc, 0xd8, 0x09, 0x5d, 0xb2, 0xb5, 0x67, 0x5e, 0xe0, + 0xf1, 0xe7, 0xed, 0xf4, 0xa6, 0xa2, 0x22, 0x4d, 0x02, 0xbe, 0x0d, 0x06, 0x89, 0x9f, 0x78, 0xe6, + 0x45, 0x7e, 0x7d, 0x9f, 0x36, 0xfb, 0xd4, 0x79, 0x59, 0xf3, 0x13, 0x0f, 0x71, 0xcb, 0xf0, 0x05, + 0x30, 0xe1, 0xe1, 0x23, 0x56, 0x04, 0x48, 0x14, 0xb3, 0x61, 0x6f, 0x86, 0xaf, 0x7b, 0x9a, 0x35, + 0x92, 0x9b, 0x3a, 0x03, 0xb5, 0xcb, 0x71, 0x45, 0xc7, 0xd7, 0x14, 0x67, 0x35, 0x45, 0x9d, 0x81, + 0xda, 0xe5, 0x58, 0x90, 0x23, 0x72, 0x3f, 0x71, 0x22, 0x62, 0x9b, 0xff, 0xc3, 0x7b, 0x4f, 0xf9, + 0xc6, 0x2a, 0x68, 0x48, 0x71, 0xe1, 0xfd, 0x74, 0xec, 0x37, 0xf9, 0xe1, 0xdb, 0xee, 0x5b, 0xe9, + 0xde, 0x8a, 0x96, 0xa3, 0x08, 0x1f, 0x8b, 0x5b, 0x45, 0x1f, 0xf8, 0xa1, 0x0f, 0x86, 0xb0, 0xeb, + 0x6e, 0xed, 0x99, 0x97, 0x78, 0xc4, 0xfb, 0x78, 0x5b, 0xa8, 0x0a, 0xb3, 0xcc, 0xec, 0x23, 0x01, + 0xc3, 0xf0, 0x02, 0x9f, 0xe5, 0xc2, 0xdc, 0x99, 0xe1, 0x6d, 0x31, 0xfb, 0x48, 0xc0, 0xf0, 0xf5, + 0xf9, 0xc7, 0x5b, 0x7b, 0xe6, 0x63, 0x67, 0xb7, 0x3e, 0x66, 0x1f, 0x09, 0x18, 0x68, 0x83, 0x01, + 0x3f, 0x88, 0xcd, 0xcb, 0xfd, 0xbe, 0x7b, 0xf9, 0x6d, 0x72, 0x3b, 0x88, 0x11, 0x33, 0x0f, 0x7f, + 0x60, 0x00, 0x10, 0x66, 0x99, 0x78, 0xe5, 0xb4, 0x63, 0x78, 0x0e, 0xad, 0x92, 0x65, 0xef, 0x9a, + 0x1f, 0x47, 0xc7, 0xd9, 0xec, 0xa7, 0x65, 0xb9, 0xe6, 0x00, 0xfc, 0x99, 0x01, 0x2e, 0xea, 0xed, + 0xae, 0xf2, 0x6c, 0x9e, 0xc7, 0x61, 0xab, 0x8f, 0x89, 0x5c, 0x0d, 0x02, 0xb7, 0x6a, 0xb6, 0x9a, + 0xa5, 0x8b, 0xcb, 0x5d, 0x00, 0x51, 0x57, 0x37, 0xe0, 0xaf, 0x0c, 0x30, 0x2d, 0xab, 0xa3, 0xe6, + 0x5c, 0x89, 0x87, 0xed, 0xed, 0x3e, 0x86, 0x2d, 0x0f, 0x21, 0xa2, 0xa7, 0xbe, 0xf4, 0x75, 0xf0, + 0x51, 0xa7, 0x57, 0xf0, 0xb7, 0x06, 0x18, 0xb7, 0x49, 0x48, 0x7c, 0x9b, 0xf8, 0x16, 0x73, 0x73, + 0xe1, 0xb4, 0xb3, 0x7d, 0xde, 0xcd, 0x55, 0xcd, 0xba, 0xf0, 0xb0, 0x22, 0x3d, 0x1c, 0xd7, 0x59, + 0x27, 0xcd, 0xd2, 0x6c, 0xa6, 0xaa, 0x73, 0x50, 0x9b, 0x83, 0xf0, 0x87, 0x06, 0x98, 0xcc, 0xc2, + 0x2e, 0x2e, 0x88, 0xab, 0x67, 0xb3, 0xf1, 0xbc, 0x05, 0x5d, 0x6e, 0xc7, 0x42, 0x79, 0x70, 0xf8, + 0x6b, 0x83, 0x75, 0x5b, 0xe9, 0xac, 0x46, 0xcd, 0x32, 0x8f, 0xe0, 0x1b, 0xfd, 0x8c, 0xa0, 0x32, + 0x2e, 0x02, 0x78, 0x3d, 0xeb, 0xe4, 0x14, 0xe7, 0xa4, 0x59, 0x9a, 0xd1, 0xe3, 0xa7, 0x18, 0x48, + 0x77, 0x0e, 0xbe, 0x67, 0x80, 0x71, 0x92, 0x35, 0xcc, 0xd4, 0x7c, 0xfc, 0xb4, 0xa1, 0xeb, 0xda, + 0x7e, 0x8b, 0x71, 0x5a, 0x63, 0x51, 0xd4, 0x06, 0xcb, 0x7a, 0x3f, 0x72, 0x84, 0xbd, 0xd0, 0x25, + 0xe6, 0xff, 0xf6, 0xaf, 0xf7, 0x5b, 0x13, 0x26, 0x51, 0x6a, 0x1b, 0x5e, 0x07, 0x45, 0x3f, 0x71, + 0x5d, 0xbc, 0xeb, 0x12, 0xf3, 0x09, 0xde, 0x45, 0xa8, 0x37, 0xbe, 0xdb, 0x92, 0x8e, 0x94, 0x04, + 0xdc, 0x03, 0x0b, 0x47, 0xb7, 0xd4, 0x0f, 0x20, 0xba, 0x3e, 0xa2, 0x99, 0xd7, 0xb8, 0x95, 0xb9, + 0x56, 0xb3, 0x34, 0xbb, 0xd3, 0xfd, 0x99, 0xed, 0x81, 0x36, 0xe0, 0x9b, 0xe0, 0x31, 0x4d, 0x66, + 0xcd, 0xdb, 0x25, 0xb6, 0x4d, 0xec, 0x74, 0xd0, 0x32, 0xff, 0x8f, 0x43, 0xa8, 0x73, 0xbc, 0x93, + 0x17, 0x40, 0x9f, 0xa5, 0x0d, 0x37, 0xc0, 0xac, 0xc6, 0x5e, 0xf7, 0xe3, 0xad, 0xa8, 0x16, 0x47, + 0x8e, 0x5f, 0x37, 0x17, 0xb9, 0xdd, 0x8b, 0xe9, 0xe9, 0xdb, 0xd1, 0x78, 0xa8, 0x87, 0x0e, 0xfc, + 0x7a, 0x9b, 0x35, 0xfe, 0xf1, 0x00, 0x87, 0xb7, 0xc8, 0x31, 0x35, 0x9f, 0xe4, 0xcd, 0x05, 0xdf, + 0xe7, 0x1d, 0x8d, 0x8e, 0x7a, 0xc8, 0xc3, 0xaf, 0x81, 0x0b, 0x39, 0x0e, 0x9b, 0x2b, 0xcc, 0xa7, + 
0xc4, 0x80, 0xc0, 0x3a, 0xd1, 0x9d, 0x94, 0x88, 0xba, 0x49, 0xc2, 0xaf, 0x00, 0xa8, 0x91, 0x37, + 0x71, 0xc8, 0xf5, 0x9f, 0x16, 0xb3, 0x0a, 0xdb, 0xd1, 0x1d, 0x49, 0x43, 0x5d, 0xe4, 0xe6, 0xd8, + 0xcc, 0x9a, 0x2b, 0x95, 0x70, 0x0a, 0x0c, 0x1c, 0x10, 0xf9, 0x85, 0x14, 0xb1, 0x3f, 0xe1, 0x5b, + 0x60, 0xa8, 0x81, 0xdd, 0x24, 0x9d, 0xb8, 0xfb, 0x77, 0xa5, 0x22, 0x61, 0xf7, 0xa5, 0xc2, 0x8b, + 0xc6, 0xdc, 0x07, 0x06, 0x98, 0xed, 0x5e, 0xbc, 0xbf, 0x28, 0x8f, 0x7e, 0x6a, 0x80, 0xe9, 0x8e, + 0x3a, 0xdd, 0xc5, 0x19, 0xb7, 0xdd, 0x99, 0x7b, 0x7d, 0x2c, 0xb8, 0x22, 0xdf, 0x78, 0xe3, 0xa8, + 0x7b, 0xf6, 0x7d, 0x03, 0x4c, 0xe5, 0xeb, 0xdf, 0x17, 0x14, 0xa5, 0xf2, 0xfb, 0x05, 0x30, 0xdb, + 0xbd, 0xd5, 0x85, 0x9e, 0x1a, 0xe2, 0xfb, 0xfe, 0x0e, 0xd2, 0xed, 0x65, 0xf4, 0x5d, 0x03, 0x8c, + 0xbd, 0xa3, 0xe4, 0xd2, 0x0f, 0x77, 0xfd, 0x7c, 0x7c, 0x49, 0x6f, 0x98, 0x8c, 0x41, 0x91, 0x0e, + 0x59, 0xfe, 0x8d, 0x01, 0x66, 0xba, 0xde, 0x9a, 0xf0, 0x1a, 0x18, 0xc6, 0xae, 0x1b, 0x1c, 0x8a, + 0x47, 0x33, 0xed, 0x05, 0x7a, 0x99, 0x53, 0x91, 0xe4, 0x6a, 0x31, 0x2b, 0x7c, 0x0e, 0x31, 0x2b, + 0xff, 0xde, 0x00, 0x97, 0x3f, 0x2b, 0xeb, 0x3e, 0xef, 0x3d, 0x5c, 0x04, 0x45, 0xd9, 0xd3, 0x1e, + 0xf3, 0xfd, 0x93, 0x45, 0x4c, 0x56, 0x04, 0xfe, 0xc3, 0x10, 0xf1, 0x57, 0xf9, 0x17, 0x06, 0x98, + 0xaa, 0x91, 0xa8, 0xe1, 0x58, 0x04, 0x91, 0x3d, 0x12, 0x11, 0xdf, 0x22, 0x70, 0x09, 0x8c, 0xf2, + 0x0f, 0x6b, 0x21, 0xb6, 0xd2, 0xcf, 0x01, 0xd3, 0x32, 0xd0, 0xa3, 0xb7, 0x53, 0x06, 0xca, 0x64, + 0xd4, 0xa7, 0x83, 0x42, 0xcf, 0x4f, 0x07, 0x97, 0xc1, 0x60, 0x98, 0xbd, 0xb3, 0x16, 0x19, 0x97, + 0x3f, 0xad, 0x72, 0x2a, 0xe7, 0x06, 0x51, 0xcc, 0x1f, 0x93, 0x86, 0x24, 0x37, 0x88, 0x62, 0xc4, + 0xa9, 0xe5, 0x3f, 0x18, 0xe0, 0x42, 0xfa, 0x0b, 0x0f, 0xd7, 0x21, 0x7e, 0xbc, 0x12, 0xf8, 0x7b, + 0x4e, 0x1d, 0x5e, 0x12, 0xef, 0x69, 0xda, 0x23, 0x55, 0xfa, 0x96, 0x06, 0xef, 0x83, 0x11, 0x2a, + 0x56, 0x25, 0x03, 0xfe, 0xea, 0xa3, 0x07, 0x3c, 0x1f, 0x1e, 0xd1, 0x0e, 0xa4, 0xd4, 0x14, 0x87, + 0xc5, 0xdc, 0xc2, 0xd5, 0xc4, 0xb7, 0xe5, 0x9b, 0xea, 0xb8, 0x88, 0xf9, 0xca, 0xb2, 0xa0, 0x21, + 0xc5, 0x2d, 0xff, 0xdd, 0x00, 0xd3, 0x1d, 0xbf, 0x58, 0x81, 0xdf, 0x31, 0xc0, 0xb8, 0xa5, 0x2d, + 0x4f, 0x66, 0xee, 0xe6, 0xe9, 0x7f, 0x15, 0xa3, 0x19, 0x15, 0x77, 0xaa, 0x4e, 0x41, 0x6d, 0xa0, + 0x70, 0x07, 0x98, 0x56, 0xee, 0xc7, 0x61, 0xb9, 0xcf, 0x4d, 0x97, 0x5b, 0xcd, 0x92, 0xb9, 0xd2, + 0x43, 0x06, 0xf5, 0xd4, 0xae, 0x2e, 0x7e, 0xf4, 0xe9, 0xfc, 0xb9, 0x8f, 0x3f, 0x9d, 0x3f, 0xf7, + 0xc9, 0xa7, 0xf3, 0xe7, 0xde, 0x6d, 0xcd, 0x1b, 0x1f, 0xb5, 0xe6, 0x8d, 0x8f, 0x5b, 0xf3, 0xc6, + 0x27, 0xad, 0x79, 0xe3, 0xaf, 0xad, 0x79, 0xe3, 0x47, 0x7f, 0x9b, 0x3f, 0xf7, 0x46, 0xa1, 0x71, + 0xe3, 0xbf, 0x01, 0x00, 0x00, 0xff, 0xff, 0x9b, 0x09, 0x4a, 0x32, 0x30, 0x2a, 0x00, 0x00, } func (m *ConversionRequest) Marshal() (dAtA []byte, err error) { @@ -1865,6 +1866,15 @@ func (m *JSONSchemaProps) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.XMapType != nil { + i -= len(*m.XMapType) + copy(dAtA[i:], *m.XMapType) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.XMapType))) + i-- + dAtA[i] = 0x2 + i-- + dAtA[i] = 0xda + } if m.XListType != nil { i -= len(*m.XListType) copy(dAtA[i:], *m.XListType) @@ -3128,6 +3138,10 @@ func (m *JSONSchemaProps) Size() (n int) { l = len(*m.XListType) n += 2 + l + sovGenerated(uint64(l)) } + if m.XMapType != nil { + l = len(*m.XMapType) + n += 2 + l + sovGenerated(uint64(l)) + } return n } @@ -3604,6 +3618,7 @@ func (this *JSONSchemaProps) String() string { `XIntOrString:` + fmt.Sprintf("%v", this.XIntOrString) + `,`, 
`XListMapKeys:` + fmt.Sprintf("%v", this.XListMapKeys) + `,`, `XListType:` + valueToStringGenerated(this.XListType) + `,`, + `XMapType:` + valueToStringGenerated(this.XMapType) + `,`, `}`, }, "") return s @@ -8030,6 +8045,39 @@ func (m *JSONSchemaProps) Unmarshal(dAtA []byte) error { s := string(dAtA[iNdEx:postIndex]) m.XListType = &s iNdEx = postIndex + case 43: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field XMapType", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := string(dAtA[iNdEx:postIndex]) + m.XMapType = &s + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.proto b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.proto index 2fbed9c14e..de4229cd86 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.proto +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/generated.proto @@ -204,7 +204,7 @@ message CustomResourceDefinitionSpec { optional CustomResourceDefinitionNames names = 3; // scope indicates whether the defined custom resource is cluster- or namespace-scoped. - // Allowed values are `Cluster` and `Namespaced`. Default is `Namespaced`. + // Allowed values are `Cluster` and `Namespaced`. optional string scope = 4; // versions is the list of all API versions of the defined custom resource. @@ -359,6 +359,32 @@ message JSONSchemaProps { optional string type = 5; + // format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: + // + // - bsonobjectid: a bson object ID, i.e. a 24 characters hex string + // - uri: an URI as parsed by Golang net/url.ParseRequestURI + // - email: an email address as parsed by Golang net/mail.ParseAddress + // - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. 
+ // - ipv4: an IPv4 IP as parsed by Golang net.ParseIP + // - ipv6: an IPv6 IP as parsed by Golang net.ParseIP + // - cidr: a CIDR as parsed by Golang net.ParseCIDR + // - mac: a MAC address as parsed by Golang net.ParseMAC + // - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" + // - isbn10: an ISBN10 number string like "0321751043" + // - isbn13: an ISBN13 number string like "978-0321751041" + // - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$ with any non digit characters mixed in + // - ssn: a U.S. social security number following the regex ^\\d{3}[- ]?\\d{2}[- ]?\\d{4}$ + // - hexcolor: an hexadecimal color code like "#FFFFFF: following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ + // - rgbcolor: an RGB color code like rgb like "rgb(255,255,2559" + // - byte: base64 encoded binary data + // - password: any kind of string + // - date: a date string like "2006-01-02" as defined by full-date in RFC3339 + // - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format + // - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. optional string format = 6; optional string title = 7; @@ -485,6 +511,18 @@ message JSONSchemaProps { // Defaults to atomic for arrays. // +optional optional string xKubernetesListType = 42; + + // x-kubernetes-map-type annotates an object to further describe its topology. + // This extension must only be used when type is object and may have 2 possible values: + // + // 1) `granular`: + // These maps are actual maps (key-value pairs) and each fields are independent + // from each other (they can each be manipulated by separate actors). This is + // the default behaviour for all maps. + // 2) `atomic`: the list is treated as a single entity, like a scalar. + // Atomic maps will be entirely replaced when updated. 
+ // +optional + optional string xKubernetesMapType = 43; } // JSONSchemaPropsOrArray represents a value that can either be a JSONSchemaProps diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/register.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/register.go index a1b2b60a61..bd6a6ed006 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/register.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/register.go @@ -38,7 +38,7 @@ func Resource(resource string) schema.GroupResource { } var ( - SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes, addDefaultingFuncs, addConversionFuncs) + SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes, addDefaultingFuncs) localSchemeBuilder = &SchemeBuilder AddToScheme = localSchemeBuilder.AddToScheme ) @@ -58,5 +58,5 @@ func init() { // We only register manually written functions here. The registration of the // generated functions takes place in the generated files. The separation // makes the code compile even when the generated files are missing. - localSchemeBuilder.Register(addDefaultingFuncs, addConversionFuncs) + localSchemeBuilder.Register(addDefaultingFuncs) } diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types.go index 000f3fa1cc..d0c41c6c46 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types.go @@ -46,7 +46,7 @@ type CustomResourceDefinitionSpec struct { // names specify the resource and kind names for the custom resource. Names CustomResourceDefinitionNames `json:"names" protobuf:"bytes,3,opt,name=names"` // scope indicates whether the defined custom resource is cluster- or namespace-scoped. - // Allowed values are `Cluster` and `Namespaced`. Default is `Namespaced`. + // Allowed values are `Cluster` and `Namespaced`. Scope ResourceScope `json:"scope" protobuf:"bytes,4,opt,name=scope,casttype=ResourceScope"` // versions is the list of all API versions of the defined custom resource. // Version names are used to compute the order in which served versions are listed in API discovery. diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types_jsonschema.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types_jsonschema.go index cd6021bae2..cd60312617 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types_jsonschema.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/types_jsonschema.go @@ -23,8 +23,36 @@ type JSONSchemaProps struct { Ref *string `json:"$ref,omitempty" protobuf:"bytes,3,opt,name=ref"` Description string `json:"description,omitempty" protobuf:"bytes,4,opt,name=description"` Type string `json:"type,omitempty" protobuf:"bytes,5,opt,name=type"` - Format string `json:"format,omitempty" protobuf:"bytes,6,opt,name=format"` - Title string `json:"title,omitempty" protobuf:"bytes,7,opt,name=title"` + + // format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: + // + // - bsonobjectid: a bson object ID, i.e. a 24 characters hex string + // - uri: an URI as parsed by Golang net/url.ParseRequestURI + // - email: an email address as parsed by Golang net/mail.ParseAddress + // - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. 
+ // - ipv4: an IPv4 IP as parsed by Golang net.ParseIP + // - ipv6: an IPv6 IP as parsed by Golang net.ParseIP + // - cidr: a CIDR as parsed by Golang net.ParseCIDR + // - mac: a MAC address as parsed by Golang net.ParseMAC + // - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" + // - isbn10: an ISBN10 number string like "0321751043" + // - isbn13: an ISBN13 number string like "978-0321751041" + // - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$ with any non digit characters mixed in + // - ssn: a U.S. social security number following the regex ^\\d{3}[- ]?\\d{2}[- ]?\\d{4}$ + // - hexcolor: an hexadecimal color code like "#FFFFFF: following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ + // - rgbcolor: an RGB color code like rgb like "rgb(255,255,2559" + // - byte: base64 encoded binary data + // - password: any kind of string + // - date: a date string like "2006-01-02" as defined by full-date in RFC3339 + // - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format + // - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. + Format string `json:"format,omitempty" protobuf:"bytes,6,opt,name=format"` + + Title string `json:"title,omitempty" protobuf:"bytes,7,opt,name=title"` // default is a default value for undefined object fields. // Defaulting is a beta feature under the CustomResourceDefaulting feature gate. // Defaulting requires spec.preserveUnknownFields to be false. @@ -118,6 +146,18 @@ type JSONSchemaProps struct { // Defaults to atomic for arrays. // +optional XListType *string `json:"x-kubernetes-list-type,omitempty" protobuf:"bytes,42,opt,name=xKubernetesListType"` + + // x-kubernetes-map-type annotates an object to further describe its topology. + // This extension must only be used when type is object and may have 2 possible values: + // + // 1) `granular`: + // These maps are actual maps (key-value pairs) and each fields are independent + // from each other (they can each be manipulated by separate actors). This is + // the default behaviour for all maps. + // 2) `atomic`: the list is treated as a single entity, like a scalar. + // Atomic maps will be entirely replaced when updated. + // +optional + XMapType *string `json:"x-kubernetes-map-type,omitempty" protobuf:"bytes,43,opt,name=xKubernetesMapType"` } // JSON represents any valid JSON value. 
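The hunks above introduce the `XMapType` field (`x-kubernetes-map-type`) on `JSONSchemaProps`. As a point of reference, here is a minimal Go sketch of how a consumer of this vendored package might set the new field when building a CRD schema programmatically; the `main` wrapper, the package alias, and the choice of the `atomic` value are illustrative assumptions, not part of the vendored change itself.

```go
package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// Mark an object-typed schema as an atomic map: on update the whole
	// map is replaced rather than merged field by field.
	atomic := "atomic"
	schema := apiextv1.JSONSchemaProps{
		Type:     "object",
		XMapType: &atomic, // serialized as x-kubernetes-map-type: atomic
		Properties: map[string]apiextv1.JSONSchemaProps{
			"key": {Type: "string"},
		},
	}
	fmt.Println(*schema.XMapType)
}
```

Leaving `XMapType` nil keeps the default `granular` treatment described in the field's doc comment.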
diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/zz_generated.conversion.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/zz_generated.conversion.go index de5c8089a1..11fb2b1e6d 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/zz_generated.conversion.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1/zz_generated.conversion.go @@ -46,16 +46,6 @@ func RegisterConversions(s *runtime.Scheme) error { }); err != nil { return err } - if err := s.AddGeneratedConversionFunc((*CustomResourceConversion)(nil), (*apiextensions.CustomResourceConversion)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1_CustomResourceConversion_To_apiextensions_CustomResourceConversion(a.(*CustomResourceConversion), b.(*apiextensions.CustomResourceConversion), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*apiextensions.CustomResourceConversion)(nil), (*CustomResourceConversion)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_apiextensions_CustomResourceConversion_To_v1_CustomResourceConversion(a.(*apiextensions.CustomResourceConversion), b.(*CustomResourceConversion), scope) - }); err != nil { - return err - } if err := s.AddGeneratedConversionFunc((*CustomResourceDefinition)(nil), (*apiextensions.CustomResourceDefinition)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1_CustomResourceDefinition_To_apiextensions_CustomResourceDefinition(a.(*CustomResourceDefinition), b.(*apiextensions.CustomResourceDefinition), scope) }); err != nil { @@ -96,16 +86,6 @@ func RegisterConversions(s *runtime.Scheme) error { }); err != nil { return err } - if err := s.AddGeneratedConversionFunc((*CustomResourceDefinitionSpec)(nil), (*apiextensions.CustomResourceDefinitionSpec)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1_CustomResourceDefinitionSpec_To_apiextensions_CustomResourceDefinitionSpec(a.(*CustomResourceDefinitionSpec), b.(*apiextensions.CustomResourceDefinitionSpec), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*apiextensions.CustomResourceDefinitionSpec)(nil), (*CustomResourceDefinitionSpec)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_apiextensions_CustomResourceDefinitionSpec_To_v1_CustomResourceDefinitionSpec(a.(*apiextensions.CustomResourceDefinitionSpec), b.(*CustomResourceDefinitionSpec), scope) - }); err != nil { - return err - } if err := s.AddGeneratedConversionFunc((*CustomResourceDefinitionStatus)(nil), (*apiextensions.CustomResourceDefinitionStatus)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1_CustomResourceDefinitionStatus_To_apiextensions_CustomResourceDefinitionStatus(a.(*CustomResourceDefinitionStatus), b.(*apiextensions.CustomResourceDefinitionStatus), scope) }); err != nil { @@ -176,26 +156,11 @@ func RegisterConversions(s *runtime.Scheme) error { }); err != nil { return err } - if err := s.AddGeneratedConversionFunc((*JSON)(nil), (*apiextensions.JSON)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1_JSON_To_apiextensions_JSON(a.(*JSON), b.(*apiextensions.JSON), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*apiextensions.JSON)(nil), (*JSON)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_apiextensions_JSON_To_v1_JSON(a.(*apiextensions.JSON), 
b.(*JSON), scope) - }); err != nil { - return err - } if err := s.AddGeneratedConversionFunc((*JSONSchemaProps)(nil), (*apiextensions.JSONSchemaProps)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1_JSONSchemaProps_To_apiextensions_JSONSchemaProps(a.(*JSONSchemaProps), b.(*apiextensions.JSONSchemaProps), scope) }); err != nil { return err } - if err := s.AddGeneratedConversionFunc((*apiextensions.JSONSchemaProps)(nil), (*JSONSchemaProps)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_apiextensions_JSONSchemaProps_To_v1_JSONSchemaProps(a.(*apiextensions.JSONSchemaProps), b.(*JSONSchemaProps), scope) - }); err != nil { - return err - } if err := s.AddGeneratedConversionFunc((*JSONSchemaPropsOrArray)(nil), (*apiextensions.JSONSchemaPropsOrArray)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1_JSONSchemaPropsOrArray_To_apiextensions_JSONSchemaPropsOrArray(a.(*JSONSchemaPropsOrArray), b.(*apiextensions.JSONSchemaPropsOrArray), scope) }); err != nil { @@ -912,6 +877,7 @@ func autoConvert_v1_JSONSchemaProps_To_apiextensions_JSONSchemaProps(in *JSONSch out.XIntOrString = in.XIntOrString out.XListMapKeys = *(*[]string)(unsafe.Pointer(&in.XListMapKeys)) out.XListType = (*string)(unsafe.Pointer(in.XListType)) + out.XMapType = (*string)(unsafe.Pointer(in.XMapType)) return nil } @@ -1099,6 +1065,7 @@ func autoConvert_apiextensions_JSONSchemaProps_To_v1_JSONSchemaProps(in *apiexte out.XIntOrString = in.XIntOrString out.XListMapKeys = *(*[]string)(unsafe.Pointer(&in.XListMapKeys)) out.XListType = (*string)(unsafe.Pointer(in.XListType)) + out.XMapType = (*string)(unsafe.Pointer(in.XMapType)) return nil } diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/conversion.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/conversion.go index f9951009dc..e014ce62fd 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/conversion.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/conversion.go @@ -18,25 +18,11 @@ package v1beta1 import ( "k8s.io/apimachinery/pkg/conversion" - "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/util/json" "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions" ) -func addConversionFuncs(scheme *runtime.Scheme) error { - // Add non-generated conversion functions - err := scheme.AddConversionFuncs( - Convert_apiextensions_JSONSchemaProps_To_v1beta1_JSONSchemaProps, - Convert_apiextensions_JSON_To_v1beta1_JSON, - Convert_v1beta1_JSON_To_apiextensions_JSON, - ) - if err != nil { - return err - } - return nil -} - func Convert_apiextensions_JSONSchemaProps_To_v1beta1_JSONSchemaProps(in *apiextensions.JSONSchemaProps, out *JSONSchemaProps, s conversion.Scope) error { if err := autoConvert_apiextensions_JSONSchemaProps_To_v1beta1_JSONSchemaProps(in, out, s); err != nil { return err diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/deepcopy.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/deepcopy.go index a4560dc5f6..857beac4ab 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/deepcopy.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/deepcopy.go @@ -260,5 +260,11 @@ func (in *JSONSchemaProps) DeepCopy() *JSONSchemaProps { } } + if in.XMapType != nil { + in, out := &in.XMapType, &out.XMapType + *out = new(string) + **out = **in + } + return out } diff --git 
a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.pb.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.pb.go index c28384c22a..6e11dcc9f5 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.pb.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.pb.go @@ -756,192 +756,193 @@ func init() { } var fileDescriptor_98a4cc6918394e53 = []byte{ - // 2955 bytes of a gzipped FileDescriptorProto + // 2976 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x5a, 0xcf, 0x73, 0x23, 0x47, 0xf5, 0xdf, 0x91, 0x2c, 0x5b, 0x6e, 0xdb, 0x6b, 0xbb, 0x77, 0xed, 0xcc, 0x3a, 0x1b, 0xcb, 0xab, - 0x7c, 0xb3, 0x5f, 0x27, 0xec, 0xca, 0xc9, 0x92, 0x90, 0x90, 0x2a, 0x8a, 0xb2, 0x6c, 0x27, 0x38, - 0x59, 0x5b, 0xa6, 0xb5, 0x9b, 0x18, 0xf2, 0xb3, 0xad, 0x69, 0xc9, 0xb3, 0x9e, 0x5f, 0x3b, 0x3d, - 0x23, 0xdb, 0x15, 0xa0, 0xf8, 0x51, 0x29, 0x28, 0x0a, 0x08, 0x45, 0x72, 0xa1, 0x0a, 0x0e, 0x81, - 0xe2, 0xc2, 0x01, 0x0e, 0x70, 0x83, 0x3f, 0x20, 0xc7, 0x14, 0xa7, 0x1c, 0x28, 0x15, 0xab, 0x5c, - 0x39, 0x52, 0x05, 0xe5, 0x13, 0xd5, 0x3f, 0xa6, 0x67, 0x34, 0x92, 0x76, 0x5d, 0x59, 0x29, 0xcb, - 0xcd, 0x7a, 0xbf, 0x3e, 0xaf, 0x5f, 0xbf, 0x7e, 0xfd, 0xfa, 0x8d, 0x41, 0xfd, 0xe0, 0x39, 0x5a, - 0x32, 0xdd, 0x95, 0x83, 0x70, 0x8f, 0xf8, 0x0e, 0x09, 0x08, 0x5d, 0x69, 0x12, 0xc7, 0x70, 0xfd, - 0x15, 0xc9, 0xc0, 0x9e, 0x49, 0x8e, 0x02, 0xe2, 0x50, 0xd3, 0x75, 0xe8, 0x55, 0xec, 0x99, 0x94, - 0xf8, 0x4d, 0xe2, 0xaf, 0x78, 0x07, 0x0d, 0xc6, 0xa3, 0x9d, 0x02, 0x2b, 0xcd, 0xa7, 0xf6, 0x48, - 0x80, 0x9f, 0x5a, 0x69, 0x10, 0x87, 0xf8, 0x38, 0x20, 0x46, 0xc9, 0xf3, 0xdd, 0xc0, 0x85, 0x5f, - 0x11, 0xe6, 0x4a, 0x1d, 0xd2, 0x6f, 0x29, 0x73, 0x25, 0xef, 0xa0, 0xc1, 0x78, 0xb4, 0x53, 0xa0, - 0x24, 0xcd, 0x2d, 0x5c, 0x6d, 0x98, 0xc1, 0x7e, 0xb8, 0x57, 0xaa, 0xb9, 0xf6, 0x4a, 0xc3, 0x6d, - 0xb8, 0x2b, 0xdc, 0xea, 0x5e, 0x58, 0xe7, 0xbf, 0xf8, 0x0f, 0xfe, 0x97, 0x40, 0x5b, 0x78, 0x3a, - 0x76, 0xde, 0xc6, 0xb5, 0x7d, 0xd3, 0x21, 0xfe, 0x71, 0xec, 0xb1, 0x4d, 0x02, 0xbc, 0xd2, 0xec, - 0xf2, 0x71, 0x61, 0xa5, 0x9f, 0x96, 0x1f, 0x3a, 0x81, 0x69, 0x93, 0x2e, 0x85, 0x2f, 0xdd, 0x4b, - 0x81, 0xd6, 0xf6, 0x89, 0x8d, 0xd3, 0x7a, 0xc5, 0x13, 0x0d, 0xcc, 0xae, 0xb9, 0x4e, 0x93, 0xf8, - 0x6c, 0x95, 0x88, 0xdc, 0x0e, 0x09, 0x0d, 0x60, 0x19, 0x64, 0x43, 0xd3, 0xd0, 0xb5, 0x25, 0x6d, - 0x79, 0xbc, 0xfc, 0xe4, 0x47, 0xad, 0xc2, 0x99, 0x76, 0xab, 0x90, 0xbd, 0xb9, 0xb9, 0x7e, 0xd2, - 0x2a, 0x5c, 0xea, 0x87, 0x14, 0x1c, 0x7b, 0x84, 0x96, 0x6e, 0x6e, 0xae, 0x23, 0xa6, 0x0c, 0x5f, - 0x04, 0xb3, 0x06, 0xa1, 0xa6, 0x4f, 0x8c, 0xd5, 0x9d, 0xcd, 0x57, 0x84, 0x7d, 0x3d, 0xc3, 0x2d, - 0x5e, 0x90, 0x16, 0x67, 0xd7, 0xd3, 0x02, 0xa8, 0x5b, 0x07, 0xee, 0x82, 0x31, 0x77, 0xef, 0x16, - 0xa9, 0x05, 0x54, 0xcf, 0x2e, 0x65, 0x97, 0x27, 0xae, 0x5d, 0x2d, 0xc5, 0x3b, 0xa8, 0x5c, 0xe0, - 0xdb, 0x26, 0x17, 0x5b, 0x42, 0xf8, 0x70, 0x23, 0xda, 0xb9, 0xf2, 0xb4, 0x44, 0x1b, 0xab, 0x08, - 0x2b, 0x28, 0x32, 0x57, 0xfc, 0x6d, 0x06, 0xc0, 0xe4, 0xe2, 0xa9, 0xe7, 0x3a, 0x94, 0x0c, 0x64, - 0xf5, 0x14, 0xcc, 0xd4, 0xb8, 0xe5, 0x80, 0x18, 0x12, 0x57, 0xcf, 0x7c, 0x16, 0xef, 0x75, 0x89, - 0x3f, 0xb3, 0x96, 0x32, 0x87, 0xba, 0x00, 0xe0, 0x0d, 0x30, 0xea, 0x13, 0x1a, 0x5a, 0x81, 0x9e, - 0x5d, 0xd2, 0x96, 0x27, 0xae, 0x5d, 0xe9, 0x0b, 0xc5, 0xf3, 0x9b, 0x25, 0x5f, 0xa9, 0xf9, 0x54, - 0xa9, 0x1a, 0xe0, 0x20, 0xa4, 0xe5, 0xb3, 0x12, 0x69, 0x14, 0x71, 0x1b, 0x48, 0xda, 0x2a, 0xfe, - 0x28, 0x03, 0x66, 0x92, 0x51, 0x6a, 
0x9a, 0xe4, 0x10, 0x1e, 0x82, 0x31, 0x5f, 0x24, 0x0b, 0x8f, - 0xd3, 0xc4, 0xb5, 0x9d, 0xd2, 0x7d, 0x1d, 0xab, 0x52, 0x57, 0x12, 0x96, 0x27, 0xd8, 0x9e, 0xc9, - 0x1f, 0x28, 0x42, 0x83, 0xef, 0x80, 0xbc, 0x2f, 0x37, 0x8a, 0x67, 0xd3, 0xc4, 0xb5, 0xaf, 0x0f, - 0x10, 0x59, 0x18, 0x2e, 0x4f, 0xb6, 0x5b, 0x85, 0x7c, 0xf4, 0x0b, 0x29, 0xc0, 0xe2, 0xfb, 0x19, - 0xb0, 0xb8, 0x16, 0xd2, 0xc0, 0xb5, 0x11, 0xa1, 0x6e, 0xe8, 0xd7, 0xc8, 0x9a, 0x6b, 0x85, 0xb6, - 0xb3, 0x4e, 0xea, 0xa6, 0x63, 0x06, 0x2c, 0x5b, 0x97, 0xc0, 0x88, 0x83, 0x6d, 0x22, 0xb3, 0x67, - 0x52, 0xc6, 0x74, 0x64, 0x1b, 0xdb, 0x04, 0x71, 0x0e, 0x93, 0x60, 0xc9, 0x22, 0xcf, 0x82, 0x92, - 0xb8, 0x71, 0xec, 0x11, 0xc4, 0x39, 0xf0, 0x32, 0x18, 0xad, 0xbb, 0xbe, 0x8d, 0xc5, 0x3e, 0x8e, - 0xc7, 0x3b, 0xf3, 0x02, 0xa7, 0x22, 0xc9, 0x85, 0xcf, 0x80, 0x09, 0x83, 0xd0, 0x9a, 0x6f, 0x7a, - 0x0c, 0x5a, 0x1f, 0xe1, 0xc2, 0xe7, 0xa4, 0xf0, 0xc4, 0x7a, 0xcc, 0x42, 0x49, 0x39, 0x78, 0x05, - 0xe4, 0x3d, 0xdf, 0x74, 0x7d, 0x33, 0x38, 0xd6, 0x73, 0x4b, 0xda, 0x72, 0xae, 0x3c, 0x23, 0x75, - 0xf2, 0x3b, 0x92, 0x8e, 0x94, 0x04, 0x5c, 0x02, 0xf9, 0x97, 0xaa, 0x95, 0xed, 0x1d, 0x1c, 0xec, - 0xeb, 0xa3, 0x1c, 0x61, 0x84, 0x49, 0xa3, 0xfc, 0x2d, 0x49, 0x2d, 0xfe, 0x3d, 0x03, 0xf4, 0x74, - 0x54, 0xa2, 0x90, 0xc2, 0x17, 0x40, 0x9e, 0x06, 0xac, 0xe2, 0x34, 0x8e, 0x65, 0x4c, 0x9e, 0x88, - 0xc0, 0xaa, 0x92, 0x7e, 0xd2, 0x2a, 0xcc, 0xc7, 0x1a, 0x11, 0x95, 0xc7, 0x43, 0xe9, 0xc2, 0x5f, - 0x6b, 0xe0, 0xdc, 0x21, 0xd9, 0xdb, 0x77, 0xdd, 0x83, 0x35, 0xcb, 0x24, 0x4e, 0xb0, 0xe6, 0x3a, - 0x75, 0xb3, 0x21, 0x73, 0x00, 0xdd, 0x67, 0x0e, 0xbc, 0xda, 0x6d, 0xb9, 0xfc, 0x50, 0xbb, 0x55, - 0x38, 0xd7, 0x83, 0x81, 0x7a, 0xf9, 0x01, 0x77, 0x81, 0x5e, 0x4b, 0x1d, 0x12, 0x59, 0xc0, 0x44, - 0xd9, 0x1a, 0x2f, 0x5f, 0x6c, 0xb7, 0x0a, 0xfa, 0x5a, 0x1f, 0x19, 0xd4, 0x57, 0xbb, 0xf8, 0x83, - 0x6c, 0x3a, 0xbc, 0x89, 0x74, 0x7b, 0x1b, 0xe4, 0xd9, 0x31, 0x36, 0x70, 0x80, 0xe5, 0x41, 0x7c, - 0xf2, 0x74, 0x87, 0x5e, 0xd4, 0x8c, 0x2d, 0x12, 0xe0, 0x32, 0x94, 0x1b, 0x02, 0x62, 0x1a, 0x52, - 0x56, 0xe1, 0xb7, 0xc1, 0x08, 0xf5, 0x48, 0x4d, 0x06, 0xfa, 0xb5, 0xfb, 0x3d, 0x6c, 0x7d, 0x16, - 0x52, 0xf5, 0x48, 0x2d, 0x3e, 0x0b, 0xec, 0x17, 0xe2, 0xb0, 0xf0, 0x5d, 0x0d, 0x8c, 0x52, 0x5e, - 0xa0, 0x64, 0x51, 0x7b, 0x63, 0x58, 0x1e, 0xa4, 0xaa, 0xa0, 0xf8, 0x8d, 0x24, 0x78, 0xf1, 0x5f, - 0x19, 0x70, 0xa9, 0x9f, 0xea, 0x9a, 0xeb, 0x18, 0x62, 0x3b, 0x36, 0xe5, 0xd9, 0x16, 0x99, 0xfe, - 0x4c, 0xf2, 0x6c, 0x9f, 0xb4, 0x0a, 0x8f, 0xdd, 0xd3, 0x40, 0xa2, 0x08, 0x7c, 0x59, 0xad, 0x5b, - 0x14, 0x8a, 0x4b, 0x9d, 0x8e, 0x9d, 0xb4, 0x0a, 0xd3, 0x4a, 0xad, 0xd3, 0x57, 0xd8, 0x04, 0xd0, - 0xc2, 0x34, 0xb8, 0xe1, 0x63, 0x87, 0x0a, 0xb3, 0xa6, 0x4d, 0x64, 0xf8, 0x9e, 0x38, 0x5d, 0x7a, - 0x30, 0x8d, 0xf2, 0x82, 0x84, 0x84, 0xd7, 0xbb, 0xac, 0xa1, 0x1e, 0x08, 0xac, 0x6e, 0xf9, 0x04, - 0x53, 0x55, 0x8a, 0x12, 0x37, 0x0a, 0xa3, 0x22, 0xc9, 0x85, 0x8f, 0x83, 0x31, 0x9b, 0x50, 0x8a, - 0x1b, 0x84, 0xd7, 0x9f, 0xf1, 0xf8, 0x8a, 0xde, 0x12, 0x64, 0x14, 0xf1, 0x59, 0x7f, 0x72, 0xb1, - 0x5f, 0xd4, 0xae, 0x9b, 0x34, 0x80, 0xaf, 0x77, 0x1d, 0x80, 0xd2, 0xe9, 0x56, 0xc8, 0xb4, 0x79, - 0xfa, 0xab, 0xe2, 0x17, 0x51, 0x12, 0xc9, 0xff, 0x2d, 0x90, 0x33, 0x03, 0x62, 0x47, 0x77, 0xf7, - 0xab, 0x43, 0xca, 0xbd, 0xf2, 0x94, 0xf4, 0x21, 0xb7, 0xc9, 0xd0, 0x90, 0x00, 0x2d, 0xfe, 0x2e, - 0x03, 0x1e, 0xe9, 0xa7, 0xc2, 0x2e, 0x14, 0xca, 0x22, 0xee, 0x59, 0xa1, 0x8f, 0x2d, 0x99, 0x71, - 0x2a, 0xe2, 0x3b, 0x9c, 0x8a, 0x24, 0x97, 0x95, 0x7c, 0x6a, 0x3a, 0x8d, 0xd0, 0xc2, 0xbe, 0x4c, - 0x27, 0xb5, 0xea, 0xaa, 0xa4, 0x23, 0x25, 0x01, 0x4b, 0x00, 
0xd0, 0x7d, 0xd7, 0x0f, 0x38, 0x86, - 0xac, 0x5e, 0x67, 0x59, 0x81, 0xa8, 0x2a, 0x2a, 0x4a, 0x48, 0xb0, 0x1b, 0xed, 0xc0, 0x74, 0x0c, - 0xb9, 0xeb, 0xea, 0x14, 0xbf, 0x6c, 0x3a, 0x06, 0xe2, 0x1c, 0x86, 0x6f, 0x99, 0x34, 0x60, 0x14, - 0xb9, 0xe5, 0x1d, 0x51, 0xe7, 0x92, 0x4a, 0x82, 0xe1, 0xd7, 0x58, 0xd5, 0x77, 0x7d, 0x93, 0x50, - 0x7d, 0x34, 0xc6, 0x5f, 0x53, 0x54, 0x94, 0x90, 0x28, 0xfe, 0x33, 0xdf, 0x3f, 0x49, 0x58, 0x29, - 0x81, 0x8f, 0x82, 0x5c, 0xc3, 0x77, 0x43, 0x4f, 0x46, 0x49, 0x45, 0xfb, 0x45, 0x46, 0x44, 0x82, - 0xc7, 0xb2, 0xb2, 0xd9, 0xd1, 0xa6, 0xaa, 0xac, 0x8c, 0x9a, 0xd3, 0x88, 0x0f, 0xbf, 0xa7, 0x81, - 0x9c, 0x23, 0x83, 0xc3, 0x52, 0xee, 0xf5, 0x21, 0xe5, 0x05, 0x0f, 0x6f, 0xec, 0xae, 0x88, 0xbc, - 0x40, 0x86, 0x4f, 0x83, 0x1c, 0xad, 0xb9, 0x1e, 0x91, 0x51, 0x5f, 0x8c, 0x84, 0xaa, 0x8c, 0x78, - 0xd2, 0x2a, 0x4c, 0x45, 0xe6, 0x38, 0x01, 0x09, 0x61, 0xf8, 0x43, 0x0d, 0x80, 0x26, 0xb6, 0x4c, - 0x03, 0xf3, 0x96, 0x21, 0xc7, 0xdd, 0x1f, 0x6c, 0x5a, 0xbf, 0xa2, 0xcc, 0x8b, 0x4d, 0x8b, 0x7f, - 0xa3, 0x04, 0x34, 0x7c, 0x4f, 0x03, 0x93, 0x34, 0xdc, 0xf3, 0xa5, 0x16, 0xe5, 0xcd, 0xc5, 0xc4, - 0xb5, 0x6f, 0x0c, 0xd4, 0x97, 0x6a, 0x02, 0xa0, 0x3c, 0xd3, 0x6e, 0x15, 0x26, 0x93, 0x14, 0xd4, - 0xe1, 0x00, 0xfc, 0x89, 0x06, 0xf2, 0xcd, 0xe8, 0xce, 0x1e, 0xe3, 0x07, 0xfe, 0xcd, 0x21, 0x6d, - 0xac, 0xcc, 0xa8, 0xf8, 0x14, 0xa8, 0x3e, 0x40, 0x79, 0x00, 0xff, 0xa2, 0x01, 0x1d, 0x1b, 0xa2, - 0xc0, 0x63, 0x6b, 0xc7, 0x37, 0x9d, 0x80, 0xf8, 0xa2, 0xdf, 0xa4, 0x7a, 0x9e, 0xbb, 0x37, 0xd8, - 0xbb, 0x30, 0xdd, 0xcb, 0x96, 0x97, 0xa4, 0x77, 0xfa, 0x6a, 0x1f, 0x37, 0x50, 0x5f, 0x07, 0x79, - 0xa2, 0xc5, 0x2d, 0x8d, 0x3e, 0x3e, 0x84, 0x44, 0x8b, 0x7b, 0x29, 0x59, 0x1d, 0xe2, 0x0e, 0x2a, - 0x01, 0x0d, 0x2b, 0x60, 0xce, 0xf3, 0x09, 0x07, 0xb8, 0xe9, 0x1c, 0x38, 0xee, 0xa1, 0xf3, 0x82, - 0x49, 0x2c, 0x83, 0xea, 0x60, 0x49, 0x5b, 0xce, 0x97, 0x2f, 0xb4, 0x5b, 0x85, 0xb9, 0x9d, 0x5e, - 0x02, 0xa8, 0xb7, 0x5e, 0xf1, 0xbd, 0x6c, 0xfa, 0x15, 0x90, 0xee, 0x22, 0xe0, 0x07, 0x62, 0xf5, - 0x22, 0x36, 0x54, 0xd7, 0xf8, 0x6e, 0xbd, 0x3d, 0xa4, 0x64, 0x52, 0x6d, 0x40, 0xdc, 0xc9, 0x29, - 0x12, 0x45, 0x09, 0x3f, 0xe0, 0x2f, 0x35, 0x30, 0x85, 0x6b, 0x35, 0xe2, 0x05, 0xc4, 0x10, 0xc5, - 0x3d, 0xf3, 0x39, 0xd4, 0xaf, 0x39, 0xe9, 0xd5, 0xd4, 0x6a, 0x12, 0x1a, 0x75, 0x7a, 0x02, 0x9f, - 0x07, 0x67, 0x69, 0xe0, 0xfa, 0xc4, 0x48, 0xb5, 0xcd, 0xb0, 0xdd, 0x2a, 0x9c, 0xad, 0x76, 0x70, - 0x50, 0x4a, 0xb2, 0xf8, 0xe9, 0x08, 0x28, 0xdc, 0xe3, 0xa8, 0x9d, 0xe2, 0x61, 0x76, 0x19, 0x8c, - 0xf2, 0xe5, 0x1a, 0x3c, 0x2a, 0xf9, 0x44, 0x2b, 0xc8, 0xa9, 0x48, 0x72, 0xd9, 0x45, 0xc1, 0xf0, - 0x59, 0xfb, 0x92, 0xe5, 0x82, 0xea, 0xa2, 0xa8, 0x0a, 0x32, 0x8a, 0xf8, 0xf0, 0x1d, 0x30, 0x2a, - 0x06, 0x2f, 0xbc, 0x4a, 0x0f, 0xb1, 0xd2, 0x02, 0xee, 0x27, 0x87, 0x42, 0x12, 0xb2, 0xbb, 0xc2, - 0xe6, 0x1e, 0x74, 0x85, 0xbd, 0x6b, 0x49, 0x1b, 0xfd, 0x1f, 0x2f, 0x69, 0xc5, 0x7f, 0x6b, 0xe9, - 0x73, 0x9f, 0x58, 0x6a, 0xb5, 0x86, 0x2d, 0x02, 0xd7, 0xc1, 0x0c, 0x7b, 0xb5, 0x20, 0xe2, 0x59, - 0x66, 0x0d, 0x53, 0xfe, 0x68, 0x16, 0x09, 0xa7, 0xe6, 0x38, 0xd5, 0x14, 0x1f, 0x75, 0x69, 0xc0, - 0x97, 0x00, 0x14, 0x9d, 0x7c, 0x87, 0x1d, 0xd1, 0x94, 0xa8, 0x9e, 0xbc, 0xda, 0x25, 0x81, 0x7a, - 0x68, 0xc1, 0x35, 0x30, 0x6b, 0xe1, 0x3d, 0x62, 0x55, 0x89, 0x45, 0x6a, 0x81, 0xeb, 0x73, 0x53, - 0x62, 0xac, 0x30, 0xd7, 0x6e, 0x15, 0x66, 0xaf, 0xa7, 0x99, 0xa8, 0x5b, 0xbe, 0x78, 0x29, 0x7d, - 0xbc, 0x92, 0x0b, 0x17, 0xef, 0xa3, 0x0f, 0x33, 0x60, 0xa1, 0x7f, 0x66, 0xc0, 0xef, 0xc7, 0xcf, - 0x38, 0xd1, 0xa5, 0xbf, 0x39, 0xac, 0x2c, 0x94, 0xef, 0x38, 0xd0, 0xfd, 0x86, 0x83, 
0xdf, 0x61, - 0x2d, 0x13, 0xb6, 0xa2, 0xc1, 0xd1, 0x1b, 0x43, 0x73, 0x81, 0x81, 0x94, 0xc7, 0x45, 0x37, 0x86, - 0x2d, 0xde, 0x7c, 0x61, 0x8b, 0x14, 0x7f, 0xaf, 0xa5, 0x5f, 0xf2, 0xf1, 0x09, 0x86, 0x3f, 0xd5, - 0xc0, 0xb4, 0xeb, 0x11, 0x67, 0x75, 0x67, 0xf3, 0x95, 0x2f, 0x8a, 0x93, 0x2c, 0x43, 0xb5, 0x7d, - 0x9f, 0x7e, 0xbe, 0x54, 0xad, 0x6c, 0x0b, 0x83, 0x3b, 0xbe, 0xeb, 0xd1, 0xf2, 0xb9, 0x76, 0xab, - 0x30, 0x5d, 0xe9, 0x84, 0x42, 0x69, 0xec, 0xa2, 0x0d, 0xe6, 0x36, 0x8e, 0x02, 0xe2, 0x3b, 0xd8, - 0x5a, 0x77, 0x6b, 0xa1, 0x4d, 0x9c, 0x40, 0x38, 0x9a, 0x9a, 0x3a, 0x69, 0xa7, 0x9c, 0x3a, 0x3d, - 0x02, 0xb2, 0xa1, 0x6f, 0xc9, 0x2c, 0x9e, 0x50, 0x53, 0x55, 0x74, 0x1d, 0x31, 0x7a, 0xf1, 0x12, - 0x18, 0x61, 0x7e, 0xc2, 0x0b, 0x20, 0xeb, 0xe3, 0x43, 0x6e, 0x75, 0xb2, 0x3c, 0xc6, 0x44, 0x10, - 0x3e, 0x44, 0x8c, 0x56, 0xfc, 0x4f, 0x01, 0x4c, 0xa7, 0xd6, 0x02, 0x17, 0x40, 0x46, 0x8d, 0x6a, - 0x81, 0x34, 0x9a, 0xd9, 0x5c, 0x47, 0x19, 0xd3, 0x80, 0xcf, 0xaa, 0xe2, 0x2b, 0x40, 0x0b, 0xaa, - 0x9e, 0x73, 0x2a, 0xeb, 0x91, 0x63, 0x73, 0xcc, 0x91, 0xa8, 0x70, 0x32, 0x1f, 0x48, 0x5d, 0x9e, - 0x12, 0xe1, 0x03, 0xa9, 0x23, 0x46, 0xfb, 0xac, 0x23, 0xb7, 0x68, 0xe6, 0x97, 0x3b, 0xc5, 0xcc, - 0x6f, 0xf4, 0xae, 0x33, 0xbf, 0x47, 0x41, 0x2e, 0x30, 0x03, 0x8b, 0xe8, 0x63, 0x9d, 0x4f, 0x99, - 0x1b, 0x8c, 0x88, 0x04, 0x0f, 0xde, 0x02, 0x63, 0x06, 0xa9, 0xe3, 0xd0, 0x0a, 0xf4, 0x3c, 0x4f, - 0xa1, 0xb5, 0x01, 0xa4, 0x90, 0x18, 0xc8, 0xae, 0x0b, 0xbb, 0x28, 0x02, 0x80, 0x8f, 0x81, 0x31, - 0x1b, 0x1f, 0x99, 0x76, 0x68, 0xf3, 0x26, 0x4f, 0x13, 0x62, 0x5b, 0x82, 0x84, 0x22, 0x1e, 0xab, - 0x8c, 0xe4, 0xa8, 0x66, 0x85, 0xd4, 0x6c, 0x12, 0xc9, 0x94, 0x0d, 0x98, 0xaa, 0x8c, 0x1b, 0x29, - 0x3e, 0xea, 0xd2, 0xe0, 0x60, 0xa6, 0xc3, 0x95, 0x27, 0x12, 0x60, 0x82, 0x84, 0x22, 0x5e, 0x27, - 0x98, 0x94, 0x9f, 0xec, 0x07, 0x26, 0x95, 0xbb, 0x34, 0xe0, 0x17, 0xc0, 0xb8, 0x8d, 0x8f, 0xae, - 0x13, 0xa7, 0x11, 0xec, 0xeb, 0x53, 0x4b, 0xda, 0x72, 0xb6, 0x3c, 0xd5, 0x6e, 0x15, 0xc6, 0xb7, - 0x22, 0x22, 0x8a, 0xf9, 0x5c, 0xd8, 0x74, 0xa4, 0xf0, 0xd9, 0x84, 0x70, 0x44, 0x44, 0x31, 0x9f, - 0x75, 0x10, 0x1e, 0x0e, 0xd8, 0xe1, 0xd2, 0xa7, 0x3b, 0x9f, 0x9a, 0x3b, 0x82, 0x8c, 0x22, 0x3e, - 0x5c, 0x06, 0x79, 0x1b, 0x1f, 0xf1, 0xb1, 0x80, 0x3e, 0xc3, 0xcd, 0xf2, 0xe1, 0xf4, 0x96, 0xa4, - 0x21, 0xc5, 0xe5, 0x92, 0xa6, 0x23, 0x24, 0x67, 0x13, 0x92, 0x92, 0x86, 0x14, 0x97, 0x25, 0x71, - 0xe8, 0x98, 0xb7, 0x43, 0x22, 0x84, 0x21, 0x8f, 0x8c, 0x4a, 0xe2, 0x9b, 0x31, 0x0b, 0x25, 0xe5, - 0xd8, 0xb3, 0xdc, 0x0e, 0xad, 0xc0, 0xf4, 0x2c, 0x52, 0xa9, 0xeb, 0xe7, 0x78, 0xfc, 0x79, 0xe3, - 0xbd, 0xa5, 0xa8, 0x28, 0x21, 0x01, 0x09, 0x18, 0x21, 0x4e, 0x68, 0xeb, 0xe7, 0xf9, 0xc5, 0x3e, - 0x90, 0x14, 0x54, 0x27, 0x67, 0xc3, 0x09, 0x6d, 0xc4, 0xcd, 0xc3, 0x67, 0xc1, 0x94, 0x8d, 0x8f, - 0x58, 0x39, 0x20, 0x7e, 0x60, 0x12, 0xaa, 0xcf, 0xf1, 0xc5, 0xcf, 0xb2, 0x8e, 0x73, 0x2b, 0xc9, - 0x40, 0x9d, 0x72, 0x5c, 0xd1, 0x74, 0x12, 0x8a, 0xf3, 0x09, 0xc5, 0x24, 0x03, 0x75, 0xca, 0xb1, - 0x48, 0xfb, 0xe4, 0x76, 0x68, 0xfa, 0xc4, 0xd0, 0x1f, 0xe2, 0x4d, 0xaa, 0xfc, 0x60, 0x20, 0x68, - 0x48, 0x71, 0x61, 0x33, 0x9a, 0x1f, 0xe9, 0xfc, 0x18, 0xde, 0x1c, 0x6c, 0x25, 0xaf, 0xf8, 0xab, - 0xbe, 0x8f, 0x8f, 0xc5, 0x4d, 0x93, 0x9c, 0x1c, 0x41, 0x0a, 0x72, 0xd8, 0xb2, 0x2a, 0x75, 0xfd, - 0x02, 0x8f, 0xfd, 0xa0, 0x6f, 0x10, 0x55, 0x75, 0x56, 0x19, 0x08, 0x12, 0x58, 0x0c, 0xd4, 0x75, - 0x58, 0x6a, 0x2c, 0x0c, 0x17, 0xb4, 0xc2, 0x40, 0x90, 0xc0, 0xe2, 0x2b, 0x75, 0x8e, 0x2b, 0x75, - 0xfd, 0xe1, 0x21, 0xaf, 0x94, 0x81, 0x20, 0x81, 0x05, 0x4d, 0x90, 0x75, 0xdc, 0x40, 0xbf, 0x38, - 0x94, 0xeb, 
0x99, 0x5f, 0x38, 0xdb, 0x6e, 0x80, 0x18, 0x06, 0xfc, 0x85, 0x06, 0x80, 0x17, 0xa7, - 0xe8, 0x23, 0x03, 0x19, 0x4b, 0xa4, 0x20, 0x4b, 0x71, 0x6e, 0x6f, 0x38, 0x81, 0x7f, 0x1c, 0xbf, - 0x23, 0x13, 0x67, 0x20, 0xe1, 0x05, 0xfc, 0x8d, 0x06, 0xce, 0x27, 0xdb, 0x64, 0xe5, 0xde, 0x22, - 0x8f, 0xc8, 0x8d, 0x41, 0xa7, 0x79, 0xd9, 0x75, 0xad, 0xb2, 0xde, 0x6e, 0x15, 0xce, 0xaf, 0xf6, - 0x40, 0x45, 0x3d, 0x7d, 0x81, 0x7f, 0xd0, 0xc0, 0xac, 0xac, 0xa2, 0x09, 0x0f, 0x0b, 0x3c, 0x80, - 0x64, 0xd0, 0x01, 0x4c, 0xe3, 0x88, 0x38, 0xaa, 0x0f, 0xdd, 0x5d, 0x7c, 0xd4, 0xed, 0x1a, 0xfc, - 0xb3, 0x06, 0x26, 0x0d, 0xe2, 0x11, 0xc7, 0x20, 0x4e, 0x8d, 0xf9, 0xba, 0x34, 0x90, 0xb1, 0x41, - 0xda, 0xd7, 0xf5, 0x04, 0x84, 0x70, 0xb3, 0x24, 0xdd, 0x9c, 0x4c, 0xb2, 0x4e, 0x5a, 0x85, 0xf9, - 0x58, 0x35, 0xc9, 0x41, 0x1d, 0x5e, 0xc2, 0xf7, 0x35, 0x30, 0x1d, 0x6f, 0x80, 0xb8, 0x52, 0x2e, - 0x0d, 0x31, 0x0f, 0x78, 0xfb, 0xba, 0xda, 0x09, 0x88, 0xd2, 0x1e, 0xc0, 0x3f, 0x6a, 0xac, 0x53, - 0x8b, 0xde, 0x7d, 0x54, 0x2f, 0xf2, 0x58, 0xbe, 0x35, 0xf0, 0x58, 0x2a, 0x04, 0x11, 0xca, 0x2b, - 0x71, 0x2b, 0xa8, 0x38, 0x27, 0xad, 0xc2, 0x5c, 0x32, 0x92, 0x8a, 0x81, 0x92, 0x1e, 0xc2, 0x1f, - 0x6b, 0x60, 0x92, 0xc4, 0x1d, 0x37, 0xd5, 0x1f, 0x1d, 0x48, 0x10, 0x7b, 0x36, 0xf1, 0xe2, 0xa5, - 0x9e, 0x60, 0x51, 0xd4, 0x81, 0xcd, 0x3a, 0x48, 0x72, 0x84, 0x6d, 0xcf, 0x22, 0xfa, 0xff, 0x0d, - 0xb8, 0x83, 0xdc, 0x10, 0x76, 0x51, 0x04, 0x00, 0xaf, 0x80, 0xbc, 0x13, 0x5a, 0x16, 0xde, 0xb3, - 0x88, 0xfe, 0x18, 0xef, 0x45, 0xd4, 0x58, 0x74, 0x5b, 0xd2, 0x91, 0x92, 0x80, 0x75, 0xb0, 0x74, - 0xf4, 0xb2, 0xfa, 0x17, 0xa1, 0x9e, 0x83, 0x3b, 0xfd, 0x32, 0xb7, 0xb2, 0xd0, 0x6e, 0x15, 0xe6, - 0x77, 0x7b, 0x8f, 0xf6, 0xee, 0x69, 0x03, 0xbe, 0x06, 0x1e, 0x4e, 0xc8, 0x6c, 0xd8, 0x7b, 0xc4, - 0x30, 0x88, 0x11, 0x3d, 0xdc, 0xf4, 0xff, 0x17, 0xc3, 0xc3, 0xe8, 0x80, 0xef, 0xa6, 0x05, 0xd0, - 0xdd, 0xb4, 0xe1, 0x75, 0x30, 0x9f, 0x60, 0x6f, 0x3a, 0x41, 0xc5, 0xaf, 0x06, 0xbe, 0xe9, 0x34, - 0xf4, 0x65, 0x6e, 0xf7, 0x7c, 0x74, 0x22, 0x77, 0x13, 0x3c, 0xd4, 0x47, 0x07, 0x7e, 0xad, 0xc3, - 0x1a, 0xff, 0x8c, 0x85, 0xbd, 0x97, 0xc9, 0x31, 0xd5, 0x1f, 0xe7, 0xdd, 0x09, 0xdf, 0xec, 0xdd, - 0x04, 0x1d, 0xf5, 0x91, 0x87, 0x5f, 0x05, 0xe7, 0x52, 0x1c, 0xf6, 0x44, 0xd1, 0x9f, 0x10, 0x6f, - 0x0d, 0xd6, 0xcf, 0xee, 0x46, 0x44, 0xd4, 0x4b, 0x72, 0x81, 0xbd, 0x62, 0x53, 0x55, 0x10, 0xce, - 0x80, 0xec, 0x01, 0x91, 0x5f, 0xff, 0x11, 0xfb, 0x13, 0x1a, 0x20, 0xd7, 0xc4, 0x56, 0x18, 0x3d, - 0xc4, 0x07, 0x7c, 0x83, 0x22, 0x61, 0xfc, 0xf9, 0xcc, 0x73, 0xda, 0xc2, 0x07, 0x1a, 0x98, 0xef, - 0x5d, 0x9c, 0x1f, 0xa8, 0x5b, 0xbf, 0xd2, 0xc0, 0x6c, 0x57, 0x1d, 0xee, 0xe1, 0xd1, 0xed, 0x4e, - 0x8f, 0x5e, 0x1b, 0x74, 0x41, 0x15, 0x09, 0xc4, 0xbb, 0xc8, 0xa4, 0x7b, 0x3f, 0xd3, 0xc0, 0x4c, - 0xba, 0xb4, 0x3d, 0xc8, 0x78, 0x15, 0x3f, 0xc8, 0x80, 0xf9, 0xde, 0xcd, 0x2f, 0xf4, 0xd5, 0x2b, - 0x7f, 0x38, 0xd3, 0x92, 0x5e, 0x93, 0xd5, 0x77, 0x35, 0x30, 0x71, 0x4b, 0xc9, 0x45, 0x5f, 0x87, - 0x07, 0x3e, 0xa7, 0x89, 0xee, 0x92, 0x98, 0x41, 0x51, 0x12, 0xb7, 0xf8, 0x27, 0x0d, 0xcc, 0xf5, - 0xbc, 0x24, 0xe1, 0x65, 0x30, 0x8a, 0x2d, 0xcb, 0x3d, 0x14, 0xe3, 0xb6, 0xc4, 0x2c, 0x7b, 0x95, - 0x53, 0x91, 0xe4, 0x26, 0xa2, 0x97, 0xf9, 0xbc, 0xa2, 0x57, 0xfc, 0xab, 0x06, 0x2e, 0xde, 0x2d, - 0x13, 0x1f, 0xc8, 0x96, 0x2e, 0x83, 0xbc, 0x6c, 0x70, 0x8f, 0xf9, 0x76, 0xca, 0x37, 0x9d, 0x2c, - 0x1a, 0xfc, 0x1f, 0xa2, 0xc4, 0x5f, 0xc5, 0x0f, 0x35, 0x30, 0x53, 0x25, 0x7e, 0xd3, 0xac, 0x11, - 0x44, 0xea, 0xc4, 0x27, 0x4e, 0x8d, 0xc0, 0x15, 0x30, 0xce, 0x3f, 0xcb, 0x7a, 0xb8, 0x16, 0x7d, - 0x62, 0x98, 0x95, 0x21, 0x1f, 0xdf, 
0x8e, 0x18, 0x28, 0x96, 0x51, 0x9f, 0x23, 0x32, 0x7d, 0x3f, - 0x47, 0x5c, 0x04, 0x23, 0x5e, 0x3c, 0xac, 0xcd, 0x33, 0x2e, 0x9f, 0xcf, 0x72, 0x2a, 0xe7, 0xba, - 0x7e, 0xc0, 0x27, 0x50, 0x39, 0xc9, 0x75, 0xfd, 0x00, 0x71, 0x6a, 0xf1, 0x6f, 0x1a, 0xe8, 0xf5, - 0xaf, 0x4b, 0xf0, 0x82, 0x18, 0xc2, 0x25, 0x26, 0x5b, 0xd1, 0x00, 0x0e, 0x36, 0xc1, 0x18, 0x15, - 0xab, 0x92, 0x51, 0xaf, 0xdc, 0x67, 0xd4, 0xd3, 0x31, 0x12, 0xb7, 0x7f, 0x44, 0x8d, 0xc0, 0x58, - 0xe0, 0x6b, 0xb8, 0x1c, 0x3a, 0x86, 0x9c, 0xcb, 0x4e, 0x8a, 0xc0, 0xaf, 0xad, 0x0a, 0x1a, 0x52, - 0xdc, 0xf2, 0xd5, 0x8f, 0xee, 0x2c, 0x9e, 0xf9, 0xf8, 0xce, 0xe2, 0x99, 0x4f, 0xee, 0x2c, 0x9e, - 0xf9, 0x6e, 0x7b, 0x51, 0xfb, 0xa8, 0xbd, 0xa8, 0x7d, 0xdc, 0x5e, 0xd4, 0x3e, 0x69, 0x2f, 0x6a, - 0xff, 0x68, 0x2f, 0x6a, 0x3f, 0xff, 0x74, 0xf1, 0xcc, 0x37, 0xc7, 0x24, 0xfe, 0x7f, 0x03, 0x00, - 0x00, 0xff, 0xff, 0x80, 0x3e, 0x52, 0x72, 0x50, 0x2c, 0x00, 0x00, + 0x7c, 0xb3, 0x5f, 0x27, 0xd9, 0x95, 0x93, 0x25, 0x21, 0x21, 0x05, 0x45, 0x59, 0xb6, 0x13, 0x9c, + 0xac, 0x2d, 0xd3, 0xda, 0x4d, 0x0c, 0xf9, 0xd9, 0xd6, 0xb4, 0xe4, 0x59, 0xcf, 0xaf, 0x9d, 0x9e, + 0x91, 0xed, 0x0a, 0x50, 0xfc, 0xa8, 0x14, 0x14, 0x05, 0x84, 0x22, 0xb9, 0x50, 0x05, 0x87, 0x40, + 0x71, 0xe1, 0x00, 0x07, 0x28, 0x2e, 0xf0, 0x07, 0xe4, 0x98, 0xe2, 0x94, 0x03, 0xa5, 0x22, 0xca, + 0x95, 0x23, 0x55, 0x54, 0xf9, 0x44, 0xf5, 0x8f, 0xe9, 0x19, 0x8d, 0xa4, 0x5d, 0x57, 0x56, 0xca, + 0x72, 0xb3, 0xde, 0xaf, 0xcf, 0xeb, 0xd7, 0xaf, 0x5f, 0xbf, 0x7e, 0x63, 0x50, 0x3f, 0x78, 0x96, + 0x96, 0x4c, 0x77, 0xe5, 0x20, 0xdc, 0x23, 0xbe, 0x43, 0x02, 0x42, 0x57, 0x9a, 0xc4, 0x31, 0x5c, + 0x7f, 0x45, 0x32, 0xb0, 0x67, 0x92, 0xa3, 0x80, 0x38, 0xd4, 0x74, 0x1d, 0x7a, 0x15, 0x7b, 0x26, + 0x25, 0x7e, 0x93, 0xf8, 0x2b, 0xde, 0x41, 0x83, 0xf1, 0x68, 0xa7, 0xc0, 0x4a, 0xf3, 0xc9, 0x3d, + 0x12, 0xe0, 0x27, 0x57, 0x1a, 0xc4, 0x21, 0x3e, 0x0e, 0x88, 0x51, 0xf2, 0x7c, 0x37, 0x70, 0xe1, + 0x57, 0x84, 0xb9, 0x52, 0x87, 0xf4, 0x9b, 0xca, 0x5c, 0xc9, 0x3b, 0x68, 0x30, 0x1e, 0xed, 0x14, + 0x28, 0x49, 0x73, 0x0b, 0x57, 0x1b, 0x66, 0xb0, 0x1f, 0xee, 0x95, 0x6a, 0xae, 0xbd, 0xd2, 0x70, + 0x1b, 0xee, 0x0a, 0xb7, 0xba, 0x17, 0xd6, 0xf9, 0x2f, 0xfe, 0x83, 0xff, 0x25, 0xd0, 0x16, 0x9e, + 0x8a, 0x9d, 0xb7, 0x71, 0x6d, 0xdf, 0x74, 0x88, 0x7f, 0x1c, 0x7b, 0x6c, 0x93, 0x00, 0xaf, 0x34, + 0xbb, 0x7c, 0x5c, 0x58, 0xe9, 0xa7, 0xe5, 0x87, 0x4e, 0x60, 0xda, 0xa4, 0x4b, 0xe1, 0x8b, 0x77, + 0x53, 0xa0, 0xb5, 0x7d, 0x62, 0xe3, 0xb4, 0x5e, 0xf1, 0x44, 0x03, 0xb3, 0x6b, 0xae, 0xd3, 0x24, + 0x3e, 0x5b, 0x25, 0x22, 0xb7, 0x43, 0x42, 0x03, 0x58, 0x06, 0xd9, 0xd0, 0x34, 0x74, 0x6d, 0x49, + 0x5b, 0x1e, 0x2f, 0x3f, 0xf1, 0x61, 0xab, 0x70, 0xa6, 0xdd, 0x2a, 0x64, 0x6f, 0x6e, 0xae, 0x9f, + 0xb4, 0x0a, 0x97, 0xfa, 0x21, 0x05, 0xc7, 0x1e, 0xa1, 0xa5, 0x9b, 0x9b, 0xeb, 0x88, 0x29, 0xc3, + 0x17, 0xc0, 0xac, 0x41, 0xa8, 0xe9, 0x13, 0x63, 0x75, 0x67, 0xf3, 0x65, 0x61, 0x5f, 0xcf, 0x70, + 0x8b, 0x17, 0xa4, 0xc5, 0xd9, 0xf5, 0xb4, 0x00, 0xea, 0xd6, 0x81, 0xbb, 0x60, 0xcc, 0xdd, 0xbb, + 0x45, 0x6a, 0x01, 0xd5, 0xb3, 0x4b, 0xd9, 0xe5, 0x89, 0x6b, 0x57, 0x4b, 0xf1, 0x0e, 0x2a, 0x17, + 0xf8, 0xb6, 0xc9, 0xc5, 0x96, 0x10, 0x3e, 0xdc, 0x88, 0x76, 0xae, 0x3c, 0x2d, 0xd1, 0xc6, 0x2a, + 0xc2, 0x0a, 0x8a, 0xcc, 0x15, 0x7f, 0x9b, 0x01, 0x30, 0xb9, 0x78, 0xea, 0xb9, 0x0e, 0x25, 0x03, + 0x59, 0x3d, 0x05, 0x33, 0x35, 0x6e, 0x39, 0x20, 0x86, 0xc4, 0xd5, 0x33, 0x9f, 0xc5, 0x7b, 0x5d, + 0xe2, 0xcf, 0xac, 0xa5, 0xcc, 0xa1, 0x2e, 0x00, 0x78, 0x03, 0x8c, 0xfa, 0x84, 0x86, 0x56, 0xa0, + 0x67, 0x97, 0xb4, 0xe5, 0x89, 0x6b, 0x57, 0xfa, 0x42, 0xf1, 0xfc, 0x66, 0xc9, 0x57, 0x6a, 
0x3e, + 0x59, 0xaa, 0x06, 0x38, 0x08, 0x69, 0xf9, 0xac, 0x44, 0x1a, 0x45, 0xdc, 0x06, 0x92, 0xb6, 0x8a, + 0x3f, 0xca, 0x80, 0x99, 0x64, 0x94, 0x9a, 0x26, 0x39, 0x84, 0x87, 0x60, 0xcc, 0x17, 0xc9, 0xc2, + 0xe3, 0x34, 0x71, 0x6d, 0xa7, 0x74, 0x4f, 0xc7, 0xaa, 0xd4, 0x95, 0x84, 0xe5, 0x09, 0xb6, 0x67, + 0xf2, 0x07, 0x8a, 0xd0, 0xe0, 0xdb, 0x20, 0xef, 0xcb, 0x8d, 0xe2, 0xd9, 0x34, 0x71, 0xed, 0xeb, + 0x03, 0x44, 0x16, 0x86, 0xcb, 0x93, 0xed, 0x56, 0x21, 0x1f, 0xfd, 0x42, 0x0a, 0xb0, 0xf8, 0x5e, + 0x06, 0x2c, 0xae, 0x85, 0x34, 0x70, 0x6d, 0x44, 0xa8, 0x1b, 0xfa, 0x35, 0xb2, 0xe6, 0x5a, 0xa1, + 0xed, 0xac, 0x93, 0xba, 0xe9, 0x98, 0x01, 0xcb, 0xd6, 0x25, 0x30, 0xe2, 0x60, 0x9b, 0xc8, 0xec, + 0x99, 0x94, 0x31, 0x1d, 0xd9, 0xc6, 0x36, 0x41, 0x9c, 0xc3, 0x24, 0x58, 0xb2, 0xc8, 0xb3, 0xa0, + 0x24, 0x6e, 0x1c, 0x7b, 0x04, 0x71, 0x0e, 0xbc, 0x0c, 0x46, 0xeb, 0xae, 0x6f, 0x63, 0xb1, 0x8f, + 0xe3, 0xf1, 0xce, 0x3c, 0xcf, 0xa9, 0x48, 0x72, 0xe1, 0xd3, 0x60, 0xc2, 0x20, 0xb4, 0xe6, 0x9b, + 0x1e, 0x83, 0xd6, 0x47, 0xb8, 0xf0, 0x39, 0x29, 0x3c, 0xb1, 0x1e, 0xb3, 0x50, 0x52, 0x0e, 0x5e, + 0x01, 0x79, 0xcf, 0x37, 0x5d, 0xdf, 0x0c, 0x8e, 0xf5, 0xdc, 0x92, 0xb6, 0x9c, 0x2b, 0xcf, 0x48, + 0x9d, 0xfc, 0x8e, 0xa4, 0x23, 0x25, 0x01, 0x97, 0x40, 0xfe, 0xc5, 0x6a, 0x65, 0x7b, 0x07, 0x07, + 0xfb, 0xfa, 0x28, 0x47, 0x18, 0x61, 0xd2, 0x28, 0x7f, 0x4b, 0x52, 0x8b, 0xff, 0xc8, 0x00, 0x3d, + 0x1d, 0x95, 0x28, 0xa4, 0xf0, 0x79, 0x90, 0xa7, 0x01, 0xab, 0x38, 0x8d, 0x63, 0x19, 0x93, 0xc7, + 0x22, 0xb0, 0xaa, 0xa4, 0x9f, 0xb4, 0x0a, 0xf3, 0xb1, 0x46, 0x44, 0xe5, 0xf1, 0x50, 0xba, 0xf0, + 0xd7, 0x1a, 0x38, 0x77, 0x48, 0xf6, 0xf6, 0x5d, 0xf7, 0x60, 0xcd, 0x32, 0x89, 0x13, 0xac, 0xb9, + 0x4e, 0xdd, 0x6c, 0xc8, 0x1c, 0x40, 0xf7, 0x98, 0x03, 0xaf, 0x74, 0x5b, 0x2e, 0x3f, 0xd0, 0x6e, + 0x15, 0xce, 0xf5, 0x60, 0xa0, 0x5e, 0x7e, 0xc0, 0x5d, 0xa0, 0xd7, 0x52, 0x87, 0x44, 0x16, 0x30, + 0x51, 0xb6, 0xc6, 0xcb, 0x17, 0xdb, 0xad, 0x82, 0xbe, 0xd6, 0x47, 0x06, 0xf5, 0xd5, 0x2e, 0xfe, + 0x20, 0x9b, 0x0e, 0x6f, 0x22, 0xdd, 0xde, 0x02, 0x79, 0x76, 0x8c, 0x0d, 0x1c, 0x60, 0x79, 0x10, + 0x9f, 0x38, 0xdd, 0xa1, 0x17, 0x35, 0x63, 0x8b, 0x04, 0xb8, 0x0c, 0xe5, 0x86, 0x80, 0x98, 0x86, + 0x94, 0x55, 0xf8, 0x6d, 0x30, 0x42, 0x3d, 0x52, 0x93, 0x81, 0x7e, 0xf5, 0x5e, 0x0f, 0x5b, 0x9f, + 0x85, 0x54, 0x3d, 0x52, 0x8b, 0xcf, 0x02, 0xfb, 0x85, 0x38, 0x2c, 0x7c, 0x47, 0x03, 0xa3, 0x94, + 0x17, 0x28, 0x59, 0xd4, 0x5e, 0x1f, 0x96, 0x07, 0xa9, 0x2a, 0x28, 0x7e, 0x23, 0x09, 0x5e, 0xfc, + 0x77, 0x06, 0x5c, 0xea, 0xa7, 0xba, 0xe6, 0x3a, 0x86, 0xd8, 0x8e, 0x4d, 0x79, 0xb6, 0x45, 0xa6, + 0x3f, 0x9d, 0x3c, 0xdb, 0x27, 0xad, 0xc2, 0x23, 0x77, 0x35, 0x90, 0x28, 0x02, 0x5f, 0x52, 0xeb, + 0x16, 0x85, 0xe2, 0x52, 0xa7, 0x63, 0x27, 0xad, 0xc2, 0xb4, 0x52, 0xeb, 0xf4, 0x15, 0x36, 0x01, + 0xb4, 0x30, 0x0d, 0x6e, 0xf8, 0xd8, 0xa1, 0xc2, 0xac, 0x69, 0x13, 0x19, 0xbe, 0xc7, 0x4e, 0x97, + 0x1e, 0x4c, 0xa3, 0xbc, 0x20, 0x21, 0xe1, 0xf5, 0x2e, 0x6b, 0xa8, 0x07, 0x02, 0xab, 0x5b, 0x3e, + 0xc1, 0x54, 0x95, 0xa2, 0xc4, 0x8d, 0xc2, 0xa8, 0x48, 0x72, 0xe1, 0xa3, 0x60, 0xcc, 0x26, 0x94, + 0xe2, 0x06, 0xe1, 0xf5, 0x67, 0x3c, 0xbe, 0xa2, 0xb7, 0x04, 0x19, 0x45, 0x7c, 0xd6, 0x9f, 0x5c, + 0xec, 0x17, 0xb5, 0xeb, 0x26, 0x0d, 0xe0, 0x6b, 0x5d, 0x07, 0xa0, 0x74, 0xba, 0x15, 0x32, 0x6d, + 0x9e, 0xfe, 0xaa, 0xf8, 0x45, 0x94, 0x44, 0xf2, 0x7f, 0x0b, 0xe4, 0xcc, 0x80, 0xd8, 0xd1, 0xdd, + 0xfd, 0xca, 0x90, 0x72, 0xaf, 0x3c, 0x25, 0x7d, 0xc8, 0x6d, 0x32, 0x34, 0x24, 0x40, 0x8b, 0xbf, + 0xcb, 0x80, 0x87, 0xfa, 0xa9, 0xb0, 0x0b, 0x85, 0xb2, 0x88, 0x7b, 0x56, 0xe8, 0x63, 0x4b, 0x66, + 0x9c, 0x8a, 0xf8, 
0x0e, 0xa7, 0x22, 0xc9, 0x65, 0x25, 0x9f, 0x9a, 0x4e, 0x23, 0xb4, 0xb0, 0x2f, + 0xd3, 0x49, 0xad, 0xba, 0x2a, 0xe9, 0x48, 0x49, 0xc0, 0x12, 0x00, 0x74, 0xdf, 0xf5, 0x03, 0x8e, + 0x21, 0xab, 0xd7, 0x59, 0x56, 0x20, 0xaa, 0x8a, 0x8a, 0x12, 0x12, 0xec, 0x46, 0x3b, 0x30, 0x1d, + 0x43, 0xee, 0xba, 0x3a, 0xc5, 0x2f, 0x99, 0x8e, 0x81, 0x38, 0x87, 0xe1, 0x5b, 0x26, 0x0d, 0x18, + 0x45, 0x6e, 0x79, 0x47, 0xd4, 0xb9, 0xa4, 0x92, 0x60, 0xf8, 0x35, 0x56, 0xf5, 0x5d, 0xdf, 0x24, + 0x54, 0x1f, 0x8d, 0xf1, 0xd7, 0x14, 0x15, 0x25, 0x24, 0x8a, 0xff, 0xca, 0xf7, 0x4f, 0x12, 0x56, + 0x4a, 0xe0, 0xc3, 0x20, 0xd7, 0xf0, 0xdd, 0xd0, 0x93, 0x51, 0x52, 0xd1, 0x7e, 0x81, 0x11, 0x91, + 0xe0, 0xb1, 0xac, 0x6c, 0x76, 0xb4, 0xa9, 0x2a, 0x2b, 0xa3, 0xe6, 0x34, 0xe2, 0xc3, 0xef, 0x69, + 0x20, 0xe7, 0xc8, 0xe0, 0xb0, 0x94, 0x7b, 0x6d, 0x48, 0x79, 0xc1, 0xc3, 0x1b, 0xbb, 0x2b, 0x22, + 0x2f, 0x90, 0xe1, 0x53, 0x20, 0x47, 0x6b, 0xae, 0x47, 0x64, 0xd4, 0x17, 0x23, 0xa1, 0x2a, 0x23, + 0x9e, 0xb4, 0x0a, 0x53, 0x91, 0x39, 0x4e, 0x40, 0x42, 0x18, 0xfe, 0x50, 0x03, 0xa0, 0x89, 0x2d, + 0xd3, 0xc0, 0xbc, 0x65, 0xc8, 0x71, 0xf7, 0x07, 0x9b, 0xd6, 0x2f, 0x2b, 0xf3, 0x62, 0xd3, 0xe2, + 0xdf, 0x28, 0x01, 0x0d, 0xdf, 0xd5, 0xc0, 0x24, 0x0d, 0xf7, 0x7c, 0xa9, 0x45, 0x79, 0x73, 0x31, + 0x71, 0xed, 0x1b, 0x03, 0xf5, 0xa5, 0x9a, 0x00, 0x28, 0xcf, 0xb4, 0x5b, 0x85, 0xc9, 0x24, 0x05, + 0x75, 0x38, 0x00, 0x7f, 0xa2, 0x81, 0x7c, 0x33, 0xba, 0xb3, 0xc7, 0xf8, 0x81, 0x7f, 0x63, 0x48, + 0x1b, 0x2b, 0x33, 0x2a, 0x3e, 0x05, 0xaa, 0x0f, 0x50, 0x1e, 0xc0, 0xbf, 0x6a, 0x40, 0xc7, 0x86, + 0x28, 0xf0, 0xd8, 0xda, 0xf1, 0x4d, 0x27, 0x20, 0xbe, 0xe8, 0x37, 0xa9, 0x9e, 0xe7, 0xee, 0x0d, + 0xf6, 0x2e, 0x4c, 0xf7, 0xb2, 0xe5, 0x25, 0xe9, 0x9d, 0xbe, 0xda, 0xc7, 0x0d, 0xd4, 0xd7, 0x41, + 0x9e, 0x68, 0x71, 0x4b, 0xa3, 0x8f, 0x0f, 0x21, 0xd1, 0xe2, 0x5e, 0x4a, 0x56, 0x87, 0xb8, 0x83, + 0x4a, 0x40, 0xc3, 0x0a, 0x98, 0xf3, 0x7c, 0xc2, 0x01, 0x6e, 0x3a, 0x07, 0x8e, 0x7b, 0xe8, 0x3c, + 0x6f, 0x12, 0xcb, 0xa0, 0x3a, 0x58, 0xd2, 0x96, 0xf3, 0xe5, 0x0b, 0xed, 0x56, 0x61, 0x6e, 0xa7, + 0x97, 0x00, 0xea, 0xad, 0x57, 0x7c, 0x37, 0x9b, 0x7e, 0x05, 0xa4, 0xbb, 0x08, 0xf8, 0xbe, 0x58, + 0xbd, 0x88, 0x0d, 0xd5, 0x35, 0xbe, 0x5b, 0x6f, 0x0d, 0x29, 0x99, 0x54, 0x1b, 0x10, 0x77, 0x72, + 0x8a, 0x44, 0x51, 0xc2, 0x0f, 0xf8, 0x4b, 0x0d, 0x4c, 0xe1, 0x5a, 0x8d, 0x78, 0x01, 0x31, 0x44, + 0x71, 0xcf, 0x7c, 0x0e, 0xf5, 0x6b, 0x4e, 0x7a, 0x35, 0xb5, 0x9a, 0x84, 0x46, 0x9d, 0x9e, 0xc0, + 0xe7, 0xc0, 0x59, 0x1a, 0xb8, 0x3e, 0x31, 0x52, 0x6d, 0x33, 0x6c, 0xb7, 0x0a, 0x67, 0xab, 0x1d, + 0x1c, 0x94, 0x92, 0x2c, 0x7e, 0x3a, 0x02, 0x0a, 0x77, 0x39, 0x6a, 0xa7, 0x78, 0x98, 0x5d, 0x06, + 0xa3, 0x7c, 0xb9, 0x06, 0x8f, 0x4a, 0x3e, 0xd1, 0x0a, 0x72, 0x2a, 0x92, 0x5c, 0x76, 0x51, 0x30, + 0x7c, 0xd6, 0xbe, 0x64, 0xb9, 0xa0, 0xba, 0x28, 0xaa, 0x82, 0x8c, 0x22, 0x3e, 0x7c, 0x1b, 0x8c, + 0x8a, 0xc1, 0x0b, 0xaf, 0xd2, 0x43, 0xac, 0xb4, 0x80, 0xfb, 0xc9, 0xa1, 0x90, 0x84, 0xec, 0xae, + 0xb0, 0xb9, 0xfb, 0x5d, 0x61, 0xef, 0x58, 0xd2, 0x46, 0xff, 0xc7, 0x4b, 0x5a, 0xf1, 0x3f, 0x5a, + 0xfa, 0xdc, 0x27, 0x96, 0x5a, 0xad, 0x61, 0x8b, 0xc0, 0x75, 0x30, 0xc3, 0x5e, 0x2d, 0x88, 0x78, + 0x96, 0x59, 0xc3, 0x94, 0x3f, 0x9a, 0x45, 0xc2, 0xa9, 0x39, 0x4e, 0x35, 0xc5, 0x47, 0x5d, 0x1a, + 0xf0, 0x45, 0x00, 0x45, 0x27, 0xdf, 0x61, 0x47, 0x34, 0x25, 0xaa, 0x27, 0xaf, 0x76, 0x49, 0xa0, + 0x1e, 0x5a, 0x70, 0x0d, 0xcc, 0x5a, 0x78, 0x8f, 0x58, 0x55, 0x62, 0x91, 0x5a, 0xe0, 0xfa, 0xdc, + 0x94, 0x18, 0x2b, 0xcc, 0xb5, 0x5b, 0x85, 0xd9, 0xeb, 0x69, 0x26, 0xea, 0x96, 0x2f, 0x5e, 0x4a, + 0x1f, 0xaf, 0xe4, 0xc2, 0xc5, 0xfb, 0xe8, 
0x83, 0x0c, 0x58, 0xe8, 0x9f, 0x19, 0xf0, 0xfb, 0xf1, + 0x33, 0x4e, 0x74, 0xe9, 0x6f, 0x0c, 0x2b, 0x0b, 0xe5, 0x3b, 0x0e, 0x74, 0xbf, 0xe1, 0xe0, 0x77, + 0x58, 0xcb, 0x84, 0xad, 0x68, 0x70, 0xf4, 0xfa, 0xd0, 0x5c, 0x60, 0x20, 0xe5, 0x71, 0xd1, 0x8d, + 0x61, 0x8b, 0x37, 0x5f, 0xd8, 0x22, 0xc5, 0xdf, 0x6b, 0xe9, 0x97, 0x7c, 0x7c, 0x82, 0xe1, 0x4f, + 0x35, 0x30, 0xed, 0x7a, 0xc4, 0x59, 0xdd, 0xd9, 0x7c, 0xf9, 0x0b, 0xe2, 0x24, 0xcb, 0x50, 0x6d, + 0xdf, 0xa3, 0x9f, 0x2f, 0x56, 0x2b, 0xdb, 0xc2, 0xe0, 0x8e, 0xef, 0x7a, 0xb4, 0x7c, 0xae, 0xdd, + 0x2a, 0x4c, 0x57, 0x3a, 0xa1, 0x50, 0x1a, 0xbb, 0x68, 0x83, 0xb9, 0x8d, 0xa3, 0x80, 0xf8, 0x0e, + 0xb6, 0xd6, 0xdd, 0x5a, 0x68, 0x13, 0x27, 0x10, 0x8e, 0xa6, 0xa6, 0x4e, 0xda, 0x29, 0xa7, 0x4e, + 0x0f, 0x81, 0x6c, 0xe8, 0x5b, 0x32, 0x8b, 0x27, 0xd4, 0x54, 0x15, 0x5d, 0x47, 0x8c, 0x5e, 0xbc, + 0x04, 0x46, 0x98, 0x9f, 0xf0, 0x02, 0xc8, 0xfa, 0xf8, 0x90, 0x5b, 0x9d, 0x2c, 0x8f, 0x31, 0x11, + 0x84, 0x0f, 0x11, 0xa3, 0x15, 0xff, 0xb2, 0x04, 0xa6, 0x53, 0x6b, 0x81, 0x0b, 0x20, 0xa3, 0x46, + 0xb5, 0x40, 0x1a, 0xcd, 0x6c, 0xae, 0xa3, 0x8c, 0x69, 0xc0, 0x67, 0x54, 0xf1, 0x15, 0xa0, 0x05, + 0x55, 0xcf, 0x39, 0x95, 0xf5, 0xc8, 0xb1, 0x39, 0xe6, 0x48, 0x54, 0x38, 0x99, 0x0f, 0xa4, 0x2e, + 0x4f, 0x89, 0xf0, 0x81, 0xd4, 0x11, 0xa3, 0x7d, 0xd6, 0x91, 0x5b, 0x34, 0xf3, 0xcb, 0x9d, 0x62, + 0xe6, 0x37, 0x7a, 0xc7, 0x99, 0xdf, 0xc3, 0x20, 0x17, 0x98, 0x81, 0x45, 0xf4, 0xb1, 0xce, 0xa7, + 0xcc, 0x0d, 0x46, 0x44, 0x82, 0x07, 0x6f, 0x81, 0x31, 0x83, 0xd4, 0x71, 0x68, 0x05, 0x7a, 0x9e, + 0xa7, 0xd0, 0xda, 0x00, 0x52, 0x48, 0x0c, 0x64, 0xd7, 0x85, 0x5d, 0x14, 0x01, 0xc0, 0x47, 0xc0, + 0x98, 0x8d, 0x8f, 0x4c, 0x3b, 0xb4, 0x79, 0x93, 0xa7, 0x09, 0xb1, 0x2d, 0x41, 0x42, 0x11, 0x8f, + 0x55, 0x46, 0x72, 0x54, 0xb3, 0x42, 0x6a, 0x36, 0x89, 0x64, 0xca, 0x06, 0x4c, 0x55, 0xc6, 0x8d, + 0x14, 0x1f, 0x75, 0x69, 0x70, 0x30, 0xd3, 0xe1, 0xca, 0x13, 0x09, 0x30, 0x41, 0x42, 0x11, 0xaf, + 0x13, 0x4c, 0xca, 0x4f, 0xf6, 0x03, 0x93, 0xca, 0x5d, 0x1a, 0xf0, 0x71, 0x30, 0x6e, 0xe3, 0xa3, + 0xeb, 0xc4, 0x69, 0x04, 0xfb, 0xfa, 0xd4, 0x92, 0xb6, 0x9c, 0x2d, 0x4f, 0xb5, 0x5b, 0x85, 0xf1, + 0xad, 0x88, 0x88, 0x62, 0x3e, 0x17, 0x36, 0x1d, 0x29, 0x7c, 0x36, 0x21, 0x1c, 0x11, 0x51, 0xcc, + 0x67, 0x1d, 0x84, 0x87, 0x03, 0x76, 0xb8, 0xf4, 0xe9, 0xce, 0xa7, 0xe6, 0x8e, 0x20, 0xa3, 0x88, + 0x0f, 0x97, 0x41, 0xde, 0xc6, 0x47, 0x7c, 0x2c, 0xa0, 0xcf, 0x70, 0xb3, 0x7c, 0x38, 0xbd, 0x25, + 0x69, 0x48, 0x71, 0xb9, 0xa4, 0xe9, 0x08, 0xc9, 0xd9, 0x84, 0xa4, 0xa4, 0x21, 0xc5, 0x65, 0x49, + 0x1c, 0x3a, 0xe6, 0xed, 0x90, 0x08, 0x61, 0xc8, 0x23, 0xa3, 0x92, 0xf8, 0x66, 0xcc, 0x42, 0x49, + 0x39, 0xf6, 0x2c, 0xb7, 0x43, 0x2b, 0x30, 0x3d, 0x8b, 0x54, 0xea, 0xfa, 0x39, 0x1e, 0x7f, 0xde, + 0x78, 0x6f, 0x29, 0x2a, 0x4a, 0x48, 0x40, 0x02, 0x46, 0x88, 0x13, 0xda, 0xfa, 0x79, 0x7e, 0xb1, + 0x0f, 0x24, 0x05, 0xd5, 0xc9, 0xd9, 0x70, 0x42, 0x1b, 0x71, 0xf3, 0xf0, 0x19, 0x30, 0x65, 0xe3, + 0x23, 0x56, 0x0e, 0x88, 0x1f, 0x98, 0x84, 0xea, 0x73, 0x7c, 0xf1, 0xb3, 0xac, 0xe3, 0xdc, 0x4a, + 0x32, 0x50, 0xa7, 0x1c, 0x57, 0x34, 0x9d, 0x84, 0xe2, 0x7c, 0x42, 0x31, 0xc9, 0x40, 0x9d, 0x72, + 0x2c, 0xd2, 0x3e, 0xb9, 0x1d, 0x9a, 0x3e, 0x31, 0xf4, 0x07, 0x78, 0x93, 0x2a, 0x3f, 0x18, 0x08, + 0x1a, 0x52, 0x5c, 0xd8, 0x8c, 0xe6, 0x47, 0x3a, 0x3f, 0x86, 0x37, 0x07, 0x5b, 0xc9, 0x2b, 0xfe, + 0xaa, 0xef, 0xe3, 0x63, 0x71, 0xd3, 0x24, 0x27, 0x47, 0x90, 0x82, 0x1c, 0xb6, 0xac, 0x4a, 0x5d, + 0xbf, 0xc0, 0x63, 0x3f, 0xe8, 0x1b, 0x44, 0x55, 0x9d, 0x55, 0x06, 0x82, 0x04, 0x16, 0x03, 0x75, + 0x1d, 0x96, 0x1a, 0x0b, 0xc3, 0x05, 0xad, 0x30, 0x10, 0x24, 0xb0, 
0xf8, 0x4a, 0x9d, 0xe3, 0x4a, + 0x5d, 0x7f, 0x70, 0xc8, 0x2b, 0x65, 0x20, 0x48, 0x60, 0x41, 0x13, 0x64, 0x1d, 0x37, 0xd0, 0x2f, + 0x0e, 0xe5, 0x7a, 0xe6, 0x17, 0xce, 0xb6, 0x1b, 0x20, 0x86, 0x01, 0x7f, 0xa1, 0x01, 0xe0, 0xc5, + 0x29, 0xfa, 0xd0, 0x40, 0xc6, 0x12, 0x29, 0xc8, 0x52, 0x9c, 0xdb, 0x1b, 0x4e, 0xe0, 0x1f, 0xc7, + 0xef, 0xc8, 0xc4, 0x19, 0x48, 0x78, 0x01, 0x7f, 0xa3, 0x81, 0xf3, 0xc9, 0x36, 0x59, 0xb9, 0xb7, + 0xc8, 0x23, 0x72, 0x63, 0xd0, 0x69, 0x5e, 0x76, 0x5d, 0xab, 0xac, 0xb7, 0x5b, 0x85, 0xf3, 0xab, + 0x3d, 0x50, 0x51, 0x4f, 0x5f, 0xe0, 0x1f, 0x34, 0x30, 0x2b, 0xab, 0x68, 0xc2, 0xc3, 0x02, 0x0f, + 0x20, 0x19, 0x74, 0x00, 0xd3, 0x38, 0x22, 0x8e, 0xea, 0x43, 0x77, 0x17, 0x1f, 0x75, 0xbb, 0x06, + 0xff, 0xac, 0x81, 0x49, 0x83, 0x78, 0xc4, 0x31, 0x88, 0x53, 0x63, 0xbe, 0x2e, 0x0d, 0x64, 0x6c, + 0x90, 0xf6, 0x75, 0x3d, 0x01, 0x21, 0xdc, 0x2c, 0x49, 0x37, 0x27, 0x93, 0xac, 0x93, 0x56, 0x61, + 0x3e, 0x56, 0x4d, 0x72, 0x50, 0x87, 0x97, 0xf0, 0x3d, 0x0d, 0x4c, 0xc7, 0x1b, 0x20, 0xae, 0x94, + 0x4b, 0x43, 0xcc, 0x03, 0xde, 0xbe, 0xae, 0x76, 0x02, 0xa2, 0xb4, 0x07, 0xf0, 0x8f, 0x1a, 0xeb, + 0xd4, 0xa2, 0x77, 0x1f, 0xd5, 0x8b, 0x3c, 0x96, 0x6f, 0x0e, 0x3c, 0x96, 0x0a, 0x41, 0x84, 0xf2, + 0x4a, 0xdc, 0x0a, 0x2a, 0xce, 0x49, 0xab, 0x30, 0x97, 0x8c, 0xa4, 0x62, 0xa0, 0xa4, 0x87, 0xf0, + 0xc7, 0x1a, 0x98, 0x24, 0x71, 0xc7, 0x4d, 0xf5, 0x87, 0x07, 0x12, 0xc4, 0x9e, 0x4d, 0xbc, 0x78, + 0xa9, 0x27, 0x58, 0x14, 0x75, 0x60, 0xb3, 0x0e, 0x92, 0x1c, 0x61, 0xdb, 0xb3, 0x88, 0xfe, 0x7f, + 0x03, 0xee, 0x20, 0x37, 0x84, 0x5d, 0x14, 0x01, 0xc0, 0x2b, 0x20, 0xef, 0x84, 0x96, 0x85, 0xf7, + 0x2c, 0xa2, 0x3f, 0xc2, 0x7b, 0x11, 0x35, 0x16, 0xdd, 0x96, 0x74, 0xa4, 0x24, 0x60, 0x1d, 0x2c, + 0x1d, 0xbd, 0xa4, 0xfe, 0x45, 0xa8, 0xe7, 0xe0, 0x4e, 0xbf, 0xcc, 0xad, 0x2c, 0xb4, 0x5b, 0x85, + 0xf9, 0xdd, 0xde, 0xa3, 0xbd, 0xbb, 0xda, 0x80, 0xaf, 0x82, 0x07, 0x13, 0x32, 0x1b, 0xf6, 0x1e, + 0x31, 0x0c, 0x62, 0x44, 0x0f, 0x37, 0xfd, 0xff, 0xc5, 0xf0, 0x30, 0x3a, 0xe0, 0xbb, 0x69, 0x01, + 0x74, 0x27, 0x6d, 0x78, 0x1d, 0xcc, 0x27, 0xd8, 0x9b, 0x4e, 0x50, 0xf1, 0xab, 0x81, 0x6f, 0x3a, + 0x0d, 0x7d, 0x99, 0xdb, 0x3d, 0x1f, 0x9d, 0xc8, 0xdd, 0x04, 0x0f, 0xf5, 0xd1, 0x81, 0x5f, 0xeb, + 0xb0, 0xc6, 0x3f, 0x63, 0x61, 0xef, 0x25, 0x72, 0x4c, 0xf5, 0x47, 0x79, 0x77, 0xc2, 0x37, 0x7b, + 0x37, 0x41, 0x47, 0x7d, 0xe4, 0xe1, 0x57, 0xc1, 0xb9, 0x14, 0x87, 0x3d, 0x51, 0xf4, 0xc7, 0xc4, + 0x5b, 0x83, 0xf5, 0xb3, 0xbb, 0x11, 0x11, 0xf5, 0x92, 0x84, 0x5f, 0x06, 0x30, 0x41, 0xde, 0xc2, + 0x1e, 0xd7, 0x7f, 0x5c, 0x3c, 0x7b, 0xd8, 0x8e, 0xee, 0x4a, 0x1a, 0xea, 0x21, 0xb7, 0xc0, 0xde, + 0xc0, 0xa9, 0x1a, 0x0a, 0x67, 0x40, 0xf6, 0x80, 0xc8, 0xff, 0x1d, 0x40, 0xec, 0x4f, 0x68, 0x80, + 0x5c, 0x13, 0x5b, 0x61, 0xf4, 0x8c, 0x1f, 0xf0, 0xfd, 0x8b, 0x84, 0xf1, 0xe7, 0x32, 0xcf, 0x6a, + 0x0b, 0xef, 0x6b, 0x60, 0xbe, 0x77, 0x69, 0xbf, 0xaf, 0x6e, 0xfd, 0x4a, 0x03, 0xb3, 0x5d, 0x55, + 0xbc, 0x87, 0x47, 0xb7, 0x3b, 0x3d, 0x7a, 0x75, 0xd0, 0xe5, 0x58, 0xa4, 0x1f, 0xef, 0x41, 0x93, + 0xee, 0xfd, 0x4c, 0x03, 0x33, 0xe9, 0xc2, 0x78, 0x3f, 0xe3, 0x55, 0x7c, 0x3f, 0x03, 0xe6, 0x7b, + 0xb7, 0xce, 0xd0, 0x57, 0x33, 0x82, 0xe1, 0xcc, 0x5a, 0x7a, 0xcd, 0x65, 0xdf, 0xd1, 0xc0, 0xc4, + 0x2d, 0x25, 0x17, 0x7d, 0x5b, 0x1e, 0xf8, 0x94, 0x27, 0xba, 0x89, 0x62, 0x06, 0x45, 0x49, 0xdc, + 0xe2, 0x9f, 0x34, 0x30, 0xd7, 0xf3, 0x8a, 0x85, 0x97, 0xc1, 0x28, 0xb6, 0x2c, 0xf7, 0x50, 0x0c, + 0xeb, 0x12, 0x93, 0xf0, 0x55, 0x4e, 0x45, 0x92, 0x9b, 0x88, 0x5e, 0xe6, 0xf3, 0x8a, 0x5e, 0xf1, + 0x6f, 0x1a, 0xb8, 0x78, 0xa7, 0x4c, 0xbc, 0x2f, 0x5b, 0xba, 0x0c, 0xf2, 0xb2, 0x3d, 0x3e, 
0xe6, + 0xdb, 0x29, 0x8b, 0x9d, 0x2c, 0x1a, 0xfc, 0xdf, 0xa9, 0xc4, 0x5f, 0xc5, 0x0f, 0x34, 0x30, 0x53, + 0x25, 0x7e, 0xd3, 0xac, 0x11, 0x44, 0xea, 0xc4, 0x27, 0x4e, 0x8d, 0xc0, 0x15, 0x30, 0xce, 0x3f, + 0xea, 0x7a, 0xb8, 0x16, 0x7d, 0xa0, 0x98, 0x95, 0x21, 0x1f, 0xdf, 0x8e, 0x18, 0x28, 0x96, 0x51, + 0x1f, 0x33, 0x32, 0x7d, 0x3f, 0x66, 0x5c, 0x04, 0x23, 0x5e, 0x3c, 0xea, 0xcd, 0x33, 0x2e, 0x9f, + 0xee, 0x72, 0x2a, 0xe7, 0xba, 0x7e, 0xc0, 0xe7, 0x57, 0x39, 0xc9, 0x75, 0xfd, 0x00, 0x71, 0x6a, + 0xf1, 0xef, 0x1a, 0xe8, 0xf5, 0x8f, 0x4f, 0xf0, 0x82, 0x18, 0xe1, 0x25, 0xe6, 0x62, 0xd1, 0xf8, + 0x0e, 0x36, 0xc1, 0x18, 0x15, 0xab, 0x92, 0x51, 0xaf, 0xdc, 0x63, 0xd4, 0xd3, 0x31, 0x12, 0xbd, + 0x43, 0x44, 0x8d, 0xc0, 0x58, 0xe0, 0x6b, 0xb8, 0x1c, 0x3a, 0x86, 0x9c, 0xea, 0x4e, 0x8a, 0xc0, + 0xaf, 0xad, 0x0a, 0x1a, 0x52, 0xdc, 0xf2, 0xd5, 0x0f, 0x3f, 0x59, 0x3c, 0xf3, 0xd1, 0x27, 0x8b, + 0x67, 0x3e, 0xfe, 0x64, 0xf1, 0xcc, 0x77, 0xdb, 0x8b, 0xda, 0x87, 0xed, 0x45, 0xed, 0xa3, 0xf6, + 0xa2, 0xf6, 0x71, 0x7b, 0x51, 0xfb, 0x67, 0x7b, 0x51, 0xfb, 0xf9, 0xa7, 0x8b, 0x67, 0xbe, 0x39, + 0x26, 0xf1, 0xff, 0x1b, 0x00, 0x00, 0xff, 0xff, 0x47, 0x22, 0xb1, 0x79, 0x8e, 0x2c, 0x00, 0x00, } func (m *ConversionRequest) Marshal() (dAtA []byte, err error) { @@ -1892,6 +1893,15 @@ func (m *JSONSchemaProps) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.XMapType != nil { + i -= len(*m.XMapType) + copy(dAtA[i:], *m.XMapType) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.XMapType))) + i-- + dAtA[i] = 0x2 + i-- + dAtA[i] = 0xda + } if m.XListType != nil { i -= len(*m.XListType) copy(dAtA[i:], *m.XListType) @@ -3135,6 +3145,10 @@ func (m *JSONSchemaProps) Size() (n int) { l = len(*m.XListType) n += 2 + l + sovGenerated(uint64(l)) } + if m.XMapType != nil { + l = len(*m.XMapType) + n += 2 + l + sovGenerated(uint64(l)) + } return n } @@ -3602,6 +3616,7 @@ func (this *JSONSchemaProps) String() string { `XIntOrString:` + fmt.Sprintf("%v", this.XIntOrString) + `,`, `XListMapKeys:` + fmt.Sprintf("%v", this.XListMapKeys) + `,`, `XListType:` + valueToStringGenerated(this.XListType) + `,`, + `XMapType:` + valueToStringGenerated(this.XMapType) + `,`, `}`, }, "") return s @@ -8188,6 +8203,39 @@ func (m *JSONSchemaProps) Unmarshal(dAtA []byte) error { s := string(dAtA[iNdEx:postIndex]) m.XListType = &s iNdEx = postIndex + case 43: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field XMapType", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := string(dAtA[iNdEx:postIndex]) + m.XMapType = &s + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.proto b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.proto index 12ecc7abb4..705ca07995 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.proto +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/generated.proto @@ -411,6 +411,32 @@ 
message JSONSchemaProps { optional string type = 5; + // format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: + // + // - bsonobjectid: a bson object ID, i.e. a 24 characters hex string + // - uri: an URI as parsed by Golang net/url.ParseRequestURI + // - email: an email address as parsed by Golang net/mail.ParseAddress + // - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. + // - ipv4: an IPv4 IP as parsed by Golang net.ParseIP + // - ipv6: an IPv6 IP as parsed by Golang net.ParseIP + // - cidr: a CIDR as parsed by Golang net.ParseCIDR + // - mac: a MAC address as parsed by Golang net.ParseMAC + // - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" + // - isbn10: an ISBN10 number string like "0321751043" + // - isbn13: an ISBN13 number string like "978-0321751041" + // - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$ with any non digit characters mixed in + // - ssn: a U.S. social security number following the regex ^\\d{3}[- ]?\\d{2}[- ]?\\d{4}$ + // - hexcolor: an hexadecimal color code like "#FFFFFF: following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ + // - rgbcolor: an RGB color code like rgb like "rgb(255,255,2559" + // - byte: base64 encoded binary data + // - password: any kind of string + // - date: a date string like "2006-01-02" as defined by full-date in RFC3339 + // - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format + // - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. optional string format = 6; optional string title = 7; @@ -537,6 +563,18 @@ message JSONSchemaProps { // Defaults to atomic for arrays. // +optional optional string xKubernetesListType = 42; + + // x-kubernetes-map-type annotates an object to further describe its topology. + // This extension must only be used when type is object and may have 2 possible values: + // + // 1) `granular`: + // These maps are actual maps (key-value pairs) and each fields are independent + // from each other (they can each be manipulated by separate actors). This is + // the default behaviour for all maps. + // 2) `atomic`: the list is treated as a single entity, like a scalar. + // Atomic maps will be entirely replaced when updated. 
+ // +optional + optional string xKubernetesMapType = 43; } // JSONSchemaPropsOrArray represents a value that can either be a JSONSchemaProps diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/register.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/register.go index 97bc5431cc..ac807211b7 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/register.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/register.go @@ -38,7 +38,7 @@ func Resource(resource string) schema.GroupResource { } var ( - SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes, addDefaultingFuncs, addConversionFuncs) + SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes, addDefaultingFuncs) localSchemeBuilder = &SchemeBuilder AddToScheme = localSchemeBuilder.AddToScheme ) @@ -58,5 +58,5 @@ func init() { // We only register manually written functions here. The registration of the // generated functions takes place in the generated files. The separation // makes the code compile even when the generated files are missing. - localSchemeBuilder.Register(addDefaultingFuncs, addConversionFuncs) + localSchemeBuilder.Register(addDefaultingFuncs) } diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/types_jsonschema.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/types_jsonschema.go index d71a5a02cf..b51a324996 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/types_jsonschema.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/types_jsonschema.go @@ -23,8 +23,36 @@ type JSONSchemaProps struct { Ref *string `json:"$ref,omitempty" protobuf:"bytes,3,opt,name=ref"` Description string `json:"description,omitempty" protobuf:"bytes,4,opt,name=description"` Type string `json:"type,omitempty" protobuf:"bytes,5,opt,name=type"` - Format string `json:"format,omitempty" protobuf:"bytes,6,opt,name=format"` - Title string `json:"title,omitempty" protobuf:"bytes,7,opt,name=title"` + + // format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: + // + // - bsonobjectid: a bson object ID, i.e. a 24 characters hex string + // - uri: an URI as parsed by Golang net/url.ParseRequestURI + // - email: an email address as parsed by Golang net/mail.ParseAddress + // - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. 
+ // - ipv4: an IPv4 IP as parsed by Golang net.ParseIP + // - ipv6: an IPv6 IP as parsed by Golang net.ParseIP + // - cidr: a CIDR as parsed by Golang net.ParseCIDR + // - mac: a MAC address as parsed by Golang net.ParseMAC + // - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ + // - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ + // - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" + // - isbn10: an ISBN10 number string like "0321751043" + // - isbn13: an ISBN13 number string like "978-0321751041" + // - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$ with any non digit characters mixed in + // - ssn: a U.S. social security number following the regex ^\\d{3}[- ]?\\d{2}[- ]?\\d{4}$ + // - hexcolor: an hexadecimal color code like "#FFFFFF: following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ + // - rgbcolor: an RGB color code like rgb like "rgb(255,255,2559" + // - byte: base64 encoded binary data + // - password: any kind of string + // - date: a date string like "2006-01-02" as defined by full-date in RFC3339 + // - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format + // - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. + Format string `json:"format,omitempty" protobuf:"bytes,6,opt,name=format"` + + Title string `json:"title,omitempty" protobuf:"bytes,7,opt,name=title"` // default is a default value for undefined object fields. // Defaulting is a beta feature under the CustomResourceDefaulting feature gate. // CustomResourceDefinitions with defaults must be created using the v1 (or newer) CustomResourceDefinition API. @@ -118,6 +146,18 @@ type JSONSchemaProps struct { // Defaults to atomic for arrays. // +optional XListType *string `json:"x-kubernetes-list-type,omitempty" protobuf:"bytes,42,opt,name=xKubernetesListType"` + + // x-kubernetes-map-type annotates an object to further describe its topology. + // This extension must only be used when type is object and may have 2 possible values: + // + // 1) `granular`: + // These maps are actual maps (key-value pairs) and each fields are independent + // from each other (they can each be manipulated by separate actors). This is + // the default behaviour for all maps. + // 2) `atomic`: the list is treated as a single entity, like a scalar. + // Atomic maps will be entirely replaced when updated. + // +optional + XMapType *string `json:"x-kubernetes-map-type,omitempty" protobuf:"bytes,43,opt,name=xKubernetesMapType"` } // JSON represents any valid JSON value. 
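The hunks above vendor the new `x-kubernetes-map-type` extension (`XMapType`, protobuf field 43) for the v1beta1 `JSONSchemaProps` type, wired through the generated marshal, size, string, and unmarshal paths alongside the existing `x-kubernetes-list-type`. A minimal sketch of how a schema author might set the new field when building a CRD schema in Go follows; it is illustrative only and not part of the vendored patch, and the `labels` property name is an arbitrary assumption.

```go
package main

import (
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

func main() {
	// "atomic" means the whole map is replaced on update; "granular" (the
	// default) lets separate actors manage individual keys independently.
	atomic := "atomic"

	schema := apiextensionsv1beta1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
			// "labels" is a hypothetical property used only for illustration.
			"labels": {
				Type:     "object",
				XMapType: &atomic,
				AdditionalProperties: &apiextensionsv1beta1.JSONSchemaPropsOrBool{
					Allows: true,
					Schema: &apiextensionsv1beta1.JSONSchemaProps{Type: "string"},
				},
			},
		},
	}

	fmt.Println(*schema.Properties["labels"].XMapType) // prints "atomic"
}
```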
diff --git a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/zz_generated.conversion.go b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/zz_generated.conversion.go index 64073df08f..95d430c52e 100644 --- a/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/zz_generated.conversion.go +++ b/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1/zz_generated.conversion.go @@ -176,26 +176,11 @@ func RegisterConversions(s *runtime.Scheme) error { }); err != nil { return err } - if err := s.AddGeneratedConversionFunc((*JSON)(nil), (*apiextensions.JSON)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta1_JSON_To_apiextensions_JSON(a.(*JSON), b.(*apiextensions.JSON), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*apiextensions.JSON)(nil), (*JSON)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_apiextensions_JSON_To_v1beta1_JSON(a.(*apiextensions.JSON), b.(*JSON), scope) - }); err != nil { - return err - } if err := s.AddGeneratedConversionFunc((*JSONSchemaProps)(nil), (*apiextensions.JSONSchemaProps)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1beta1_JSONSchemaProps_To_apiextensions_JSONSchemaProps(a.(*JSONSchemaProps), b.(*apiextensions.JSONSchemaProps), scope) }); err != nil { return err } - if err := s.AddGeneratedConversionFunc((*apiextensions.JSONSchemaProps)(nil), (*JSONSchemaProps)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_apiextensions_JSONSchemaProps_To_v1beta1_JSONSchemaProps(a.(*apiextensions.JSONSchemaProps), b.(*JSONSchemaProps), scope) - }); err != nil { - return err - } if err := s.AddGeneratedConversionFunc((*JSONSchemaPropsOrArray)(nil), (*apiextensions.JSONSchemaPropsOrArray)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1beta1_JSONSchemaPropsOrArray_To_apiextensions_JSONSchemaPropsOrArray(a.(*JSONSchemaPropsOrArray), b.(*apiextensions.JSONSchemaPropsOrArray), scope) }); err != nil { @@ -945,6 +930,7 @@ func autoConvert_v1beta1_JSONSchemaProps_To_apiextensions_JSONSchemaProps(in *JS out.XIntOrString = in.XIntOrString out.XListMapKeys = *(*[]string)(unsafe.Pointer(&in.XListMapKeys)) out.XListType = (*string)(unsafe.Pointer(in.XListType)) + out.XMapType = (*string)(unsafe.Pointer(in.XMapType)) return nil } @@ -1132,6 +1118,7 @@ func autoConvert_apiextensions_JSONSchemaProps_To_v1beta1_JSONSchemaProps(in *ap out.XIntOrString = in.XIntOrString out.XListMapKeys = *(*[]string)(unsafe.Pointer(&in.XListMapKeys)) out.XListType = (*string)(unsafe.Pointer(in.XListType)) + out.XMapType = (*string)(unsafe.Pointer(in.XMapType)) return nil } diff --git a/vendor/knative.dev/pkg/apis/duck/v1/kresource_type.go b/vendor/knative.dev/pkg/apis/duck/v1/kresource_type.go new file mode 100644 index 0000000000..b64977fcb5 --- /dev/null +++ b/vendor/knative.dev/pkg/apis/duck/v1/kresource_type.go @@ -0,0 +1,102 @@ +/* +Copyright 2020 The Knative Authors + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1 + +import ( + "time" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + + "knative.dev/pkg/apis" +) + +// KRShaped is an interface for retrieving the duck elements of an arbitraty resource. +type KRShaped interface { + metav1.ObjectMetaAccessor + + GetTypeMeta() *metav1.TypeMeta + + GetStatus() *Status + + GetTopLevelConditionType() apis.ConditionType +} + +// Asserts KResource conformance with KRShaped +var _ KRShaped = (*KResource)(nil) + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object + +// KResource is a skeleton type wrapping Conditions in the manner we expect +// resource writers defining compatible resources to embed it. We will +// typically use this type to deserialize Conditions ObjectReferences and +// access the Conditions data. This is not a real resource. +type KResource struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Status Status `json:"status"` +} + +// Populate implements duck.Populatable +func (t *KResource) Populate() { + t.Status.ObservedGeneration = 42 + t.Status.Conditions = Conditions{{ + // Populate ALL fields + Type: "Birthday", + Status: corev1.ConditionTrue, + LastTransitionTime: apis.VolatileTime{Inner: metav1.NewTime(time.Date(1984, 02, 28, 18, 52, 00, 00, time.UTC))}, + Reason: "Celebrate", + Message: "n3wScott, find your party hat :tada:", + }} +} + +// GetListType implements apis.Listable +func (*KResource) GetListType() runtime.Object { + return &KResourceList{} +} + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object + +// KResourceList is a list of KResource resources +type KResourceList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata"` + + Items []KResource `json:"items"` +} + +// GetTypeMeta retrieves the ObjectMeta of the KResource. Implements the KRShaped interface. +func (t *KResource) GetTypeMeta() *metav1.TypeMeta { + return &t.TypeMeta +} + +// GetStatus retrieves the status of the KResource. Implements the KRShaped interface. +func (t *KResource) GetStatus() *Status { + return &t.Status +} + +// GetTopLevelConditionType retrieves the happy condition of this resource. Implements the KRShaped interface. +func (t *KResource) GetTopLevelConditionType() apis.ConditionType { + // Note: KResources are unmarshalled from existing resources. This will only work properly for resources that + // have already been initialized to their type. + if cond := t.Status.GetCondition(apis.ConditionSucceeded); cond != nil { + return apis.ConditionSucceeded + } + return apis.ConditionReady +} diff --git a/vendor/knative.dev/pkg/apis/duck/v1/status_types.go b/vendor/knative.dev/pkg/apis/duck/v1/status_types.go index 2165e78385..9186e961f4 100644 --- a/vendor/knative.dev/pkg/apis/duck/v1/status_types.go +++ b/vendor/knative.dev/pkg/apis/duck/v1/status_types.go @@ -18,11 +18,6 @@ package v1 import ( "context" - "time" - - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/runtime" "knative.dev/pkg/apis" "knative.dev/pkg/apis/duck" @@ -36,19 +31,6 @@ type Conditions apis.Conditions // Conditions is an Implementable "duck type". 
var _ duck.Implementable = (*Conditions)(nil) -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// KResource is a skeleton type wrapping Conditions in the manner we expect -// resource writers defining compatible resources to embed it. We will -// typically use this type to deserialize Conditions ObjectReferences and -// access the Conditions data. This is not a real resource. -type KResource struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata,omitempty"` - - Status Status `json:"status"` -} - // Status shows how we expect folks to embed Conditions in // their Status field. // WARNING: Adding fields to this struct will add them to all Knative resources. @@ -126,31 +108,3 @@ func (source *Status) ConvertTo(ctx context.Context, sink *Status, predicates .. sink.SetConditions(conditions) } - -// Populate implements duck.Populatable -func (t *KResource) Populate() { - t.Status.ObservedGeneration = 42 - t.Status.Conditions = Conditions{{ - // Populate ALL fields - Type: "Birthday", - Status: corev1.ConditionTrue, - LastTransitionTime: apis.VolatileTime{Inner: metav1.NewTime(time.Date(1984, 02, 28, 18, 52, 00, 00, time.UTC))}, - Reason: "Celebrate", - Message: "n3wScott, find your party hat :tada:", - }} -} - -// GetListType implements apis.Listable -func (*KResource) GetListType() runtime.Object { - return &KResourceList{} -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// KResourceList is a list of KResource resources -type KResourceList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - - Items []KResource `json:"items"` -} diff --git a/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/comment_parser.go b/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/comment_parser.go new file mode 100644 index 0000000000..ada0dff3ec --- /dev/null +++ b/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/comment_parser.go @@ -0,0 +1,75 @@ +/* +Copyright 2020 The Knative Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ +package generators + +import "strings" + +// Adapted from the k8s.io comment parser https://github.com/kubernetes/gengo/blob/master/types/comments.go + +// ExtractCommentTags parses comments for lines of the form: +// +// 'marker' + ':' "key=value,key2=value2". +// +// Values are optional; empty map is the default. A tag can be specified more than +// one time and all values are returned. If the resulting map has an entry for +// a key, the value (a slice) is guaranteed to have at least 1 element. +// +// Example: if you pass "+" for 'marker', and the following lines are in +// the comments: +// +foo:key=value1,key2=value2 +// +bar +// +// Then this function will return: +// map[string]map[string]string{"foo":{"key":value1","key2":"value2"}, "bar": nil} +// +// Users are not expected to repeat values. 
+func ExtractCommentTags(marker string, lines []string) map[string]map[string]string { + out := map[string]map[string]string{} + for _, line := range lines { + line = strings.TrimSpace(line) + if len(line) == 0 || !strings.HasPrefix(line, marker) { + continue + } + + options := strings.SplitN(line[len(marker):], ":", 2) + if len(options) == 2 { + vals := strings.Split(options[1], ",") + + opts := out[options[0]] + if opts == nil { + opts = make(map[string]string, len(vals)) + } + + for _, pair := range vals { + if kv := strings.SplitN(pair, "=", 2); len(kv) == 2 { + opts[kv[0]] = kv[1] + } else if kv[0] != "" { + opts[kv[0]] = "" + } + } + if len(opts) == 0 { + out[options[0]] = nil + } else { + out[options[0]] = opts + } + } else if len(options) == 1 && options[0] != "" { + if _, has := out[options[0]]; !has { + out[options[0]] = nil + } + } + } + return out +} diff --git a/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/packages.go b/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/packages.go index ce8b6dbc68..a53dc8b02c 100644 --- a/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/packages.go +++ b/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/packages.go @@ -188,22 +188,27 @@ func MustParseClientGenTags(lines []string) Tags { return ret } -func extractReconcilerClassTag(t *types.Type) (string, bool) { +func extractCommentTags(t *types.Type) map[string]map[string]string { comments := append(append([]string{}, t.SecondClosestCommentLines...), t.CommentLines...) - values := types.ExtractCommentTags("+", comments)["genreconciler:class"] - for _, v := range values { - if len(v) == 0 { - continue - } - return v, true + return ExtractCommentTags("+", comments) +} + +func extractReconcilerClassTag(tags map[string]map[string]string) (string, bool) { + vals, ok := tags["genreconciler"] + if !ok { + return "", false } - return "", false + classname, has := vals["class"] + return classname, has } -func isNonNamespaced(t *types.Type) bool { - comments := append(append([]string{}, t.SecondClosestCommentLines...), t.CommentLines...) - _, nonNamespaced := types.ExtractCommentTags("+", comments)["genclient:nonNamespaced"] - return nonNamespaced +func isNonNamespaced(tags map[string]map[string]string) bool { + vals, has := tags["genclient"] + if !has { + return false + } + _, has = vals["nonNamespaced"] + return has } func vendorless(p string) string { @@ -416,8 +421,9 @@ func reconcilerPackages(basePackage string, groupPkgName string, gv clientgentyp // Fix for golang iterator bug. 
t := t - reconcilerClass, hasReconcilerClass := extractReconcilerClassTag(t) - nonNamespaced := isNonNamespaced(t) + extracted := extractCommentTags(t) + reconcilerClass, hasReconcilerClass := extractReconcilerClassTag(extracted) + nonNamespaced := isNonNamespaced(extracted) packagePath := filepath.Join(packagePath, strings.ToLower(t.Name.Name)) diff --git a/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/reconciler_controller.go b/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/reconciler_controller.go index 16ec669356..bf3ca04fe8 100644 --- a/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/reconciler_controller.go +++ b/vendor/knative.dev/pkg/codegen/cmd/injection-gen/generators/reconciler_controller.go @@ -185,29 +185,9 @@ func NewImpl(ctx {{.contextContext|raw}}, r Interface{{if .hasClass}}, classValu {{.type|lowercaseSingular}}Informer := {{.informerGet|raw}}(ctx) - recorder := {{.controllerGetEventRecorder|raw}}(ctx) - if recorder == nil { - // Create event broadcaster - logger.Debug("Creating event broadcaster") - eventBroadcaster := {{.recordNewBroadcaster|raw}}() - watches := []{{.watchInterface|raw}}{ - eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), - eventBroadcaster.StartRecordingToSink( - &{{.typedcorev1EventSinkImpl|raw}}{Interface: {{.kubeclientGet|raw}}(ctx).CoreV1().Events("")}), - } - recorder = eventBroadcaster.NewRecorder({{.schemeScheme|raw}}, {{.corev1EventSource|raw}}{Component: defaultControllerAgentName}) - go func() { - <-ctx.Done() - for _, w := range watches { - w.Stop() - } - }() - } - rec := &reconcilerImpl{ Client: {{.clientGet|raw}}(ctx), Lister: {{.type|lowercaseSingular}}Informer.Lister(), - Recorder: recorder, reconciler: r, finalizerName: defaultFinalizerName, {{if .hasClass}}classValue: classValue,{{end}} @@ -217,6 +197,7 @@ func NewImpl(ctx {{.contextContext|raw}}, r Interface{{if .hasClass}}, classValu queueName := {{.fmtSprintf|raw}}("%s.%s", {{.stringsReplaceAll|raw}}(t.PkgPath(), "/", "-"), t.Name()) impl := {{.controllerNewImpl|raw}}(rec, logger, queueName) + agentName := defaultControllerAgentName // Pass impl to the options. Save any optional results. 
for _, fn := range optionsFns { @@ -227,11 +208,41 @@ func NewImpl(ctx {{.contextContext|raw}}, r Interface{{if .hasClass}}, classValu if opts.FinalizerName != "" { rec.finalizerName = opts.FinalizerName } + if opts.AgentName != "" { + agentName = opts.AgentName + } } + rec.Recorder = createRecorder(ctx, agentName) + return impl } +func createRecorder(ctx context.Context, agentName string) record.EventRecorder { + logger := {{.loggingFromContext|raw}}(ctx) + + recorder := {{.controllerGetEventRecorder|raw}}(ctx) + if recorder == nil { + // Create event broadcaster + logger.Debug("Creating event broadcaster") + eventBroadcaster := {{.recordNewBroadcaster|raw}}() + watches := []{{.watchInterface|raw}}{ + eventBroadcaster.StartLogging(logger.Named("event-broadcaster").Infof), + eventBroadcaster.StartRecordingToSink( + &{{.typedcorev1EventSinkImpl|raw}}{Interface: {{.kubeclientGet|raw}}(ctx).CoreV1().Events("")}), + } + recorder = eventBroadcaster.NewRecorder({{.schemeScheme|raw}}, {{.corev1EventSource|raw}}{Component: agentName}) + go func() { + <-ctx.Done() + for _, w := range watches { + w.Stop() + } + }() + } + + return recorder +} + func init() { {{.schemeAddToScheme|raw}}({{.schemeScheme|raw}}) } diff --git a/vendor/knative.dev/pkg/controller/controller.go b/vendor/knative.dev/pkg/controller/controller.go index 166d109c83..4713335d18 100644 --- a/vendor/knative.dev/pkg/controller/controller.go +++ b/vendor/knative.dev/pkg/controller/controller.go @@ -96,7 +96,7 @@ func Filter(gvk schema.GroupVersionKind) func(obj interface{}) bool { // cache.FilteringResourceEventHandler that filter based on the // schema.GroupVersionKind of the controlling resources. // -// Deprecated: Use FilterControlledByGVK instead. +// Deprecated: Use FilterControllerGVK instead. func FilterGroupVersionKind(gvk schema.GroupVersionKind) func(obj interface{}) bool { return FilterControllerGVK(gvk) } @@ -122,7 +122,7 @@ func FilterControllerGVK(gvk schema.GroupVersionKind) func(obj interface{}) bool // cache.FilteringResourceEventHandler that filter based on the // schema.GroupKind of the controlling resources. // -// Deprecated: Use FilterControlledByGK instead +// Deprecated: Use FilterControllerGK instead func FilterGroupKind(gk schema.GroupKind) func(obj interface{}) bool { return FilterControllerGK(gk) } diff --git a/vendor/knative.dev/pkg/controller/options.go b/vendor/knative.dev/pkg/controller/options.go index 5b7b8a4058..6853839dce 100644 --- a/vendor/knative.dev/pkg/controller/options.go +++ b/vendor/knative.dev/pkg/controller/options.go @@ -27,6 +27,10 @@ type Options struct { // FinalizerName is the name of the finalizer this reconciler uses. This // overrides a default finalizer name assigned by the generator if needed. FinalizerName string + + // AgentName is the name of the agent this reconciler uses. This overrides + // the default controller's agent name. + AgentName string } // OptionsFn is a callback method signature that accepts an Impl and returns diff --git a/vendor/knative.dev/pkg/hack/generate-knative.sh b/vendor/knative.dev/pkg/hack/generate-knative.sh index cbd957077a..931e0a425c 100644 --- a/vendor/knative.dev/pkg/hack/generate-knative.sh +++ b/vendor/knative.dev/pkg/hack/generate-knative.sh @@ -46,13 +46,6 @@ APIS_PKG="$3" GROUPS_WITH_VERSIONS="$4" shift 4 -( - # To support running this script from anywhere, we have to first cd into this directory - # so we can install the tools. 
- cd $(dirname "${0}") - go install ../codegen/cmd/injection-gen -) - function codegen::join() { local IFS="$1"; shift; echo "$*"; } # enumerate group versions @@ -89,7 +82,7 @@ if grep -qw "injection" <<<"${GENS}"; then # Clear old injection rm -rf ${OUTPUT_PKG} - ${GOPATH}/bin/injection-gen \ + go run knative.dev/pkg/codegen/cmd/injection-gen \ --input-dirs $(codegen::join , "${FQ_APIS[@]}") \ --versioned-clientset-package ${VERSIONED_CLIENTSET_PKG} \ --external-versions-informers-package ${EXTERNAL_INFORMER_PKG} \ diff --git a/vendor/knative.dev/pkg/hack/update-codegen.sh b/vendor/knative.dev/pkg/hack/update-codegen.sh index 108d8e6468..d049c42cbc 100644 --- a/vendor/knative.dev/pkg/hack/update-codegen.sh +++ b/vendor/knative.dev/pkg/hack/update-codegen.sh @@ -24,8 +24,6 @@ source $(dirname $0)/../vendor/knative.dev/test-infra/scripts/library.sh CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${REPO_ROOT_DIR}; ls -d -1 $(dirname $0)/../vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)} -go install $(dirname $0)/../vendor/k8s.io/code-generator/cmd/deepcopy-gen - # generate the code with: # --output-base because this script should also be able to run inside the vendor dir of # k8s.io/kubernetes. The output-base is needed for the generators to output into the vendor dir @@ -64,7 +62,7 @@ ${CODEGEN_PKG}/generate-groups.sh "deepcopy" \ --go-header-file ${REPO_ROOT_DIR}/hack/boilerplate/boilerplate.go.txt # Depends on generate-groups.sh to install bin/deepcopy-gen -${GOPATH}/bin/deepcopy-gen --input-dirs \ +go run k8s.io/code-generator/cmd/deepcopy-gen --input-dirs \ $(echo \ knative.dev/pkg/apis \ knative.dev/pkg/tracker \ diff --git a/vendor/knative.dev/pkg/metrics/config.go b/vendor/knative.dev/pkg/metrics/config.go index 4403ddb3d8..71161baa1f 100644 --- a/vendor/knative.dev/pkg/metrics/config.go +++ b/vendor/knative.dev/pkg/metrics/config.go @@ -29,6 +29,7 @@ import ( "go.opencensus.io/stats" "go.uber.org/zap" + corev1 "k8s.io/api/core/v1" "knative.dev/pkg/metrics/metricskey" ) @@ -88,9 +89,8 @@ type metricsConfig struct { // writing the metrics to the stats.RecordWithOptions interface. recorder func(context.Context, []stats.Measurement, ...stats.Options) error - // secretFetcher provides access for fetching Kubernetes Secrets from an - // informer cache. - secretFetcher SecretFetcher + // secret contains credentials for an exporter to use for authentication. + secret *corev1.Secret // ---- OpenCensus specific below ---- // collectorAddress is the address of the collector, if not `localhost:55678` @@ -162,10 +162,6 @@ func (mc *metricsConfig) record(ctx context.Context, mss []stats.Measurement, ro func createMetricsConfig(ops ExporterOptions, logger *zap.SugaredLogger) (*metricsConfig, error) { var mc metricsConfig - // We don't check if this is `nil` right now, because this is a transition step. - // Eventually, this should be a startup check. 
- mc.secretFetcher = ops.Secrets - if ops.Domain == "" { return nil, errors.New("metrics domain cannot be empty") } @@ -205,6 +201,13 @@ func createMetricsConfig(ops ExporterOptions, logger *zap.SugaredLogger) (*metri if mc.requireSecure, err = strconv.ParseBool(isSecure); err != nil { return nil, fmt.Errorf("invalid %s value %q", CollectorSecureKey, isSecure) } + + if mc.requireSecure { + mc.secret, err = getOpenCensusSecret(ops.Component, ops.Secrets) + if err != nil { + return nil, err + } + } } } @@ -265,6 +268,15 @@ func createMetricsConfig(ops ExporterOptions, logger *zap.SugaredLogger) (*metri return stats.RecordWithOptions(ctx, append(ros, stats.WithMeasurements(mss...))...) } } + + if scc.UseSecret { + secret, err := getStackdriverSecret(ops.Secrets) + if err != nil { + return nil, err + } + + mc.secret = secret + } } // If reporting period is specified, use the value from the configuration. diff --git a/vendor/knative.dev/pkg/metrics/exporter.go b/vendor/knative.dev/pkg/metrics/exporter.go index f3f0006e4f..b8f4fd12f0 100644 --- a/vendor/knative.dev/pkg/metrics/exporter.go +++ b/vendor/knative.dev/pkg/metrics/exporter.go @@ -121,6 +121,7 @@ func UpdateExporterFromConfigMapWithOpts(opts ExporterOptions, logger *zap.Sugar Component: opts.Component, ConfigMap: configMap.Data, PrometheusPort: opts.PrometheusPort, + Secrets: opts.Secrets, }, logger) }, nil } @@ -130,6 +131,7 @@ func UpdateExporterFromConfigMapWithOpts(opts ExporterOptions, logger *zap.Sugar // to prevent a race condition between reading the current configuration // and updating the current exporter. func UpdateExporter(ops ExporterOptions, logger *zap.SugaredLogger) error { + // TODO(https://github.com/knative/pkg/issues/1273): check if ops.secrets is `nil` after new metrics plan lands newConfig, err := createMetricsConfig(ops, logger) if err != nil { if getCurMetricsConfig() == nil { @@ -141,28 +143,33 @@ func UpdateExporter(ops ExporterOptions, logger *zap.SugaredLogger) error { return err } + // Updating the metrics config and the metrics exporters needs to be atomic to + // avoid using an outdated metrics config with new exporters. + metricsMux.Lock() + defer metricsMux.Unlock() + if isNewExporterRequired(newConfig) { logger.Info("Flushing the existing exporter before setting up the new exporter.") - FlushExporter() + flushGivenExporter(curMetricsExporter) e, err := newMetricsExporter(newConfig, logger) if err != nil { logger.Errorf("Failed to update a new metrics exporter based on metric config %v. error: %v", newConfig, err) return err } - existingConfig := getCurMetricsConfig() - setCurMetricsExporter(e) + existingConfig := curMetricsConfig + curMetricsExporter = e logger.Infof("Successfully updated the metrics exporter; old config: %v; new config %v", existingConfig, newConfig) } - setCurMetricsConfig(newConfig) + setCurMetricsConfigUnlocked(newConfig) return nil } // isNewExporterRequired compares the non-nil newConfig against curMetricsConfig. When backend changes, // or stackdriver project ID changes for stackdriver backend, we need to update the metrics exporter. -// This function is not implicitly thread-safe. +// This function must be called with the metricsMux reader (or writer) locked. 
func isNewExporterRequired(newConfig *metricsConfig) bool { - cc := getCurMetricsConfig() + cc := curMetricsConfig if cc == nil || newConfig.backendDestination != cc.backendDestination { return true } @@ -177,15 +184,14 @@ func isNewExporterRequired(newConfig *metricsConfig) bool { } // newMetricsExporter gets a metrics exporter based on the config. -// This function is not implicitly thread-safe. +// This function must be called with the metricsMux reader (or writer) locked. func newMetricsExporter(config *metricsConfig, logger *zap.SugaredLogger) (view.Exporter, error) { - ce := getCurMetricsExporter() // If there is a Prometheus Exporter server running, stop it. resetCurPromSrv() // TODO(https://github.com/knative/pkg/issues/866): Move Stackdriver and Promethus // operations before stopping to an interface. - if se, ok := ce.(stoppable); ok { + if se, ok := curMetricsExporter.(stoppable); ok { se.StopMetricsExporter() } @@ -230,6 +236,10 @@ func getCurMetricsConfig() *metricsConfig { func setCurMetricsConfig(c *metricsConfig) { metricsMux.Lock() defer metricsMux.Unlock() + setCurMetricsConfigUnlocked(c) +} + +func setCurMetricsConfigUnlocked(c *metricsConfig) { if c != nil { view.SetReportingPeriod(c.reportingPeriod) } else { @@ -244,6 +254,10 @@ func setCurMetricsConfig(c *metricsConfig) { // Return value indicates whether the exporter is flushable or not. func FlushExporter() bool { e := getCurMetricsExporter() + return flushGivenExporter(e) +} + +func flushGivenExporter(e view.Exporter) bool { if e == nil { return false } diff --git a/vendor/knative.dev/pkg/metrics/opencensus_exporter.go b/vendor/knative.dev/pkg/metrics/opencensus_exporter.go index 5e3924646a..dd80c855df 100644 --- a/vendor/knative.dev/pkg/metrics/opencensus_exporter.go +++ b/vendor/knative.dev/pkg/metrics/opencensus_exporter.go @@ -24,6 +24,7 @@ import ( "go.opencensus.io/stats/view" "go.uber.org/zap" "google.golang.org/grpc/credentials" + corev1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/errors" ) @@ -33,49 +34,53 @@ func newOpenCensusExporter(config *metricsConfig, logger *zap.SugaredLogger) (vi opts = append(opts, ocagent.WithAddress(config.collectorAddress)) } if config.requireSecure { - opts = append(opts, ocagent.WithTLSCredentials(credentialFetcher(config.component, config.secretFetcher, logger))) + opts = append(opts, ocagent.WithTLSCredentials(getCredentials(config.component, config.secret, logger))) } else { opts = append(opts, ocagent.WithInsecure()) } e, err := ocagent.NewExporter(opts...) if err != nil { - logger.Errorw("Failed to create the OpenCensus exporter.", zap.Error(err)) + logger.Errorw("failed to create the OpenCensus exporter.", zap.Error(err)) return nil, err } - logger.Infof("Created OpenCensus exporter with config: %+v.", *config) + logger.Infof("created OpenCensus exporter with config: %+v.", *config) view.RegisterExporter(e) return e, nil } -// credentialFetcher attempts to locate a secret containing TLS credentials +// getOpenCensusSecret attempts to locate a secret containing TLS credentials // for communicating with the OpenCensus Agent. To do this, it first looks // for a secret named "-opencensus", then for a generic // "opencensus" secret. 
-func credentialFetcher(component string, lister SecretFetcher, logger *zap.SugaredLogger) credentials.TransportCredentials { +func getOpenCensusSecret(component string, lister SecretFetcher) (*corev1.Secret, error) { if lister == nil { - logger.Errorf("No secret lister provided for component %q; cannot use requireSecure=true", component) + return nil, fmt.Errorf("no secret lister provided for component %q; cannot use requireSecure=true", component) + } + secret, err := lister(component + "-opencensus") + if errors.IsNotFound(err) { + secret, err = lister("opencensus") + } + if err != nil { + return nil, fmt.Errorf("unable to fetch opencensus secret for %q, cannot use requireSecure=true: %+v", component, err) + } + + return secret, nil +} + +// getCredentials attempts to create a certificate containing TLS credentials +// for communicating with the OpenCensus Agent. +func getCredentials(component string, secret *corev1.Secret, logger *zap.SugaredLogger) credentials.TransportCredentials { + if secret == nil { + logger.Errorf("no secret provided for component %q; cannot use requireSecure=true", component) return nil } return credentials.NewTLS(&tls.Config{ GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) { - // We ignore the CertificateRequestInfo for now, and hand back a single fixed certificate. - // TODO(evankanderson): maybe do something SPIFFE-ier? - cert, err := certificateFetcher(component+"-opencensus", lister) - if errors.IsNotFound(err) { - cert, err = certificateFetcher("opencensus", lister) - } + cert, err := tls.X509KeyPair(secret.Data["client-cert.pem"], secret.Data["client-key.pem"]) if err != nil { - return nil, fmt.Errorf("Unable to fetch opencensus secret for %q, cannot use requireSecure=true: %+v", component, err) + return nil, err } - return &cert, err + return &cert, nil }, }) } - -func certificateFetcher(secretName string, lister SecretFetcher) (tls.Certificate, error) { - secret, err := lister(secretName) - if err != nil { - return tls.Certificate{}, err - } - return tls.X509KeyPair(secret.Data["client-cert.pem"], secret.Data["client-key.pem"]) -} diff --git a/vendor/knative.dev/pkg/metrics/stackdriver_exporter.go b/vendor/knative.dev/pkg/metrics/stackdriver_exporter.go index e87a3e37fc..87bb722a2f 100644 --- a/vendor/knative.dev/pkg/metrics/stackdriver_exporter.go +++ b/vendor/knative.dev/pkg/metrics/stackdriver_exporter.go @@ -115,7 +115,7 @@ func newOpencensusSDExporter(o stackdriver.Options) (view.Exporter, error) { func newStackdriverExporter(config *metricsConfig, logger *zap.SugaredLogger) (view.Exporter, error) { gm := getMergedGCPMetadata(config) mpf := getMetricPrefixFunc(config.stackdriverMetricTypePrefix, config.stackdriverCustomMetricTypePrefix) - co, err := getStackdriverExporterClientOptions(&config.stackdriverClientConfig) + co, err := getStackdriverExporterClientOptions(config) if err != nil { logger.Warnw("Issue configuring Stackdriver exporter client options, no additional client options will be used: ", zap.Error(err)) } @@ -140,21 +140,21 @@ func newStackdriverExporter(config *metricsConfig, logger *zap.SugaredLogger) (v // getStackdriverExporterClientOptions creates client options for the opencensus Stackdriver exporter from the given stackdriverClientConfig. // On error, an empty array of client options is returned. 
-func getStackdriverExporterClientOptions(sdconfig *StackdriverClientConfig) ([]option.ClientOption, error) { +func getStackdriverExporterClientOptions(config *metricsConfig) ([]option.ClientOption, error) { var co []option.ClientOption - if sdconfig.UseSecret && useStackdriverSecretEnabled { - secret, err := getStackdriverSecret(sdconfig) - if err != nil { - return co, err + + // SetStackdriverSecretLocation must have been called by calling package for this to work. + if config.stackdriverClientConfig.UseSecret { + if config.secret == nil { + return co, fmt.Errorf("No secret provided for component %q; cannot use stackdriver-use-secret=true", config.component) } - if opt, err := convertSecretToExporterOption(secret); err == nil { + if opt, err := convertSecretToExporterOption(config.secret); err == nil { co = append(co, opt) } else { return co, err } } - return co, nil } @@ -215,19 +215,31 @@ func getMetricPrefixFunc(metricTypePrefix, customMetricTypePrefix string) func(n } // getStackdriverSecret returns the Kubernetes Secret specified in the given config. +// SetStackdriverSecretLocation must have been called by calling package for this to work. // TODO(anniefu): Update exporter if Secret changes (https://github.com/knative/pkg/issues/842) -func getStackdriverSecret(sdconfig *StackdriverClientConfig) (*corev1.Secret, error) { - if err := ensureKubeclient(); err != nil { - return nil, err - } - +func getStackdriverSecret(secretFetcher SecretFetcher) (*corev1.Secret, error) { stackdriverMtx.RLock() defer stackdriverMtx.RUnlock() - sec, secErr := kubeclient.CoreV1().Secrets(secretNamespace).Get(secretName, metav1.GetOptions{}) + if !useStackdriverSecretEnabled { + return nil, nil + } + + var secErr error + var sec *corev1.Secret + if secretFetcher != nil { + sec, secErr = secretFetcher(fmt.Sprintf("%s/%s", secretNamespace, secretName)) + } else { + // This else-block can be removed once UpdateExporterFromConfigMap is fully deprecated in favor of ConfigMapWatcher + if err := ensureKubeclient(); err != nil { + return nil, err + } + + sec, secErr = kubeclient.CoreV1().Secrets(secretNamespace).Get(secretName, metav1.GetOptions{}) + } if secErr != nil { - return nil, fmt.Errorf("Error getting Secret [%s] in namespace [%s]: %w", secretName, secretNamespace, secErr) + return nil, fmt.Errorf("error getting Secret [%v] in namespace [%v]: %v", secretName, secretNamespace, secErr) } return sec, nil diff --git a/vendor/knative.dev/pkg/network/transports.go b/vendor/knative.dev/pkg/network/transports.go index 1ff6f46153..019f4ba87b 100644 --- a/vendor/knative.dev/pkg/network/transports.go +++ b/vendor/knative.dev/pkg/network/transports.go @@ -19,6 +19,7 @@ package network import ( "context" "errors" + "fmt" "net" "net/http" "time" @@ -74,6 +75,7 @@ func dialBackOffHelper(ctx context.Context, network, address string, bo wait.Bac KeepAlive: 5 * time.Second, DualStack: true, } + start := time.Now() for { c, err := dialer.DialContext(ctx, network, address) if err != nil { @@ -89,7 +91,8 @@ func dialBackOffHelper(ctx context.Context, network, address string, bo wait.Bac } return c, nil } - return nil, errDialTimeout + elapsed := time.Now().Sub(start) + return nil, fmt.Errorf("timed out dialing after %.2fs", elapsed.Seconds()) } func newHTTPTransport(connTimeout time.Duration, disableKeepAlives bool) http.RoundTripper { diff --git a/vendor/knative.dev/pkg/test/helpers/name.go b/vendor/knative.dev/pkg/test/helpers/name.go index 5d75708c0c..7bf14a3bf9 100644 --- a/vendor/knative.dev/pkg/test/helpers/name.go +++ 
b/vendor/knative.dev/pkg/test/helpers/name.go @@ -17,7 +17,6 @@ limitations under the License. package helpers import ( - "log" "math/rand" "strings" "time" @@ -39,7 +38,6 @@ func init() { // Otherwise, rerunning tests will generate the same names for the test resources, causing conflicts with // already existing resources. seed := time.Now().UTC().UnixNano() - log.Printf("Using '%d' to seed the random number generator", seed) rand.Seed(seed) } diff --git a/vendor/knative.dev/test-infra/scripts/dummy.go b/vendor/knative.dev/test-infra/scripts/dummy.go index e6cc380fd7..809b3a6071 100644 --- a/vendor/knative.dev/test-infra/scripts/dummy.go +++ b/vendor/knative.dev/test-infra/scripts/dummy.go @@ -1,12 +1,9 @@ /* Copyright 2018 The Knative Authors - Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - https://www.apache.org/licenses/LICENSE-2.0 - Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. diff --git a/vendor/knative.dev/test-infra/scripts/library.sh b/vendor/knative.dev/test-infra/scripts/library.sh index b90f4e1d7b..34e9c6aa6c 100644 --- a/vendor/knative.dev/test-infra/scripts/library.sh +++ b/vendor/knative.dev/test-infra/scripts/library.sh @@ -219,6 +219,42 @@ function wait_until_service_has_external_ip() { return 1 } +# Waits until the given service has an external address (IP/hostname) that allow HTTP connections. +# Parameters: $1 - namespace. +# $2 - service name. +function wait_until_service_has_external_http_address() { + local ns=$1 + local svc=$2 + local sleep_seconds=6 + local attempts=150 + + echo -n "Waiting until service $ns/$svc has an external address (IP/hostname)" + for attempt in $(seq 1 $attempts); do # timeout after 15 minutes + local address=$(kubectl get svc $svc -n $ns -o jsonpath="{.status.loadBalancer.ingress[0].ip}") + if [[ -n "${address}" ]]; then + echo -e "Service $ns/$svc has IP $address" + else + address=$(kubectl get svc $svc -n $ns -o jsonpath="{.status.loadBalancer.ingress[0].hostname}") + if [[ -n "${address}" ]]; then + echo -e "Service $ns/$svc has hostname $address" + fi + fi + if [[ -n "${address}" ]]; then + local status=$(curl -s -o /dev/null -w "%{http_code}" http://"${address}") + if [[ $status != "" && $status != "000" ]]; then + echo -e "$address is ready: prober observed HTTP $status" + return 0 + else + echo -e "$address is not ready: prober observed HTTP $status" + fi + fi + echo -n "." + sleep $sleep_seconds + done + echo -e "\n\nERROR: timeout waiting for service $ns/$svc to have an external HTTP address" + return 1 +} + # Waits for the endpoint to be routable. # Parameters: $1 - External ingress IP address. # $2 - cluster hostname. @@ -357,7 +393,7 @@ function create_junit_xml() { local failure="" if [[ "$3" != "" ]]; then # Transform newlines into HTML code. 
- # Also escape `<` and `>` as here: https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/encoding/json/encode.go#L48, + # Also escape `<` and `>` as here: https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/encoding/json/encode.go#L48, # this is temporary solution for fixing https://github.com/knative/test-infra/issues/1204, # which should be obsolete once Test-infra 2.0 is in place local msg="$(echo -n "$3" | sed 's/$/\ /g' | sed 's//\\u003e/' | sed 's/&/\\u0026/' | tr -d '\n')" diff --git a/vendor/knative.dev/test-infra/scripts/presubmit-tests.sh b/vendor/knative.dev/test-infra/scripts/presubmit-tests.sh index ec26a086a9..797f2fd3e1 100644 --- a/vendor/knative.dev/test-infra/scripts/presubmit-tests.sh +++ b/vendor/knative.dev/test-infra/scripts/presubmit-tests.sh @@ -180,11 +180,13 @@ function default_build_test_runner() { # Consider an error message everything that's not a package name. errors_go1="$(grep -v '^\(github\.com\|knative\.dev\)/' "${report}" | sort | uniq)" fi - # Get all build tags in go code (ignore /vendor and /hack) + # Get all build tags in go code (ignore /vendor, /hack and /third_party) local tags="$(grep -r '// +build' . \ - | grep -v '^./vendor/' | grep -v '^./hack/' | cut -f3 -d' ' | sort | uniq | tr '\n' ' ')" + | grep -v '^./vendor/' | grep -v '^./hack/' | grep -v '^./third_party' \ + | cut -f3 -d' ' | sort | uniq | tr '\n' ' ')" local tagged_pkgs="$(grep -r '// +build' . \ - | grep -v '^./vendor/' | grep -v '^./hack/' | grep ":// +build " | cut -f1 -d: | xargs dirname \ + | grep -v '^./vendor/' | grep -v '^./hack/' | grep -v '^./third_party' \ + | grep ":// +build " | cut -f1 -d: | xargs dirname \ | sort | uniq | tr '\n' ' ')" for pkg in ${tagged_pkgs}; do # `go test -c` lets us compile the tests but do not run them. 
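The new `comment_parser.go` added earlier in this patch gives injection-gen its own `ExtractCommentTags`, and `packages.go` now feeds its output to `extractReconcilerClassTag` and `isNonNamespaced` instead of querying gengo once per tag. A hedged usage sketch follows; the comment lines are hypothetical examples of the markers the generator reads, not text taken from the patch.

```go
package main

import (
	"fmt"

	"knative.dev/pkg/codegen/cmd/injection-gen/generators"
)

func main() {
	// Hypothetical doc-comment lines of the kind injection-gen inspects.
	lines := []string{
		"+genreconciler:class=example.com/some.class",
		"+genclient:nonNamespaced",
	}

	tags := generators.ExtractCommentTags("+", lines)

	// extractReconcilerClassTag(tags) would return ("example.com/some.class", true).
	fmt.Println(tags["genreconciler"]["class"])

	// isNonNamespaced(tags) would return true: the key exists with an empty value.
	_, nonNamespaced := tags["genclient"]["nonNamespaced"]
	fmt.Println(nonNamespaced)
}
```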
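Likewise, the reconciler template change earlier in this patch moves event-recorder construction into `createRecorder` and lets callers override the controller agent name through the new `controller.Options.AgentName` field. Below is a sketch of an options function a caller might pass to a generated `NewImpl`; the function name and agent name are assumptions for illustration, not APIs defined by the patch beyond the `Options` field itself.

```go
package main

import (
	"knative.dev/pkg/controller"
)

// withAgentName is a sketch of a controller.OptionsFn that a caller could pass
// to a generated NewImpl so that events are recorded under a custom component
// name instead of the generated default. "my-controller-agent" is an arbitrary
// example value.
func withAgentName(impl *controller.Impl) controller.Options {
	return controller.Options{
		AgentName: "my-controller-agent",
	}
}

func main() {
	// Passed along as: NewImpl(ctx, r, withAgentName)
	_ = withAgentName
}
```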
diff --git a/vendor/modules.txt b/vendor/modules.txt index 88840faf70..3ccd00fbe4 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -84,6 +84,8 @@ github.com/census-instrumentation/opencensus-proto/gen-go/agent/trace/v1 github.com/census-instrumentation/opencensus-proto/gen-go/metrics/v1 github.com/census-instrumentation/opencensus-proto/gen-go/resource/v1 github.com/census-instrumentation/opencensus-proto/gen-go/trace/v1 +# github.com/cespare/xxhash/v2 v2.1.1 +github.com/cespare/xxhash/v2 # github.com/cloudevents/sdk-go v1.2.0 github.com/cloudevents/sdk-go github.com/cloudevents/sdk-go/pkg/cloudevents @@ -141,9 +143,9 @@ github.com/go-logr/logr github.com/go-openapi/jsonpointer # github.com/go-openapi/jsonreference v0.19.3 github.com/go-openapi/jsonreference -# github.com/go-openapi/spec v0.19.4 +# github.com/go-openapi/spec v0.19.6 github.com/go-openapi/spec -# github.com/go-openapi/swag v0.19.5 +# github.com/go-openapi/swag v0.19.7 github.com/go-openapi/swag # github.com/gobuffalo/envy v1.7.1 github.com/gobuffalo/envy @@ -176,7 +178,7 @@ github.com/google/go-cmp/cmp/internal/diff github.com/google/go-cmp/cmp/internal/flags github.com/google/go-cmp/cmp/internal/function github.com/google/go-cmp/cmp/internal/value -# github.com/google/go-containerregistry v0.0.0-20191010200024-a3d713f9b7f8 +# github.com/google/go-containerregistry v0.0.0-20200123184029-53ce695e4179 github.com/google/go-containerregistry/pkg/name # github.com/google/gofuzz v1.1.0 github.com/google/gofuzz @@ -198,7 +200,7 @@ github.com/googleapis/gnostic/extensions github.com/grpc-ecosystem/grpc-gateway/internal github.com/grpc-ecosystem/grpc-gateway/runtime github.com/grpc-ecosystem/grpc-gateway/utilities -# github.com/hashicorp/golang-lru v0.5.3 +# github.com/hashicorp/golang-lru v0.5.4 github.com/hashicorp/golang-lru github.com/hashicorp/golang-lru/simplelru # github.com/imdario/mergo v0.3.8 => github.com/imdario/mergo v0.3.7 @@ -241,7 +243,7 @@ github.com/openzipkin/zipkin-go/reporter/http github.com/pkg/errors # github.com/pmezard/go-difflib v1.0.0 github.com/pmezard/go-difflib/difflib -# github.com/prometheus/client_golang v1.1.0 +# github.com/prometheus/client_golang v1.5.0 github.com/prometheus/client_golang/prometheus github.com/prometheus/client_golang/prometheus/internal github.com/prometheus/client_golang/prometheus/promhttp @@ -291,11 +293,11 @@ go.opencensus.io/trace/tracestate go.opentelemetry.io/otel/api/core go.opentelemetry.io/otel/api/propagation go.opentelemetry.io/otel/api/trace -# go.uber.org/atomic v1.4.0 +# go.uber.org/atomic v1.6.0 go.uber.org/atomic -# go.uber.org/multierr v1.2.0 +# go.uber.org/multierr v1.5.0 go.uber.org/multierr -# go.uber.org/zap v1.10.0 => go.uber.org/zap v1.9.2-0.20180814183419-67bc79d13d15 +# go.uber.org/zap v1.14.1 => go.uber.org/zap v1.9.2-0.20180814183419-67bc79d13d15 go.uber.org/zap go.uber.org/zap/buffer go.uber.org/zap/internal/bufferpool @@ -574,7 +576,7 @@ istio.io/api/type/v1beta1 istio.io/client-go/pkg/apis/security/v1beta1 # istio.io/gogo-genproto v0.0.0-20200130224810-a0338448499a istio.io/gogo-genproto/googleapis/google/api -# k8s.io/api v0.17.0 => k8s.io/api v0.16.4 +# k8s.io/api v0.17.3 => k8s.io/api v0.16.4 k8s.io/api/admission/v1beta1 k8s.io/api/admissionregistration/v1 k8s.io/api/admissionregistration/v1beta1 @@ -614,7 +616,7 @@ k8s.io/api/settings/v1alpha1 k8s.io/api/storage/v1 k8s.io/api/storage/v1alpha1 k8s.io/api/storage/v1beta1 -# k8s.io/apiextensions-apiserver v0.16.4 +# k8s.io/apiextensions-apiserver v0.17.2 
k8s.io/apiextensions-apiserver/pkg/apis/apiextensions k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1 k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1 @@ -681,7 +683,7 @@ k8s.io/apimachinery/pkg/version k8s.io/apimachinery/pkg/watch k8s.io/apimachinery/third_party/forked/golang/json k8s.io/apimachinery/third_party/forked/golang/reflect -# k8s.io/apiserver v0.16.4 +# k8s.io/apiserver v0.17.2 k8s.io/apiserver/pkg/storage/names # k8s.io/client-go v11.0.1-0.20190805182717-6502b5e7b1b5+incompatible => k8s.io/client-go v0.16.4 k8s.io/client-go/discovery @@ -935,7 +937,7 @@ k8s.io/gengo/parser k8s.io/gengo/types # k8s.io/klog v1.0.0 k8s.io/klog -# k8s.io/kube-openapi v0.0.0-20190918143330-0270cf2f1c1d => k8s.io/kube-openapi v0.0.0-20190918143330-0270cf2f1c1d +# k8s.io/kube-openapi v0.0.0-20191107075043-30be4d16710a => k8s.io/kube-openapi v0.0.0-20190918143330-0270cf2f1c1d k8s.io/kube-openapi/cmd/openapi-gen k8s.io/kube-openapi/cmd/openapi-gen/args k8s.io/kube-openapi/pkg/common @@ -943,7 +945,7 @@ k8s.io/kube-openapi/pkg/generators k8s.io/kube-openapi/pkg/generators/rules k8s.io/kube-openapi/pkg/util/proto k8s.io/kube-openapi/pkg/util/sets -# k8s.io/utils v0.0.0-20191114184206-e782cd3c129f +# k8s.io/utils v0.0.0-20200124190032-861946025e34 k8s.io/utils/buffer k8s.io/utils/integer k8s.io/utils/pointer @@ -1041,7 +1043,7 @@ knative.dev/eventing/test/test_images/logevents knative.dev/eventing/test/test_images/recordevents knative.dev/eventing/test/test_images/sendevents knative.dev/eventing/test/test_images/transformevents -# knative.dev/pkg v0.0.0-20200501164043-2e4e82aa49f1 +# knative.dev/pkg v0.0.0-20200506001744-478962f05e2b knative.dev/pkg/apis knative.dev/pkg/apis/duck knative.dev/pkg/apis/duck/v1 @@ -1177,7 +1179,7 @@ knative.dev/serving/pkg/client/listers/networking/v1alpha1 knative.dev/serving/pkg/client/listers/serving/v1 knative.dev/serving/pkg/client/listers/serving/v1alpha1 knative.dev/serving/pkg/client/listers/serving/v1beta1 -# knative.dev/test-infra v0.0.0-20200430225942-f7c1fafc1cde +# knative.dev/test-infra v0.0.0-20200505192244-75864c82db21 knative.dev/test-infra/scripts # sigs.k8s.io/yaml v1.2.0 => sigs.k8s.io/yaml v1.1.0 sigs.k8s.io/yaml From 9782b1b8a52d26c037b3318d5eea855a2340e707 Mon Sep 17 00:00:00 2001 From: Matt Moore Date: Wed, 6 May 2020 09:29:44 -0700 Subject: [PATCH 12/12] [master] Format markdown (#1007) Produced via: `prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github | grep -v docs/cmd/)` /assign grantr nachocano /cc grantr nachocano --- docs/examples/cloudschedulersource/README.md | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/docs/examples/cloudschedulersource/README.md b/docs/examples/cloudschedulersource/README.md index bb38db63dc..ede59e7ade 100644 --- a/docs/examples/cloudschedulersource/README.md +++ b/docs/examples/cloudschedulersource/README.md @@ -8,17 +8,19 @@ scheduled events from ## Prerequisites -1. [Install Knative-GCP](../../install/install-knative-gcp.md). +1. [Install Knative-GCP](../../install/install-knative-gcp.md). 1. Create with an App Engine application in your project. Refer to this [guide](https://cloud.google.com/scheduler/docs/quickstart#create_a_project_with_an_app_engine_app) - for more details. 
You can change the APP_ENGINE_LOCATION, - but please make sure you also update the spec.location in [`CloudSchedulerSource`](cloudschedulersource.yaml) - - ```shell - export APP_ENGINE_LOCATION=us-central1 - gcloud app create --region=$APP_ENGINE_LOCATION - ``` + for more details. You can change the APP_ENGINE_LOCATION, but please make + sure you also update the spec.location in + [`CloudSchedulerSource`](cloudschedulersource.yaml) + + ```shell + export APP_ENGINE_LOCATION=us-central1 + gcloud app create --region=$APP_ENGINE_LOCATION + ``` + 1. [Create a Pub/Sub enabled Service Account](../../install/pubsub-service-account.md) 1. Enable the `Cloud Scheduler API` on your project: