NETOBSERV-2376 fix statuses #2677
Conversation
Skipping CI for Draft Pull Request.

🚥 Pre-merge checks: ✅ 4 passed, ❌ 1 failed (1 warning)
New images:

- quay.io/netobserv/network-observability-operator:e2c6794
- quay.io/netobserv/network-observability-operator-bundle:v0.0.0-sha-e2c6794
- quay.io/netobserv/network-observability-operator-catalog:v0.0.0-sha-e2c6794

They will expire in two weeks. To deploy this build:

```shell
# Direct deployment, from operator repo
IMAGE=quay.io/netobserv/network-observability-operator:e2c6794 make deploy

# Or using operator-sdk
operator-sdk run bundle quay.io/netobserv/network-observability-operator-bundle:v0.0.0-sha-e2c6794
```

Or as a Catalog Source:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: netobserv-dev
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/netobserv/network-observability-operator-catalog:v0.0.0-sha-e2c6794
  displayName: NetObserv development catalog
  publisher: Me
  updateStrategy:
    registryPoll:
      interval: 1m
```
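Once that CatalogSource exists in `openshift-marketplace`, OLM installs the operator from it via a Subscription. The sketch below is a hedged illustration: the `channel`, package `name`, and install `namespace` are assumptions, not values confirmed by this PR — check the bundle metadata for the real ones.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: netobserv-operator
  namespace: openshift-netobserv-operator   # assumed install namespace
spec:
  channel: latest                           # assumed channel name
  name: netobserv-operator                  # assumed package name
  source: netobserv-dev                     # matches the CatalogSource above
  sourceNamespace: openshift-marketplace
```

With `registryPoll.interval: 1m`, OLM re-polls the dev catalog image every minute, so pushing a new catalog tag propagates to the cluster quickly.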
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2677      +/-   ##
==========================================
+ Coverage   70.60%   70.83%   +0.22%
==========================================
  Files         107      107
  Lines       13619    13646      +27
==========================================
+ Hits         9616     9666      +50
+ Misses       3502     3478      -24
- Partials      501      502       +1
```

Flags with carried forward coverage won't be shown.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/pkg/manager/status/status_manager.go`:
- Around line 313-318: GetKafkaCondition() can return nil for a placeholder
"Unknown" state, so do not unconditionally call
meta.RemoveStatusCondition(&fc.Status.Conditions, "KafkaReady") when it returns
nil; instead, change the logic to only remove the "KafkaReady" condition when
Kafka is actually disabled/not configured. Implement this by either: (a)
modifying GetKafkaCondition to return a distinct signal (or error) when Kafka is
truly disabled vs when it's a temporary placeholder, or (b) add a helper like
IsKafkaDisabled/IsKafkaInUse and check that before calling
meta.RemoveStatusCondition; update the three sites that currently do the
unconditional remove (the blocks around GetKafkaCondition, including the
occurrences near ForComponent(FLPTransformer)) to use this new check so
KafkaReady is only removed when Kafka is definitively not in use.
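The suggested fix (option b above) can be sketched in plain Go. This is an illustration only: `Condition`, `removeStatusCondition`, `getKafkaCondition`, and `isKafkaDisabled` are hypothetical stand-ins for `metav1.Condition`, `meta.RemoveStatusCondition`, and the operator's actual status-manager code, and the string-based deployment model is an assumption.

```go
package main

import "fmt"

// Condition is an illustrative stand-in for metav1.Condition.
type Condition struct {
	Type   string
	Status string
}

// removeStatusCondition mimics meta.RemoveStatusCondition: it drops the
// condition with the given type, if present.
func removeStatusCondition(conds []Condition, condType string) []Condition {
	out := make([]Condition, 0, len(conds))
	for _, c := range conds {
		if c.Type != condType {
			out = append(out, c)
		}
	}
	return out
}

// getKafkaCondition returns nil for the placeholder "Unknown" state — the
// case the review flags: nil does NOT mean "Kafka is disabled".
func getKafkaCondition(kafkaKnown, ready bool) *Condition {
	if !kafkaKnown {
		return nil // temporary placeholder, readiness not yet determined
	}
	status := "False"
	if ready {
		status = "True"
	}
	return &Condition{Type: "KafkaReady", Status: status}
}

// isKafkaDisabled is the suggested helper: a distinct check for "Kafka
// definitively not in use", keyed here on a hypothetical deployment model.
func isKafkaDisabled(deploymentModel string) bool {
	return deploymentModel != "Kafka"
}

func main() {
	conds := []Condition{{Type: "KafkaReady", Status: "True"}}
	model := "Kafka" // Kafka is in use, but its readiness is momentarily unknown

	if c := getKafkaCondition(false, false); c != nil {
		// A real condition: replace the stored one.
		conds = append(removeStatusCondition(conds, c.Type), *c)
	} else if isKafkaDisabled(model) {
		// Remove KafkaReady only when Kafka is definitively not in use.
		conds = removeStatusCondition(conds, "KafkaReady")
	}

	// The placeholder nil no longer wipes the existing condition.
	fmt.Printf("%d condition(s), KafkaReady=%s\n", len(conds), conds[0].Status)
}
```

The same guard would be applied at all three call sites mentioned above, including the ones near `ForComponent(FLPTransformer)`.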
ℹ️ Review info

⚙️ Run configuration
- Configuration used: `.coderabbit.yaml`
- Review profile: CHILL
- Plan: Pro
- Run ID: c70ceeaf-dd27-4637-a1c2-d6d81e371416
📒 Files selected for processing (3)
- internal/controller/flp/flp_controller.go
- internal/pkg/manager/status/status_manager.go
- internal/pkg/manager/status/status_manager_test.go
Force-pushed from 476174e to b53d64c (Compare)
New images:

- quay.io/netobserv/network-observability-operator:486fdeb
- quay.io/netobserv/network-observability-operator-bundle:v0.0.0-sha-486fdeb
- quay.io/netobserv/network-observability-operator-catalog:v0.0.0-sha-486fdeb

They will expire in two weeks. Deployment instructions are identical to the earlier build comment, with the tag 486fdeb.
Force-pushed from b53d64c to 9e7d744 (Compare)
New images:

- quay.io/netobserv/network-observability-operator:41433e6
- quay.io/netobserv/network-observability-operator-bundle:v0.0.0-sha-41433e6
- quay.io/netobserv/network-observability-operator-catalog:v0.0.0-sha-41433e6

They will expire in two weeks. Deployment instructions are identical to the earlier build comment, with the tag 41433e6.
/test e2e-operator
@jpinsonneau: The following test failed, say
/label qe-approved
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. This pull-request has been approved by:
/test e2e-operator
Merged commit 62eed57 into netobserv:main
Description
Also see netobserv/netobserv-web-console#1445 for colors and icons in the plugin.
Dependencies
n/a
Checklist
If you are not familiar with our processes or don't know what to answer in the list below, let us know in a comment: the maintainers will take care of that.