Fixed http2 parsing several bugs #15124 (Merged)

guyarb merged 232 commits into amit.slavin/grpc-new from guy.arbitman/http2-fix-several-bugs on Jan 18, 2023
+115,684 −65,684
Conversation
* [config/environment] Check AWS_EXECUTION_ENV in Fargate detection
* [util/fargate] Rely on features for ECS Fargate detection
* [fargate/detection] Rely on features to detect EKS
* [trace-agent/config] Call fargate.GetOrchestrator after loading config
* Add unit test for trace-agent config on Fargate
* Add release note
* [cmd/trace-agent/config] Fix TestFargateConfig on macOS

Co-authored-by: Cedric Lamoriniere <[email protected]>
Co-authored-by: paulcacheux <[email protected]>
* ci: kitchen: Allow running Docker containers in kitchen tests, and extend the filesystem

  The PR introduces a way to run external Docker containers in the kitchen tests without pulling them there. Since we cannot authenticate to Docker Hub from the kitchen machines, we work around that by pulling and saving the images in GitLab, uploading them to the remote machine using kitchen, and then loading them on the remote machine so they are available for use. The PR adds steps to install Docker and Docker Compose on the kitchen machines, and includes an example test that runs containers. While working on the PR we hit "no space left on device" errors; to solve them we extend the filesystem of the remote machines.

* Fixed CR comments
* Debugging the artifacts
* Debugging the artifacts
* Debugging the artifacts
* Debugging the artifacts
* Revert artifacts
* Giving another try to dependencies
* Fixed path
* Fixed CR comment
…4710)

* Bump golang.org/x/tools from 0.3.0 to 0.4.0 in /pkg/security/secl

  Bumps [golang.org/x/tools](https://github.com/golang/tools) from 0.3.0 to 0.4.0.
  - [Release notes](https://github.com/golang/tools/releases)
  - [Commits](golang/tools@v0.3.0...v0.4.0)

  updated-dependencies:
  - dependency-name: golang.org/x/tools
    dependency-type: direct:production
    update-type: version-update:semver-minor

* Auto-generate go.sum and LICENSE-3rdparty.csv changes

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: paulcacheux <[email protected]>
* [WIP][single-machine-performance] Introduce regression detector jobs

  This PR introduces the Single Machine Performance regression detector into Agent CI. It builds on work done in #14477 and is a peer to #14438. The Regression Detector is a CI tool that determines whether a change introduced into a project modifies the project's performance beyond random chance, with a statistical guarantee. It is not a microbenchmarking tool and must operate on the whole Agent. This PR introduces only 'throughput' as an optimization goal -- how quickly the Regression Detector can push load into the Agent -- but other goals are possible. Regressions are checked per experiment; see `tests/regression` for details about how to define an experiment. The Regression Detector runs today in the vectordotdev/vector project and is influential in keeping that project's performance consistently high. REF SMP-208

* Use static smp binary
* Different base sha calculation
* Try to clone the whole repo
* Baseline sha computation redux
* Specify region explicitly
* Use smp 0.6.3-rc3
* Wait for job to complete, output report, status
* Update job name
* Update smp, lading
* Remove \
* Use smp 0.6.4
* Diagnose why file_to_blackhole fails
* Just one test for now
* Set log level for smp
* Tweaks
* Debug
* Actually add datadog.yaml et al., .gitignore issue?
* Tidy up cases to initial trio, minus file_to_blackhole which needs work
* Update smp, config tweak
* Override .gitignore
* Apply @GeorgeHahn's patches
* Enable other tests, tweak OTEL
* More fiddling
* Use markdown output report
* Use OTEL http
* Use smp 0.6.5-rc1
* debug -> info
* Preserve output
* Remove stray tick
* Update test/regression/README.md

Signed-off-by: Brian L. Troutwine <[email protected]>
Co-authored-by: Kylian Serrania <[email protected]>
* Split BundleParams into ConfigParams and LogParams
* Move ConfigParams and LogParams to their own file
* Move WithXXX functions from BundleParams to config.Params
* Use constructors for config.Params
* Fix comp/core/log/params_test.go
* Make fields for log.Params unexported
* Make config.Params fields not exported
* Fix package names in the security agent
* Explain why `fx.Provide` is needed in bundle.go
* Remove configLoadSecurityAgent from NewSecurityAgentParams
* Add NewAgentParamsWithSecrets and NewAgentParamsWithoutSecrets
Co-authored-by: paulcacheux <[email protected]>
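The constructor-plus-unexported-fields refactor described in the commits above can be sketched with Go's functional-options pattern. All names here (`Params`, `WithConfFilePath`, `NewAgentParamsWithSecrets`) are illustrative stand-ins for the real `comp/core/config` API, which is not shown in this PR page:

```go
package main

import "fmt"

// Params holds config settings; fields are unexported so callers must go
// through the constructor and option functions rather than mutate directly.
type Params struct {
	confFilePath   string
	secretsEnabled bool
}

// Option mutates a Params during construction.
type Option func(*Params)

// WithConfFilePath overrides the config file path.
func WithConfFilePath(path string) Option {
	return func(p *Params) { p.confFilePath = path }
}

// NewAgentParamsWithSecrets builds Params with secret resolution enabled.
func NewAgentParamsWithSecrets(confPath string, opts ...Option) Params {
	p := Params{confFilePath: confPath, secretsEnabled: true}
	for _, o := range opts {
		o(&p)
	}
	return p
}

func main() {
	p := NewAgentParamsWithSecrets("/etc/datadog-agent/datadog.yaml")
	fmt.Println(p.confFilePath, p.secretsEnabled)
}
```

Keeping the fields unexported means every construction site is forced through a named constructor, which is what makes the later `NewAgentParamsWithSecrets` / `NewAgentParamsWithoutSecrets` distinction enforceable at compile time.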
…ipt after packaging. (#14777)
…ers_with_distributions (#14805)

* Updates prometheusScrape to support tag_by_endpoint
* Adds release note
* Cleans release note
* Also adds support for `collect_counters_with_distributions`
* Updates release note to include the second added parameter
* Updates release note based on suggestion by @clamoriniere
Migrating flare to a component

This adds a 'flare' component and reworks the flare package to be compatible with both fx and non-fx apps. Flare generation now happens through a FlareBuilder, which handles all the logic of adding data to a flare. The FlareBuilder can be used directly (by the flare package) or received by each component when it registers a flare provider. The migration workflow for each component is to move its dedicated code from the flare package into a flare provider. Note: until `cmd/systray/` is migrated to fx we can't start using the flare component everywhere (on Windows the systray can create flares on its own).
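The provider-registration pattern described above can be sketched as follows. This is an assumed shape, not the actual `comp/core/flare` API: a component registers a callback, and flare generation hands each callback a builder that collects files:

```go
package main

import "fmt"

// FlareBuilder accumulates flare contents; the real builder writes an archive,
// here we just keep a path-to-bytes map for illustration.
type FlareBuilder struct {
	files map[string][]byte
}

// AddFile records data under the given path inside the flare.
func (fb *FlareBuilder) AddFile(path string, data []byte) {
	fb.files[path] = data
}

// FlareProvider is the callback a component registers to contribute data.
type FlareProvider func(fb *FlareBuilder) error

var providers []FlareProvider

// RegisterProvider adds a component's provider to the global list.
func RegisterProvider(p FlareProvider) { providers = append(providers, p) }

// BuildFlare runs every registered provider against a fresh builder.
func BuildFlare() (*FlareBuilder, error) {
	fb := &FlareBuilder{files: map[string][]byte{}}
	for _, p := range providers {
		if err := p(fb); err != nil {
			return nil, err
		}
	}
	return fb, nil
}

func main() {
	// A component contributes its own file instead of the flare package
	// knowing about the component's internals.
	RegisterProvider(func(fb *FlareBuilder) error {
		fb.AddFile("status.log", []byte("ok"))
		return nil
	})
	fb, err := BuildFlare()
	fmt.Println(len(fb.files), err)
}
```

The inversion matters: each component owns its flare code, so the flare package no longer accumulates per-component logic.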
This monitor reads the netlink socket process-events queue and runs callbacks on parallel workers (mapped to N CPU cores). ProcessMonitor requires root or the CAP_NET_ADMIN capability. The aim is to let callers Subscribe() to process events (Exec, Exit), with or without process metadata (Any, Name, MAPfile). ProcessMonitor subscribes to netlink process events such as Exec and Exit and invokes the subscribed callbacks. Initialize() scans the currently running processes and calls the subscribed callbacks for them. Callbacks are executed in parallel via a pool of goroutines (runtime.NumCPU()); callbackRunner is the callback queue, and its size is set by processMonitorMaxEvents. Multiple teams can use the same ProcessMonitor: each caller must call Initialize() and Stop() exactly once, and the monitor maintains an internal reference counter. The netlink process subscription (the socket connection) is allowed for only one PID.
* protocols: refactor tests to allow pre/post setups
* Added temporary nolint for skippers
* Fixed bugs
…ith backend limits (#14782)
* Add logging around container retries
* Add trace log
* Change to debug and add release note
* Delete Improve-container-tagger-logging-e48b0fffbe8563d0.yaml
* Add timestamp id to events
* Make id more specific, use container String method
* Just print class
* Update pkg/cloudfoundry/containertagger/container_tagger.go
* Address PR review
* Create event ID

Co-authored-by: NouemanKHAL <[email protected]>
* [Serverless] change account (#14755)
* Aj/buffer cold start span data (#14664)
  * WIP dirty commit - trace being created but not flushed properly; no further traces appearing
  * WIP: more debugging; StopChan properly set up
  * feat: Start coldstart creator as a daemon, receiving data from two channels. Todo: spec
  * feat: Update specs to write to channels
  * feat: Merge conflicts resolved for tests
  * feat: Use smaller methods to handle locking
  * fix: pass coldstartSpanId to sls-init main
  * feat: Remove default
  * feat: Use Millisecond as Second is far longer than necessary
  * feat: No need to export ColdStartSpanId
  * fix: update units
  * feat: Directionality for lambdaSpanChan as well as for initDurationChan
  * fix: No need for the nil check; I need to stop javascripting my Go
  * feat: ints
  * feat: rebase missing changes from merge commits
  * feat: update ints after moving accounts
  * Empty commit to trigger CI
* [Serverless] Fix flaky integration tests and make them more easily maintainable (#14783)
* Retry serverless integration test failures automatically (#14801)
* [Serverless] Allow some keys to be optional in serverless integration tests (#14827)
  * Ability to remove items from the JSON
  * Remove items from snapshot

Co-authored-by: Maxime David <[email protected]>
Co-authored-by: AJ Stuyvenberg <[email protected]>
* [Windows] Implement mapping of PID to service name

  Checks the PID against the table of SCM-controlled processes; if it's SCM-controlled, returns the service information. Because we must enumerate the entire SCM (there doesn't seem to be an API for a single lookup), the SCMManager object maintains a cache of objects and refreshes only when it sees a PID it hasn't seen before. On a machine with high process churn this could still result in a lot of accesses; however, if the process agent queries only when doing the process check (i.e. every 30s), then it should iterate the list only once per 30s.

* CI fixes, add tests
* Fix test/improper conversion of data buffer
* Review feedback
* More review feedback
* Rename structure
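The refresh-on-miss caching strategy described above can be sketched as follows. The `enumerate` function stands in for the real Windows SCM enumeration (which is not portable Go); the cache structure and names are assumptions for illustration:

```go
package main

import "fmt"

// serviceInfo is a simplified stand-in for SCM service details.
type serviceInfo struct{ Name string }

type scmCache struct {
	byPid map[uint32]serviceInfo
	// enumerate rebuilds the full PID-to-service table; in the real code
	// this walks the Service Control Manager.
	enumerate func() map[uint32]serviceInfo
}

// lookup returns the service for pid, re-enumerating only on a cache miss.
// A PID seen before is answered from the cache, so a periodic process check
// triggers at most one enumeration per previously unseen PID.
func (c *scmCache) lookup(pid uint32) (serviceInfo, bool) {
	if svc, ok := c.byPid[pid]; ok {
		return svc, true
	}
	c.byPid = c.enumerate() // miss: refresh the whole table once
	svc, ok := c.byPid[pid]
	return svc, ok
}

func main() {
	calls := 0
	c := &scmCache{
		byPid: map[uint32]serviceInfo{},
		enumerate: func() map[uint32]serviceInfo {
			calls++
			return map[uint32]serviceInfo{4321: {Name: "wuauserv"}}
		},
	}
	svc, ok := c.lookup(4321) // miss: one enumeration
	_, _ = c.lookup(4321)     // hit: served from cache
	fmt.Println(svc.Name, ok, calls)
}
```

Note the trade-off the commit message calls out: a PID that is genuinely not a service still forces a full enumeration on every lookup, which is why the 30s query cadence matters.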
1. Fixed a wrong path-size bug in HTTP/2 path-name parsing
2. Fixed a wrong condition in the HTTP/2 entrypoint
3. Fixed wrong status-code handling
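The actual fixes live in parsing code not shown on this page. As a hedged illustration of the first item only, this is the kind of path-size bounds check such a fix typically adds: a claimed length is validated against the buffer before any bytes are read, rather than trusted. The constant and function names are invented for the example:

```go
package main

import (
	"errors"
	"fmt"
)

const maxPathLen = 160 // illustrative buffer limit, not the real constant

// extractPath copies pathSize bytes of the :path header out of buf,
// rejecting sizes that fall outside the buffer instead of truncating
// or over-reading.
func extractPath(buf []byte, pathSize int) (string, error) {
	if pathSize <= 0 || pathSize > maxPathLen || pathSize > len(buf) {
		return "", errors.New("invalid http2 path size")
	}
	return string(buf[:pathSize]), nil
}

func main() {
	buf := []byte("/grpc.health.v1.Health/Check")
	p, err := extractPath(buf, len(buf))
	fmt.Println(p, err)
	_, err = extractPath(buf, 999) // a wrong size is rejected, not clamped
	fmt.Println(err)
}
```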
Slavek Kabrda seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Already signed the CLA but the status is still pending? Let us recheck it.
What does this PR do?
Motivation
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
- Triage milestone is set.
- A `major_change` label has been applied if your change either has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
- A `changelog/no-changelog` label has been applied.
- The `qa/skip-qa` label is not applied.
- A `team/..` label has been applied, indicating the team(s) that should QA this change.
- If applicable, the `need-change/operator` and `need-change/helm` labels have been applied.
- If applicable, a `k8s/<min-version>` label has been applied, indicating the lowest Kubernetes version compatible with this feature.