fix direct agent configuration #5380
Conversation
Selecting "all" for service name or environment in the Kibana UI marshals `service.name` and `service.environment` as empty strings, which is valid.
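As a hedged sketch of that semantics (hypothetical types, not apm-server's actual code), an empty name or environment acts as a wildcard, which is why validation must accept empty strings:

```go
package main

import "fmt"

// Hypothetical sketch: an empty service.name or service.environment
// means "match all", so validation must not reject empty strings.
type Service struct {
	Name        string // "" matches all service names
	Environment string // "" matches all environments
}

func (s Service) Matches(name, env string) bool {
	return (s.Name == "" || s.Name == name) &&
		(s.Environment == "" || s.Environment == env)
}

func main() {
	all := Service{} // what the Kibana UI sends for "all"/"all"
	fmt.Println(all.Matches("opbeans-go", "production")) // true
}
```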
Note: The current nesting in elastic/kibana#100744 has …
Note: If we're running in managed mode (see lines 200 to 206 in 16618f2), I could thread some sort of "managed" bool through to help make this decision, but I'm not sure if there might be some better way to handle it. Any suggestions?
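For discussion, here is a minimal sketch of the "thread a managed bool through" idea; all names here are hypothetical, not apm-server's actual API:

```go
package main

import "fmt"

// Hypothetical types for illustration only.
type KibanaConfig struct{ Host string }

type Config struct {
	Managed bool // true when running under Elastic Agent (managed mode)
	Kibana  KibanaConfig
}

// AgentConfigFetcher abstracts where agent central config comes from.
type AgentConfigFetcher interface {
	Fetch(serviceName string) (map[string]string, error)
}

// directFetcher serves config delivered directly in the agent policy.
type directFetcher struct{}

func (directFetcher) Fetch(string) (map[string]string, error) {
	return map[string]string{}, nil
}

// kibanaFetcher proxies config lookups to the Kibana API.
type kibanaFetcher struct{ host string }

func (kibanaFetcher) Fetch(string) (map[string]string, error) {
	return map[string]string{}, nil
}

// newAgentConfigFetcher is where the threaded "managed" bool would be
// consulted: managed mode disables the Kibana-backed path entirely.
func newAgentConfigFetcher(cfg Config) AgentConfigFetcher {
	if cfg.Managed {
		return directFetcher{}
	}
	return kibanaFetcher{host: cfg.Kibana.Host}
}

func main() {
	fmt.Printf("%T\n", newAgentConfigFetcher(Config{Managed: true})) // main.directFetcher
}
```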
💚 Build Succeeded
This pull request is now in conflicts. Could you fix it @stuartnelson3? 🙏
This was used temporarily to support central config via Kibana, which is not necessary for 7.14.
This reverts commit e6e2c0e.
jenkins run the tests
* disable kibana if running in managed mode
* remove agent_config service validation: selecting "all" for service name or environment in the kibana ui marshals service.name and service.environment as empty strings, which is valid
* remove kibana api key: this was used temporarily to support central config via kibana, which is not necessary for 7.14

(cherry picked from commit 9980296)
Co-authored-by: stuart nelson <stuartnelson3@gmail.com>
Conflicts: apmpackage/apm/0.3.0/agent/input/template.yml.hbs
Verified with 7.14.0-BC2. I ran apm-integration-testing with the following config-watching program:

```go
package main

import (
	"context"
	"fmt"

	"go.elastic.co/apm/apmconfig"
	"go.elastic.co/apm/transport"
)

func main() {
	// Watch central config for the service named "main".
	var args apmconfig.WatchParams
	args.Service.Name = "main"

	// Poll apm-server for config changes and print each one.
	transport, _ := transport.NewHTTPTransport()
	changes := transport.WatchConfig(context.Background(), args)
	for change := range changes {
		fmt.Println("changed:", change)
	}
}
```

(This is using internals of the Go Agent.) Then, setting/updating a central config rule for service "main", I observed the changes in the config-watching program.

There is something wrong with how apm-server is handling config changes generally. On some changes, I observed apm-server becoming unresponsive. Looking inside the container, I found the process had exited, but there was no sign of any errors in the logs. I'll continue digging into that separately.
Motivation/summary
Manual testing with elastic/kibana#100744 revealed
errors in the current apm-server logic. This PR introduces changes so that
apm-server correctly switches to direct agent configuration when running under
managed mode.
depends on elastic/kibana#100744
Checklist
How to test these changes
1. Build `apm-server` including this PR
2. Update `elastic-agent.yml` with the generated policy from the previous step and run `apm-server` under Elastic Agent
3. Verify that cURLing the config endpoint returns your agent config: `curl 'http://localhost:8200/config/v1/agents?service.name=all'`
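As a hedged illustration (not part of the PR's test plan), the same check can be scripted in Go, using the endpoint and query parameter from the curl command above:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same check as the curl command above: an empty / "all" service.name
	// matches every service, so this should return the agent config.
	resp, err := http.Get("http://localhost:8200/config/v1/agents?service.name=all")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // JSON object of central config settings
}
```

The response body should be the JSON settings object for your agent config rule, served directly by apm-server rather than proxied through Kibana.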