
Lower Kafka log level in product tests #14149

Merged: kokosing merged 1 commit into trinodb:master from nevillelyh:neville/kafka-log on Sep 19, 2022

Conversation

@nevillelyh (Member)

Description

Lower Kafka log level in product tests

Non-technical explanation

Make Kafka product tests less noisy

Release notes

(x) This is not user-visible and no release notes are required.
( ) Release notes are required, please propose a release note for me.
( ) Release notes are required, with the following suggested text:

# Section
* Fix some things. ({issue}`issuenumber`)

Member:

A slightly better approach is to use:

    private final DockerFiles.ResourceProvider configDir;

and get a reference to all config files only once (typically in the constructor) via:

    configDir = dockerFiles.getDockerFilesHostDirectory("common/kafka");
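
A minimal sketch of that pattern, assuming Guice-style constructor injection (the class name is illustrative and the import paths are assumed, not taken from the PR):

    import io.trino.tests.product.launcher.docker.DockerFiles;

    import javax.inject.Inject;

    public final class KafkaLogLevelExample
    {
        private final DockerFiles.ResourceProvider configDir;

        @Inject
        public KafkaLogLevelExample(DockerFiles dockerFiles)
        {
            // Resolve the host directory holding the Kafka config files once,
            // rather than looking up each file path at every use site.
            configDir = dockerFiles.getDockerFilesHostDirectory("common/kafka");
        }
    }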

Member Author:

Fixed.

Member:

nit: extract copying of the file to a method?
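
A sketch of that extraction (the helper name is hypothetical; the copy call matches the approach proposed below):

    // Hypothetical helper extracted from the container setup; copies the
    // log configuration from the host into a container.
    private void copyLogProperties(DockerContainer container)
    {
        container.withCopyFileToContainer(
                forHostPath(configDir.getPath("log.properties")),
                CONTAINER_PRESTO_ETC + "/log.properties");
    }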

Member:

Another approach is:

        builder.configureContainers(dockerContainer -> {
            if (isPrestoContainer(dockerContainer.getLogicalName())) {
                dockerContainer
                        .withCopyFileToContainer(
                                forHostPath(configDir.getPath("log.properties")),
                                CONTAINER_PRESTO_ETC + "/log.properties");
            }
        });
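
For context on this approach: configureContainers runs the lambda for every container in the environment, and the isPrestoContainer check on the logical name limits the copy to the Trino containers (the presto-master and presto-worker containers seen in the logs below).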

Member Author:

Fixed.

@kokosing (Member) left a comment:

Have you checked Trino server logs? What is the diff between before and after?

@nevillelyh (Member Author):

> Have you checked Trino server logs? What is the diff between before and after?

Yes, I tried querying a Kafka table. I think ConsumerConfig was the only noisy logger, and it's gone. Only the query log remains now:

presto-master | 2022-09-16T18:59:36.595+0545 INFO dispatcher-query-9 io.trino.event.QueryMonitor TIMELINE: Query 20220916_131435_00002_6ah46 :: FINISHED :: elapsed 1146ms :: planning 249ms :: waiting 57ms :: scheduling 617ms :: running 171ms :: finishing 109ms :: begin 2022-09-16T18:59:35.370+05:45 :: end 2022-09-16T18:59:36.516+05:45

And the verbose logging before:

presto-master       | 2022-09-16T19:19:19.119+0545      INFO    Query-20220916_133417_00001_vyvt3-167   org.apache.kafka.clients.consumer.ConsumerConfig        ConsumerConfig values:
presto-master       |   allow.auto.create.topics = true
presto-master       |   auto.commit.interval.ms = 5000
presto-master       |   auto.offset.reset = latest
presto-master       |   bootstrap.servers = [kafka:9092]
presto-master       |   check.crcs = true
presto-master       |   client.dns.lookup = default
presto-master       |   client.id =
presto-master       |   client.rack =
presto-master       |   connections.max.idle.ms = 540000
presto-master       |   default.api.timeout.ms = 60000
presto-master       |   enable.auto.commit = false
presto-master       |   exclude.internal.topics = true
presto-master       |   fetch.max.bytes = 52428800
presto-master       |   fetch.max.wait.ms = 500
presto-master       |   fetch.min.bytes = 1
presto-master       |   group.id = null
presto-master       |   group.instance.id = null
presto-master       |   heartbeat.interval.ms = 3000
presto-master       |   interceptor.classes = []
presto-master       |   internal.leave.group.on.close = true
presto-master       |   isolation.level = read_uncommitted
presto-master       |   key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
presto-master       |   max.partition.fetch.bytes = 1048576
presto-master       |   max.poll.interval.ms = 300000
presto-master       |   max.poll.records = 500
presto-master       |   metadata.max.age.ms = 300000
presto-master       |   metric.reporters = []
presto-master       |   metrics.num.samples = 2
presto-master       |   metrics.recording.level = INFO
presto-master       |   metrics.sample.window.ms = 30000
presto-master       |   partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
presto-master       |   receive.buffer.bytes = 65536
presto-master       |   reconnect.backoff.max.ms = 1000
presto-master       |   reconnect.backoff.ms = 50
presto-master       |   request.timeout.ms = 30000
presto-master       |   retry.backoff.ms = 100
presto-master       |   sasl.client.callback.handler.class = null
presto-master       |   sasl.jaas.config = null
presto-master       |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
presto-master       |   sasl.kerberos.min.time.before.relogin = 60000
presto-master       |   sasl.kerberos.service.name = null
presto-master       |   sasl.kerberos.ticket.renew.jitter = 0.05
presto-master       |   sasl.kerberos.ticket.renew.window.factor = 0.8
presto-master       |   sasl.login.callback.handler.class = null
presto-master       |   sasl.login.class = null
presto-master       |   sasl.login.refresh.buffer.seconds = 300
presto-master       |   sasl.login.refresh.min.period.seconds = 60
presto-master       |   sasl.login.refresh.window.factor = 0.8
presto-master       |   sasl.login.refresh.window.jitter = 0.05
presto-master       |   sasl.mechanism = GSSAPI
presto-master       |   security.protocol = PLAINTEXT
presto-master       |   security.providers = null
presto-master       |   send.buffer.bytes = 131072
presto-master       |   session.timeout.ms = 10000
presto-master       |   ssl.cipher.suites = null
presto-master       |   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
presto-master       |   ssl.endpoint.identification.algorithm = https
presto-master       |   ssl.key.password = null
presto-master       |   ssl.keymanager.algorithm = SunX509
presto-master       |   ssl.keystore.location = null
presto-master       |   ssl.keystore.password = null
presto-master       |   ssl.keystore.type = JKS
presto-master       |   ssl.protocol = TLS
presto-master       |   ssl.provider = null
presto-master       |   ssl.secure.random.implementation = null
presto-master       |   ssl.trustmanager.algorithm = PKIX
presto-master       |   ssl.truststore.location = null
presto-master       |   ssl.truststore.password = null
presto-master       |   ssl.truststore.type = JKS
presto-master       |   value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
presto-master       |
presto-master       | 2022-09-16T19:19:19.434+0545      INFO    Query-20220916_133417_00001_vyvt3-167   org.apache.kafka.common.utils.AppInfoParser     Kafka version: 2.4.1
presto-master       | 2022-09-16T19:19:19.435+0545      INFO    Query-20220916_133417_00001_vyvt3-167   org.apache.kafka.common.utils.AppInfoParser     Kafka commitId: c57222ae8cd7866b
presto-master       | 2022-09-16T19:19:19.435+0545      INFO    Query-20220916_133417_00001_vyvt3-167   org.apache.kafka.common.utils.AppInfoParser     Kafka startTimeMs: 1663335259404
presto-master       | 2022-09-16T19:19:20.772+0545      INFO    Query-20220916_133417_00001_vyvt3-167   org.apache.kafka.clients.Metadata       [Consumer clientId=consumer-1, groupId=null] Cluster ID: TCun7pZLShW3sgvreJYXlw
presto-worker       | 2022-09-16T19:19:25.530+0545      INFO    20220916_133417_00001_vyvt3.1.0.0-0-60  org.apache.kafka.clients.consumer.ConsumerConfig        ConsumerConfig values:
presto-worker       |   allow.auto.create.topics = true
presto-worker       |   auto.commit.interval.ms = 5000
presto-worker       |   auto.offset.reset = latest
presto-worker       |   bootstrap.servers = [kafka:9092]
presto-worker       |   check.crcs = true
presto-worker       |   client.dns.lookup = default
presto-worker       |   client.id =
presto-worker       |   client.rack =
presto-worker       |   connections.max.idle.ms = 540000
presto-worker       |   default.api.timeout.ms = 60000
presto-worker       |   enable.auto.commit = false
presto-worker       |   exclude.internal.topics = true
presto-worker       |   fetch.max.bytes = 52428800
presto-worker       |   fetch.max.wait.ms = 500
presto-worker       |   fetch.min.bytes = 1
presto-worker       |   group.id = null
presto-worker       |   group.instance.id = null
presto-worker       |   heartbeat.interval.ms = 3000
presto-worker       |   interceptor.classes = []
presto-worker       |   internal.leave.group.on.close = true
presto-worker       |   isolation.level = read_uncommitted
presto-worker       |   key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
presto-worker       |   max.partition.fetch.bytes = 1048576
presto-worker       |   max.poll.interval.ms = 300000
presto-worker       |   max.poll.records = 500
presto-worker       |   metadata.max.age.ms = 300000
presto-worker       |   metric.reporters = []
presto-worker       |   metrics.num.samples = 2
presto-worker       |   metrics.recording.level = INFO
presto-worker       |   metrics.sample.window.ms = 30000
presto-worker       |   partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
presto-worker       |   receive.buffer.bytes = 65536
presto-worker       |   reconnect.backoff.max.ms = 1000
presto-worker       |   reconnect.backoff.ms = 50
presto-worker       |   request.timeout.ms = 30000
presto-worker       |   retry.backoff.ms = 100
presto-worker       |   sasl.client.callback.handler.class = null
presto-worker       |   sasl.jaas.config = null
presto-worker       |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
presto-worker       |   sasl.kerberos.min.time.before.relogin = 60000
presto-worker       |   sasl.kerberos.service.name = null
presto-worker       |   sasl.kerberos.ticket.renew.jitter = 0.05
presto-worker       |   sasl.kerberos.ticket.renew.window.factor = 0.8
presto-worker       |   sasl.login.callback.handler.class = null
presto-worker       |   sasl.login.class = null
presto-worker       |   sasl.login.refresh.buffer.seconds = 300
presto-worker       |   sasl.login.refresh.min.period.seconds = 60
presto-worker       |   sasl.login.refresh.window.factor = 0.8
presto-worker       |   sasl.login.refresh.window.jitter = 0.05
presto-worker       |   sasl.mechanism = GSSAPI
presto-worker       |   security.protocol = PLAINTEXT
presto-worker       |   security.providers = null
presto-worker       |   send.buffer.bytes = 131072
presto-worker       |   session.timeout.ms = 10000
presto-worker       |   ssl.cipher.suites = null
presto-worker       |   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
presto-worker       |   ssl.endpoint.identification.algorithm = https
presto-worker       |   ssl.key.password = null
presto-worker       |   ssl.keymanager.algorithm = SunX509
presto-worker       |   ssl.keystore.location = null
presto-worker       |   ssl.keystore.password = null
presto-worker       |   ssl.keystore.type = JKS
presto-worker       |   ssl.protocol = TLS
presto-worker       |   ssl.provider = null
presto-worker       |   ssl.secure.random.implementation = null
presto-worker       |   ssl.trustmanager.algorithm = PKIX
presto-worker       |   ssl.truststore.location = null
presto-worker       |   ssl.truststore.password = null
presto-worker       |   ssl.truststore.type = JKS
presto-worker       |   value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
presto-worker       |
presto-worker       | 2022-09-16T19:19:27.013+0545      INFO    20220916_133417_00001_vyvt3.1.0.0-0-60  org.apache.kafka.common.utils.AppInfoParser     Kafka version: 2.4.1
presto-worker       | 2022-09-16T19:19:27.018+0545      INFO    20220916_133417_00001_vyvt3.1.0.0-0-60  org.apache.kafka.common.utils.AppInfoParser     Kafka commitId: c57222ae8cd7866b
presto-worker       | 2022-09-16T19:19:27.018+0545      INFO    20220916_133417_00001_vyvt3.1.0.0-0-60  org.apache.kafka.common.utils.AppInfoParser     Kafka startTimeMs: 1663335266968
presto-worker       | 2022-09-16T19:19:27.078+0545      INFO    20220916_133417_00001_vyvt3.1.0.0-0-60  org.apache.kafka.clients.consumer.KafkaConsumer [Consumer clientId=consumer-1, groupId=null] Subscribed to partition(s): read_all_datatypes_avro-0
presto-worker       | 2022-09-16T19:19:27.131+0545      INFO    20220916_133417_00001_vyvt3.1.0.0-0-60  org.apache.kafka.clients.consumer.KafkaConsumer [Consumer clientId=consumer-1, groupId=null] Seeking to offset 0 for partition read_all_datatypes_avro-0
presto-master       | 2022-09-16T19:19:27.994+0545      INFO    dispatcher-query-0      io.trino.event.QueryMonitor     TIMELINE: Query 20220916_133417_00001_vyvt3 :: FINISHED :: elapsed 9246ms :: planning 366ms :: waiting 421ms :: scheduling 5978ms :: running 2820ms :: finishing 82ms :: begin 2022-09-16T19:19:18.183+05:45 :: end 2022-09-16T19:19:27.429+05:45

@hashhar (Member) left a comment:

In addition to the consumer config, we'd also get logs from the admin client and producer, depending on the operation being performed.

We can target others if they turn out to be an issue but this tackles the bulk of the logs.
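
For reference, Trino's log.properties format is one logger-name=LEVEL entry per line. A file along these lines would quiet the Kafka client loggers; this is an assumed example, since the exact contents shipped in the PR are not reproduced in this thread:

    # Assumed example: org.apache.kafka covers the consumer, producer, and admin client loggers
    org.apache.kafka=WARN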


    MountableFile logConfigFile = forHostPath(configDir.getPath("log.properties"));
    builder.configureContainers(container -> {
        if (isPrestoContainer(container.getLogicalName())) {

Member:

Would you like to rename isPrestoContainer to isTrinoContainer in a separate PR?

Member Author:

Created #14195

@kokosing merged commit 5dd95a9 into trinodb:master on Sep 19, 2022.
@kokosing (Member):

Thanks!
