
Fix RetryDestinationLimiter re-starting finished log contexts #12803

Merged
1 change: 1 addition & 0 deletions changelog.d/12803.bugfix
@@ -0,0 +1 @@
+Fix a long-standing bug where finished log contexts would be re-started when failing to contact remote homeservers.
6 changes: 4 additions & 2 deletions synapse/util/retryutils.py
@@ -16,8 +16,8 @@
 from types import TracebackType
 from typing import TYPE_CHECKING, Any, Optional, Type
 
-import synapse.logging.context
 from synapse.api.errors import CodeMessageException
+from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage import DataStore
 from synapse.util import Clock
 

@@ -265,4 +265,6 @@ async def store_retry_timings() -> None:
                 logger.exception("Failed to store destination_retry_timings")
 
         # we deliberately do this in the background.
-        synapse.logging.context.run_in_background(store_retry_timings)
+        run_as_background_process(
+            f"store_retry_timings-{self.destination}", store_retry_timings
+        )
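
For context, a rough sketch of the call path that reaches this code. The `get_retry_limiter` usage mirrors `synapse.util.retryutils`; the federation call inside the with-block is an assumed placeholder, not actual Synapse code:

```python
# Hypothetical caller of the limiter. Only get_retry_limiter and the
# async-with protocol come from synapse.util.retryutils; the transport
# call is an assumed placeholder for a federation request.
limiter = await get_retry_limiter(destination, clock, store)
async with limiter:
    await transport.send_transaction(txn)
# If the request fails, RetryDestinationLimiter.__aexit__ records the
# failure, updates the backoff state, and schedules store_retry_timings
# in the background: the call site changed by this PR.
```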
Contributor Author:

The difference between run_in_background and run_as_background_process is that the former inherits the current logcontext, while the latter creates an entirely new one with separate metrics.
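
A minimal, self-contained sketch of why that distinction matters here. This is an illustration only; Synapse's real logcontext machinery lives in `synapse.logging.context` and is considerably more involved:

```python
# Toy model of the bug this PR fixes: run_in_background-style scheduling
# inherits the caller's logcontext (even one that has already finished),
# while run_as_background_process-style scheduling makes a fresh one.
import asyncio


class LogContext:
    def __init__(self, name: str) -> None:
        self.name = name
        self.finished = False


async def store_timings(ctx: LogContext) -> None:
    # Background work runs "inside" whatever logcontext it was handed.
    if ctx.finished:
        print(f"BUG: re-started finished logcontext {ctx.name!r}")
    else:
        print(f"OK: running in live logcontext {ctx.name!r}")


async def main() -> None:
    caller_ctx = LogContext("federation-request")
    caller_ctx.finished = True  # the request has already completed

    # run_in_background semantics: inherit the caller's (finished) context.
    await store_timings(caller_ctx)

    # run_as_background_process semantics: a fresh, dedicated context whose
    # resource usage is tracked under its own name.
    await store_timings(LogContext("store_retry_timings"))


asyncio.run(main())
```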

Contributor Author:

Including the destination in the name of the background process might not be such a good idea. I think we'll end up with one time series per destination in Prometheus?
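
A hypothetical sketch of that cardinality concern, using `prometheus_client` directly. The metric and label names here are illustrative, not necessarily the exact ones Synapse exports:

```python
# Every distinct background-process name becomes a distinct label value,
# hence a distinct time series, on every background-process metric.
from prometheus_client import Counter

background_process_starts = Counter(
    "background_process_start_count",
    "Number of background processes started",
    ["name"],
)

for destination in ["matrix.org", "example.com", "chat.example.org"]:
    # One new time series per remote homeserver: unbounded cardinality on
    # a busy federating server that talks to thousands of destinations.
    background_process_starts.labels(
        name=f"store_retry_timings-{destination}"
    ).inc()

# Three counter series already, and the set only grows with traffic.
samples = list(background_process_starts.collect())[0].samples
print(len([s for s in samples if s.name.endswith("_total")]))  # -> 3
```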

Member:

Oh, true. We could possibly put them as labels or something, but that still seems unlikely to work well.
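
One way out along those lines (an assumption about a possible follow-up, not a change made in this PR) is to drop the destination from the process name entirely, keeping the metric cardinality constant:

```python
from synapse.metrics.background_process_metrics import run_as_background_process

# A constant process name keeps cardinality at one: all destinations share
# a single set of background-process series, while the destination can
# still appear in log lines emitted from inside store_retry_timings.
run_as_background_process("store_retry_timings", store_retry_timings)
```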