Binding without creating a new logger #53
Hello @mooncake4132.
Thanks for the reply! I thought about using it, but here is the use case I have in mind:

`entry.py`:

```python
from loguru import logger

from .utils import some_small_util_function

# Process loop
while True:
    logger.bind('request_id', 'xxx-xxx-xxx')
    some_small_util_function()
    ...
    logger.unbind('request_id')
```

`utils.py`:

```python
from loguru import logger

def some_small_util_function():
    logger.info('test')
```

If you want to keep the API minimalist that's absolutely fine. I can easily work around the issue through inheritance and having my own logger registry 😃
Thanks for the code samples, it helps a lot to visualize the problem and think about a proper solution!

I don't know what to think of this kind of contextual logging. On one hand I see that this can be super useful; on the other hand, I'm worried this will break as soon as you introduce multithreading. This looks too much like using global variables across modules in my opinion. If logs emitted by the utility functions need contextual data, it could be passed explicitly, like:

```python
class Utils:
    def __init__(self, request_id):
        self.logger = logger.bind(request_id=request_id)

    def some_small_util_function(self):
        self.logger.info("Test")
```

And use it like:

```python
# Process loop
while True:
    utils = Utils(request_id="xxx.xxx.xxx")
    utils.some_small_util_function()
```

I agree that this doesn't look perfect, as the function normally does not need the context except for logging, which is orthogonal to its actual purpose. How would you implement this using the standard `logging` module?
What you suggested is actually what we have today. However, as we have more and more modules and functions, we find ourselves passing loggers around way too often, so we want to move away from this model.

You do have a point about concurrent execution (i.e. threading or async tasks). Our solution is to use a proxy logger, which ensures each concurrent environment only accesses its own logger. This is what we came up with (some parts left out):

```python
import asyncio
import threading

import aiotask_context
import loguru

_tls = threading.local()

class LoggerProxy:
    def __getattr__(self, key):
        return getattr(self._get_context_logger(), key)

    def _get_context_logger(self):
        if asyncio._get_running_loop() is None:
            # Not inside an event loop: fall back to thread-local storage.
            if not getattr(_tls, 'proxy_logger', None):
                _tls.proxy_logger = loguru.logger.bind()
            return _tls.proxy_logger
        else:
            # Inside an event loop: store the logger on the current task.
            proxy_logger = aiotask_context.get('proxy_logger')
            if not proxy_logger:
                proxy_logger = loguru.logger.bind()
                aiotask_context.set('proxy_logger', proxy_logger)
            return proxy_logger

logger = LoggerProxy()
```

*aiotask_context from here
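The thread-local half of this proxy pattern can be demonstrated with the standard library alone; in this sketch `make_logger` is a stand-in for `loguru.logger.bind()`, since any per-context object illustrates the mechanism:

```python
import threading

_tls = threading.local()

def make_logger():
    # Stand-in for loguru.logger.bind(): any fresh per-context object works.
    return {"extra": {}}

def get_context_logger():
    """Each thread lazily creates and then reuses its own logger."""
    if getattr(_tls, "proxy_logger", None) is None:
        _tls.proxy_logger = make_logger()
    return _tls.proxy_logger

seen = []
def worker():
    seen.append(get_context_logger())

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert get_context_logger() is get_context_logger()  # stable within one thread
assert seen[0] is not seen[1]                        # distinct across threads
```

Because `threading.local` gives each thread an independent `proxy_logger` attribute, no locking is needed; the asyncio branch of the original proxy applies the same idea per task instead of per thread.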
Thanks once again for the code snippet, very instructive.

It's a complex problem, because in addition to the context per thread, a context for asyncio is also needed, as you pointed out. I don't know if and how I would implement this, but I will think about it, because as your use case demonstrates, it can be very useful and convenient.

Also, it may be worth looking at alternative libraries which seem to provide support for contextual logging across threads.
Yes, I've already secretly added it. Thanks for the discussion!
Thanks to you, as I said this is an interesting use case, so I will see if I can integrate it into Loguru elegantly. 🙂
Maybe open the issue/feature again? I ended up here desperately trying to achieve the same thing. I have the exact same problem @mooncake4132 had, and honestly I've had it for years while using Python loggers. The ugly workaround for me as well has been passing logger objects around, but that gets tedious fast.

The problem is really painful if you work in a microservices environment and pass contextual data (Request-ID or Trace-ID, user info, etc.) between services. With 100+ services, logging this context becomes essential: if the logs are collected in some central place (like an ELK stack), it's really convenient to search all of them via this contextual info.

@Delgan if you have an idea and willingness to implement this, I can try to help with a PR.
Hey @kerma! You are right, I left the issue closed because I did not yet have a precise idea about the solution to implement, but it always remained half-open in my mind. So, let's re-open it now. :)

I have not yet taken the time to think about it a lot. However, I feel it's possible to solve this issue as well as #72 and #108 by adding just one new method to Loguru's API. I thought of something like a `context()` method.

For example, would it be ok if you had to use it like this in your inner functions?

```python
def some_small_util_function():
    logger.context("foobar").info('test')
```
Hey @Delgan, thanks for the reopen. Ideally I'd like to use the logger in a way that "it just works".

Given that `Someclass` also uses the logger in its `other()` function, the output could be something like:

I guess the question is: what happens if I call the logger with context when the context is not yet set? Ideally this should just work as well, but we'd need some mechanism for debugging.
Hi @Delgan, here is a snippet of what I have in mind:
Ok, sorry it took so long!

So, taking into consideration what the three of you advised, I think a very simple `contextualize()` method could do the trick:

```python
def other():
    logger.info("Calling other()")

with logger.contextualize(id=123):
    other()
```

There would be no need to re-call any method from inside `other()`. As you suggested, @superzarzar, the new method will be usable as a context manager to automatically set and unset the values. It's common sense, and very elegant too.
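For the curious, a context manager with this shape can be sketched on top of the standard library's `contextvars` module (illustrative only, not Loguru's actual code); the variable's default value also answers the earlier question about logging before any context is set, since logs then simply carry no extra values:

```python
from contextlib import contextmanager
from contextvars import ContextVar

# Default {} covers "context not yet set": logs just carry no extra fields.
_context = ContextVar("context", default={})

@contextmanager
def contextualize(**kwargs):
    """Temporarily merge kwargs into the ambient logging context."""
    token = _context.set({**_context.get(), **kwargs})
    try:
        yield
    finally:
        _context.reset(token)

def info(message):
    # A real sink would format the extra fields; here we just return them.
    return (_context.get(), message)

assert info("no context") == ({}, "no context")
with contextualize(id=123):
    assert info("inside") == ({"id": 123}, "inside")
assert info("after") == ({}, "after")
```

Because `ContextVar` values are scoped per thread and per asyncio task, this single mechanism covers both concurrency models that the proxy-logger workaround above had to handle separately.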
Fixed. I will close this issue once the next Loguru release is published.
I'm wondering if we can have new APIs that can bind and unbind structured arguments in place (i.e. without creating a new logger). Maybe an `add_extra` and a `remove_extra` method? I also think it'll be useful if `Logger._extra` can be promoted to a public member for viewing (a getter property would suffice).

The motivation here is that I want to be able to do `from loguru import logger` from all my modules and get the structured arguments I've set elsewhere. I can work around this issue by keeping my own logger registry or adding values to `_extra` directly, but I think it'll be nice if this can be officially supported.
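A mutable-extra wrapper along the lines proposed here could look like the following toy sketch; the class and its methods are hypothetical illustrations of the proposal, not Loguru's API:

```python
class ExtraLogger:
    """Toy logger whose structured context can be mutated in place."""
    def __init__(self):
        self._extra = {}

    @property
    def extra(self):
        # Public read-only view, per the getter-property suggestion.
        return dict(self._extra)

    def add_extra(self, **kwargs):
        """Bind structured arguments without creating a new logger."""
        self._extra.update(kwargs)

    def remove_extra(self, *keys):
        """Unbind arguments in place; missing keys are ignored."""
        for key in keys:
            self._extra.pop(key, None)

    def info(self, message):
        # A real sink would format the extra fields; here we return them.
        return (self.extra, message)

logger = ExtraLogger()
logger.add_extra(request_id="xxx")
assert logger.info("hi") == ({"request_id": "xxx"}, "hi")
logger.remove_extra("request_id")
assert logger.extra == {}
```

Because the single `logger` instance is mutated rather than copied, every module that imports it sees the same structured arguments, which is exactly the behavior the issue asks for.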