The logging module is Python's standard tool for logging. It implements a hierarchy of loggers and supports severity levels: DEBUG, INFO, WARNING, ERROR, CRITICAL. Used properly, it gives centralized control over output: writing logs to files, sending them by email, filtering by level, and so on.
The main idea is to create a named logger (logging.getLogger(__name__)) in each module rather than configuring the root logger anew in every file. Configuration (format, handlers, level) is done centrally at application startup.
Example configuration:
import logging

logging.basicConfig(format='%(levelname)s:%(name)s:%(message)s', level=logging.INFO)
logger = logging.getLogger(__name__)

def foo():
    logger.info('Informational message')
    logger.error('Error!')

foo()
Why should logging.basicConfig() not be called in every module? What happens if this is done?
Answer: logging.basicConfig() configures the root logger only if it does not yet have handlers; once the root logger has been initialized, subsequent calls are silently ignored. So if different modules each call basicConfig() with their own format/level, only whichever call runs first takes effect. (Since Python 3.8, passing force=True removes the existing handlers and reapplies the configuration, but scattering such calls across modules is still bad practice.)
Story
In a large project, every developer configured logging their own way through basicConfig and local handlers. As a result, some logs never appeared at all, others were duplicated ten times over, and messages from different modules could not be collected into a single file.
Story
While migrating a web service to multi-level logging, the team forgot to create named loggers with getLogger(__name__) and used the root logger everywhere. As a result, it was impossible to tell which module a given log record came from.
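A short sketch of the difference ('payments.api' stands in for a real __name__):

```python
import logging

logging.basicConfig(format='%(levelname)s:%(name)s:%(message)s', level=logging.INFO)

# With a named logger, %(name)s in the format reveals the source module.
named = logging.getLogger('payments.api')    # in a real module: logging.getLogger(__name__)
named.info('charge accepted')                # INFO:payments.api:charge accepted

# With the root logger, every record is attributed to "root".
logging.getLogger().info('charge accepted')  # INFO:root:charge accepted
```

Since __name__ is the dotted module path, named loggers also slot into the logger hierarchy for free, so levels and handlers can later be tuned per package.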
Story
The team logged every message, even non-errors, with logger.error(). As a result, automated monitoring systems constantly raised alarms over an apparently high error rate, when the messages were in fact just debug/informational output.
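A sketch of picking levels by meaning, with a small collecting handler so the level split is visible (the logger name and process function are hypothetical):

```python
import logging

records = []

class ListHandler(logging.Handler):
    # Collects records so we can inspect which levels were actually used.
    def emit(self, record):
        records.append(record)

logger = logging.getLogger('worker')  # hypothetical logger name
logger.setLevel(logging.DEBUG)
logger.addHandler(ListHandler())
logger.propagate = False

def process(item, ok):
    logger.debug('processing %r', item)               # diagnostic detail
    if ok:
        logger.info('processed %r', item)             # normal operation
    else:
        logger.error('failed to process %r', item)    # genuine failure only

process('a', ok=True)
process('b', ok=False)

errors = [r for r in records if r.levelno >= logging.ERROR]
print(len(errors))  # 1 — monitoring sees only the real failure
```

With levels used this way, a monitor that alerts on ERROR and above fires once for the actual failure instead of on every message.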