Python Logging – The Right Way
Logging is one of the most important (and most underused) tools in a serious Python developer’s toolbox.
`print()` is great for debugging, but once your application goes to production, you need structured, timestamped, level-aware, configurable logs — not console spam.
In this practical guide we’ll cover everything you need to know about the logging module in modern Python (3.8–3.12+):
- Logging levels – what they really mean
- Basic implementation
- Writing to file (overwrite vs append)
- Adding proper timestamps
- Capturing exceptions automatically
- Why and how to create your own customized logger
- Best-practice features for production-ready loggers
1. Logging Levels – Quick Reference
| Level | Numeric Value | When to use | Typical console visibility |
|---|---|---|---|
| CRITICAL | 50 | Application is unusable / major failure | Always |
| ERROR | 40 | Serious problem – feature broken | Always |
| WARNING | 30 | Something unexpected, but the program continues | Usually |
| INFO | 20 | Normal operation milestones | Development/staging |
| DEBUG | 10 | Detailed diagnostic information | Only when debugging |
| NOTSET | 0 | Placeholder / inherit parent level | — |
Golden rule 2026:
- DEBUG → developers only
- INFO → important business events
- WARNING → something smells bad but we survived
- ERROR → user-visible problem or broken feature
- CRITICAL → wake someone up at 3 AM
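To see how the numeric values gate records, here is a small self-contained sketch (the logger name `demo.levels` and the in-memory buffer are just for illustration): a logger set to WARNING (30) drops everything below that threshold.

```python
import io
import logging

logger = logging.getLogger("demo.levels")
logger.setLevel(logging.WARNING)   # numeric 30: only WARNING and above pass
logger.propagate = False           # keep output out of the root logger

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter('%(levelname)s:%(message)s'))
logger.addHandler(handler)

logger.info("filtered out")        # 20 < 30 → dropped
logger.warning("recorded")         # 30 >= 30 → passes
logger.error("also recorded")      # 40 >= 30 → passes

assert "filtered out" not in buffer.getvalue()
assert "WARNING:recorded" in buffer.getvalue()
```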
2. Basic Logging Implementation
```python
import logging

# Option A: quick & dirty (not recommended for production)
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s | %(levelname)-8s | %(message)s'
)

logging.debug("This won't appear")
logging.info("Application started")
logging.warning("Low disk space")
logging.error("Failed to connect to database")
logging.critical("Cache server down – emergency!")
```
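One easy-to-miss detail: `basicConfig()` is effectively one-shot. Once the root logger has handlers, later calls are silently ignored — unless you pass `force=True`, which is available since Python 3.8. A minimal sketch:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logging.basicConfig(level=logging.DEBUG)   # silently ignored: root already has a handler
assert logging.getLogger().level == logging.WARNING

# force=True removes the existing root handlers and reconfigures from scratch
logging.basicConfig(level=logging.DEBUG, force=True)
assert logging.getLogger().level == logging.DEBUG
```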
3. Writing to a Log File (Overwrite vs Append)
```python
import logging

# Overwrite mode (creates a new file every run)
logging.basicConfig(
    filename='app-overwrite.log',
    filemode='w',        # ← overwrite
    level=logging.DEBUG,
    format='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
```

```python
# Append mode (most common in production)
logging.basicConfig(
    filename='app-append.log',
    filemode='a',        # ← append (default when filename is given)
    ...
)
```
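In production, a plain append-mode file grows without bound. A `RotatingFileHandler` from `logging.handlers` caps the size and keeps numbered backups. A minimal sketch using a temporary directory (a real app would use a fixed path such as `logs/app.log`):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_path = os.path.join(tempfile.mkdtemp(), "app.log")

# Rotate once the file exceeds ~512 bytes; keep at most 3 old files
handler = RotatingFileHandler(log_path, maxBytes=512, backupCount=3, encoding="utf-8")
handler.setFormatter(logging.Formatter('%(asctime)s [%(levelname)s] %(message)s'))

logger = logging.getLogger("demo.rotation")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(100):
    logger.info("message number %d", i)

# Rotation produced app.log plus numbered backups (app.log.1, app.log.2, ...)
files = sorted(os.listdir(os.path.dirname(log_path)))
assert "app.log" in files and len(files) > 1
```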
4. Clean Timestamp Format
Recommended production format (ISO-ish + milliseconds):
```python
format='%(asctime)s.%(msecs)03d | %(levelname)-8s | %(name)s | %(message)s',
datefmt='%Y-%m-%d %H:%M:%S'
```

Result example:

```
2026-06-15 14:37:22.145 | INFO     | main    | Starting background task
2026-06-15 14:37:22.189 | ERROR    | payment | Stripe API timeout
```
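The format above can be verified with a small self-contained sketch (the logger name `demo.timestamps` is arbitrary) that checks the millisecond timestamp shape in an in-memory buffer:

```python
import io
import logging
import re

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter(
    fmt='%(asctime)s.%(msecs)03d | %(levelname)-8s | %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
))

logger = logging.getLogger("demo.timestamps")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)
logger.info("starting up")

line = buffer.getvalue().strip()
# Shape: "YYYY-MM-DD HH:MM:SS.mmm | INFO     | starting up"
assert re.match(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3} \| INFO', line)
```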
5. Logging Exceptions Automatically
```python
import logging

try:
    1 / 0
except Exception:
    logging.exception("Critical division error occurred")  # ← best method
    # Alternative: logging.error with exc_info=True also captures the full stack trace
    logging.error("Something bad happened", exc_info=True)
```

`logging.exception()` automatically logs:
- level = ERROR
- the full traceback
- the exception message
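A self-contained sketch confirming that `logger.exception()` really records the traceback (the handler writes to an in-memory buffer so the output can be inspected; the logger name `demo.exc` is arbitrary):

```python
import io
import logging

buffer = io.StringIO()
logger = logging.getLogger("demo.exc")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(logging.StreamHandler(buffer))

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Critical division error occurred")

output = buffer.getvalue()
assert "Critical division error occurred" in output
assert "Traceback" in output            # full traceback included
assert "ZeroDivisionError" in output    # exception message included
```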
6. Why You Almost Always Need Your Own Logger
logging.basicConfig() is great for scripts, but terrible for:
- libraries
- large applications
- microservices
- multi-module projects
Problems with root logger:
- Global configuration → conflicts between modules
- Hard to disable third-party library logs
- No hierarchy control
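With named loggers, the dotted-name hierarchy addresses both problems: you can quiet a chatty third-party library by name, and configure your own package once at the top of its hierarchy. A sketch (`myapp` and `urllib3` are example names; any dotted logger name behaves the same way):

```python
import logging

# Quiet a third-party library without touching your own logs
logging.getLogger("urllib3").setLevel(logging.WARNING)

# Configure the package root once; children inherit through the dotted name
app_logger = logging.getLogger("myapp")
app_logger.setLevel(logging.INFO)
db_logger = logging.getLogger("myapp.db")

assert db_logger.parent is app_logger                   # hierarchy via dotted names
assert db_logger.getEffectiveLevel() == logging.INFO    # inherited from "myapp"
```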
7. Production-Ready Custom Logger (Recommended Pattern)
```python
from __future__ import annotations  # allows `str | Path | None` on Python 3.8/3.9

import logging
import sys
from pathlib import Path


def get_logger(name: str, log_file: str | Path | None = None) -> logging.Logger:
    logger = logging.getLogger(name)

    # Prevent duplicate handlers if this logger is already configured
    if logger.handlers:
        return logger

    logger.setLevel(logging.DEBUG)  # lowest level we'll ever log

    # Formatter
    formatter = logging.Formatter(
        fmt='%(asctime)s.%(msecs)03d | %(levelname)-8s | %(name)-20s | %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )

    # Console handler
    console = logging.StreamHandler(sys.stdout)
    console.setLevel(logging.INFO)
    console.setFormatter(formatter)
    logger.addHandler(console)

    # File handler (optional)
    if log_file:
        file_handler = logging.FileHandler(log_file, mode='a', encoding='utf-8')
        file_handler.setLevel(logging.DEBUG)
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)

    return logger
```

```python
# ------------------ Usage ------------------
# In any module
logger = get_logger(__name__, log_file="logs/app.log")

logger.debug("Detailed debug info (only in the file)")
logger.info("User logged in: %s", username)
logger.warning("High latency detected: %.2f ms", latency)

try:
    connect_to_db()
except DatabaseError:
    logger.exception("Database connection failed")
```
Recommended Features for a Customized Logger
- Named logger per module → `logging.getLogger(__name__)`
- Separate levels for console & file
- `RotatingFileHandler` / `TimedRotatingFileHandler`
- Contextual information (user_id, request_id, correlation_id)
- JSON formatter for structured logging (ELK, Datadog, etc.)
- Async handlers (`logging.handlers.QueueHandler`) in high-throughput apps
- Central configuration (usually via `dictConfig` or `fileConfig`)
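For central configuration, `logging.config.dictConfig` describes the whole setup declaratively in one place. A minimal sketch (the handler and formatter names `console` and `standard`, and the logger name `myapp`, are arbitrary labels):

```python
import logging
from logging.config import dictConfig

dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s | %(levelname)-8s | %(name)s | %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "standard",
        },
    },
    "loggers": {
        # Your application's logger: DEBUG level, own handler, no propagation
        "myapp": {"level": "DEBUG", "handlers": ["console"], "propagate": False},
    },
})

logger = logging.getLogger("myapp")
assert logger.level == logging.DEBUG
assert logger.handlers  # handler attached by dictConfig
```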
Quick Summary Table – When to Use Which Level
| You want to log... | Use level |
|---|---|
| Very detailed tracing | DEBUG |
| Application lifecycle events | INFO |
| Recoverable strange situations | WARNING |
| Feature broken / user affected | ERROR |
| System unusable / panic | CRITICAL |
Final Thoughts – 2026 Style
- Never use `print()` in production code
- Always use named loggers (`__name__`)
- Log exceptions with `logger.exception()`
- Separate DEBUG to file, INFO+ to console
- Consider structured (JSON) logging for observability platforms
- Rotate logs or send them directly to a log management system
Good logging saves hours (or days) of debugging. Invest five minutes now — thank yourself later.
Happy logging!
