Monitoring, Logging, & Backups
This document provides an overview of the logging, monitoring, and backup strategies for the Merq backend.
Logging

Framework

The backend uses Zap, a blazingly fast, structured logging library for Go. All logging is configured in pkg/customlog/customlog.go.
Configuration

- Development Mode: Logs are human-readable, colorized, and printed to the console (zap.NewDevelopmentConfig()).
- Production Mode: Logs are structured as JSON (zap.NewProductionConfig()), which is ideal for ingestion by log management systems.
- Timestamp: Timestamps are formatted using the ISO 8601 standard (e.g., 2026-01-09T...).
- Log Level: The log level (e.g., INFO, ERROR) is capitalized for clarity.
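Taken together, these settings might be wired up as in the following sketch. The `newLogger` helper and its `production` flag are illustrative assumptions, not the actual customlog API:

```go
package customlog

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// newLogger is a hypothetical helper showing how the options above can be
// combined; the real customlog package may structure this differently.
func newLogger(production bool) (*zap.Logger, error) {
	var cfg zap.Config
	if production {
		cfg = zap.NewProductionConfig()                              // structured JSON output
		cfg.EncoderConfig.EncodeLevel = zapcore.CapitalLevelEncoder  // "INFO", "ERROR"
	} else {
		cfg = zap.NewDevelopmentConfig()                                  // human-readable console output
		cfg.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder  // colorized levels
	}
	cfg.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder // ISO 8601 timestamps
	return cfg.Build()
}
```

Zap's `EncoderConfig` is the single place where the timestamp and level formatting described above are applied, regardless of whether the console or JSON encoder is in use.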
The configured Zap logger is injected into the service layer, where it is used to record important events, errors, and business operations.
An example from a service:

```go
// Example from a service
s.logger.Error("failed to process widget", zap.Error(err), zap.String("widget_id", id))
```

This produces a structured JSON log entry:

```json
{
  "level": "ERROR",
  "timestamp": "2026-03-02T10:30:00.123Z",
  "caller": "service/widget_service.go:42",
  "msg": "failed to process widget",
  "widget_id": "w_12345",
  "error": "database connection lost"
}
```

Monitoring
The primary monitoring tools are those provided by the deployment environment (likely Dokploy).
- Real-time Logs: The deployment platform’s UI is used to stream logs from the running containers.
- Health Checks: The backend exposes two health check endpoints for monitoring by uptime checkers or load balancers:
  - GET /api/health: A simple liveness endpoint that confirms the server is running.
  - GET /api/ready: A readiness probe that may also check dependencies such as the database connection.
- Metrics: The deployment platform’s dashboard provides basic metrics on CPU, memory, and network usage.
Backups
Strategy

The backend implements a pre-migration backup strategy. Before any database migrations are applied, the entrypoint.sh script automatically creates a backup of the PostgreSQL database.
Implementation

- Tool: The backup is created using the standard PostgreSQL pg_dump utility, which is installed in the Docker image.
- Trigger: The backup process runs at the start of the container's lifecycle, immediately before migrations are applied.
- Storage: Backups are written to the /backups directory inside the container. This directory is mapped to a host volume (./backups:/backups in docker-compose.yml), so the backup files persist on the host machine even if the container is removed.
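The relevant portion of entrypoint.sh might look roughly like this sketch. The variable names, migration command, and server binary are assumptions, not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of the pre-migration backup step in entrypoint.sh.
# DB_HOST, DB_USER, DB_NAME, ./migrate, and ./server are assumed names.
set -e

BACKUP_DIR=/backups
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/pre_migration_$TIMESTAMP.sql"

mkdir -p "$BACKUP_DIR"

# Snapshot the database before touching the schema. With `set -e`, a
# failed dump aborts the container start instead of migrating anyway.
pg_dump -h "$DB_HOST" -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"
echo "backup written to $BACKUP_FILE"

# Only after a successful dump do migrations run and the server start.
./migrate up
exec ./server
```

Because the script is the container's entrypoint, every deployment that includes a migration automatically leaves a timestamped dump in the host-mounted ./backups directory.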
This strategy ensures that there is always a safe recovery point before making any changes to the database schema, minimizing the risk of data loss during deployments.
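Restoring from one of these recovery points is a single psql invocation against the most recent dump. The compose service name and credentials below are assumptions:

```shell
# Hypothetical recovery: feed the newest pre-migration dump back into
# the Postgres container ("postgres", $DB_USER, $DB_NAME are assumed).
LATEST=$(ls -t ./backups/*.sql | head -n 1)
docker compose exec -T postgres psql -U "$DB_USER" "$DB_NAME" < "$LATEST"
```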