Log Configuration & Processing Learning Path
Follow this curated learning path to effectively collect, structure, and optimize logs using Datadog Log Management.
In these hands-on courses, you’ll learn to configure logs for collection, build pipelines for processing logs, and apply rules to keep your data accurate, secure, and actionable. You’ll also practice indexing strategies to reduce costs, monitor usage, and streamline log ingestion workflows.
This path is designed for engineers, DevOps practitioners, and others responsible for configuring log sources, building pipelines and applying processing rules at scale, and managing log ingestion and indexing.
You’ll learn how to do the following:
Configure Log Collection for a Containerized Application
Learn how to set up logging and log ingestion for an app built with Ruby and Python services in a Docker environment. The concepts apply to other languages, frameworks, and environments.
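As a sketch of what this setup involves in practice: log collection is enabled in the Agent configuration, and each container's logs are tagged via Datadog Autodiscovery labels. The service names, image names, and label values below are illustrative assumptions, not values from the course:

```yaml
# datadog.yaml (Agent) -- turn on log collection; optionally collect
# logs from every container on the host.
logs_enabled: true
logs_config:
  container_collect_all: true

# docker-compose.yml (fragment) -- tag each container's logs with a
# source and service via Datadog Autodiscovery labels.
services:
  frontend:
    image: example/store-frontend:latest   # hypothetical Ruby service
    labels:
      com.datadoghq.ad.logs: '[{"source": "ruby", "service": "store-frontend"}]'
  discounts:
    image: example/discounts:latest        # hypothetical Python service
    labels:
      com.datadoghq.ad.logs: '[{"source": "python", "service": "discounts"}]'
```

The `source` value determines which Integration Pipeline processes the logs, while `service` links them to traces and metrics from the same service.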
Process Logs Out of the Box with Integration Pipelines
Structure and enrich ingested logs from common sources using out-of-the-box and modified Integration Pipelines.
Build and Manage Log Pipelines
Build a log pipeline from scratch. Use the Pipeline Scanner to verify that the pipeline processes logs as expected. Add a Standard Attribute to normalize a commonly used attribute across your logs.
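For illustration, a pipeline like the one built in this course can also be expressed through the Logs Pipelines API; the pipeline name, filter query, and grok rule below are hypothetical:

```json
{
  "name": "Ruby store-frontend pipeline",
  "is_enabled": true,
  "filter": { "query": "source:ruby service:store-frontend" },
  "processors": [
    {
      "type": "grok-parser",
      "name": "Parse access log line",
      "is_enabled": true,
      "source": "message",
      "grok": {
        "support_rules": "",
        "match_rules": "access %{ip:network.client.ip} %{word:http.method} %{notSpace:http.url} %{number:http.status_code}"
      }
    }
  ]
}
```

The grok parser extracts structured attributes (client IP, method, URL, status code) from the raw `message`, which is what makes the logs searchable and usable by facets downstream.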
Advanced Log Configuration
NEW! Configure your logs to filter out sensitive data, aggregate multi-line logs correctly, and apply global processing rules to get the most out of your logs. Solve common log configuration problems for a production application.
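Each of these techniques maps to an Agent processing rule. A minimal sketch, assuming a Ruby container whose log entries start with a date and may contain card-like numbers (the service name, patterns, and endpoint are illustrative):

```yaml
# conf.d/ruby.d/conf.yaml (fragment) -- per-integration rules
logs:
  - type: docker
    source: ruby
    service: store-frontend            # hypothetical service name
    log_processing_rules:
      # Redact card-like number sequences before logs leave the host.
      - type: mask_sequences
        name: mask_credit_cards
        replace_placeholder: "[masked_card]"
        pattern: (?:\d{4}[-\s]?){3}\d{4}
      # Treat lines that do not start with a date as continuations of
      # the previous log (e.g., multi-line stack traces).
      - type: multi_line
        name: log_starts_with_date
        pattern: \d{4}-(0?[1-9]|1[012])-

# datadog.yaml -- global rules apply to all logs the Agent collects.
logs_config:
  processing_rules:
    - type: exclude_at_match
      name: drop_healthchecks
      pattern: GET /healthz            # hypothetical endpoint
```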
Manage and Monitor Indexed Log Volumes
Control and track the retention and volume of indexed logs to manage costs using Indexes with Exclusion Filters, Logs Monitors, and the Log Management - Estimated Usage dashboard. Learn how Flex Logs compares with Standard Indexing.
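As an illustration of Indexes with Exclusion Filters: an index can drop or sample low-value logs before they count toward indexed volume. This JSON sketch follows the shape of the Logs Indexes API, with hypothetical names and queries (`sample_rate` is the fraction of matching logs to exclude):

```json
{
  "name": "main",
  "num_retention_days": 15,
  "filter": { "query": "*" },
  "exclusion_filters": [
    {
      "name": "Drop all debug logs",
      "is_enabled": true,
      "filter": { "query": "status:debug", "sample_rate": 1.0 }
    },
    {
      "name": "Sample out 90% of successful health checks",
      "is_enabled": true,
      "filter": { "query": "@http.url_details.path:/healthz status:ok", "sample_rate": 0.9 }
    }
  ]
}
```

Excluded logs are still ingested and visible in Live Tail, so exclusion filters reduce indexing costs without losing the ability to inspect traffic in real time.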