Learning Objectives

  • Enable LLM Observability in your LLM application using auto-instrumentation with the OpenAI Python client 
  • Explore trace data containing the inputs and outputs of LLM calls and other relevant metadata
  • Observe the performance of the LLM in the context of your application using metrics for latency, token usage, and errors
  • Identify errors in LLM applications and track down root causes using observability data
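The first objective, enabling LLM Observability through auto-instrumentation, typically requires configuration rather than code changes. As a minimal sketch only (the exact environment variable names, the `my-llm-app` application name, and `app.py` are assumptions; consult the Datadog LLM Observability docs for your Datadog site and SDK version), running a Python application that uses the OpenAI client might look like:

```shell
# Install the Datadog tracing library alongside the OpenAI client
pip install ddtrace openai

# Enable LLM Observability via environment variables and launch the app
# under ddtrace-run so calls made with the OpenAI client are
# auto-instrumented and sent to Datadog as traces.
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-llm-app \
DD_API_KEY=<your-datadog-api-key> \
DD_SITE=datadoghq.com \
ddtrace-run python app.py
```

With this in place, the inputs, outputs, latency, and token usage of each LLM call should appear in the LLM Observability views explored later in the course.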

Primary Audience

This course is designed for professionals new to building and monitoring LLM applications. Software developers, AI engineers, product or engineering managers, decision makers, or anyone who wants to gain an understanding of how observability works in LLM applications can benefit from this course.

Prerequisites

Basic familiarity with the Datadog UI is required. If you are new to Datadog, we recommend completing the following course before starting this one:

- Datadog Quick Start

If you are new to observability, the following course is also recommended:

- Introduction to Observability

Technical Requirements

To complete the course, you will need the following:

- Chrome or Firefox

- Third-party cookies must be enabled to access labs

Course Navigation

At the bottom of each lesson, click the MARK LESSON COMPLETE AND CONTINUE button so that each lesson is marked complete and you can receive the certificate at the end of the course.

Course Enrollment Period

Please note that your enrollment in this course ends after 30 days. You can re-enroll at any time and pick up where you left off.

Course Curriculum

    1. Welcome to Getting Started with LLM Observability

    2. Introduction to LLM Observability

    3. Explore LLM Observability

    4. Lab: Getting Started with LLM Observability

    5. Summary

    6. Feedback Survey

Getting Started with LLM Observability

  • 1 hour to complete
  • Beginner