Centralised Logging for Claude Code

Every Claude Code session produces a detailed transcript: what files it read, what changes it made, what commands it ran, and what errors it encountered. When you have one developer, these logs live on their machine. When you have a team, you need them in one place.

Why centralised logging matters

  • Visibility — Understand what your AI agents are doing across the organisation, not just within individual sessions.
  • Debugging — When something goes wrong, trace back through the session that caused it. Which file was modified? What command produced the error? What was the agent trying to do?
  • Audit — In regulated environments, you need proof of what happened and when. Session logs provide a complete audit trail.
  • Optimisation — Identify patterns: which tasks consume the most tokens? Which skills fail most often? Where do agents get stuck?

What to log

Claude Code session logs contain rich structured data. The most valuable fields to capture:

  • Session metadata — Who started it, when, which project, which branch.
  • Tool calls — Every file read, file write, bash command, and search operation.
  • Token usage — Input and output tokens per turn. This is your cost data.
  • Errors and retries — When Claude Code hits an error and how it recovers (or doesn't).
  • Final outcome — Did the task succeed? Was a PR created? Were tests passing?
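The fields above map naturally onto a structured record per session. A minimal sketch of such a schema, assuming one JSON document per session (field names here are illustrative, not Claude Code's actual transcript format):

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SessionRecord:
    # Hypothetical schema: field names are illustrative,
    # not Claude Code's native transcript format.
    session_id: str
    user: str
    project: str
    branch: str
    started_at: str          # ISO 8601 timestamp
    input_tokens: int = 0    # your cost data
    output_tokens: int = 0
    tool_calls: list = field(default_factory=list)   # file reads/writes, bash, search
    errors: list = field(default_factory=list)
    outcome: str = "unknown"  # e.g. "success", "error", "abandoned"

def to_json(record: SessionRecord) -> str:
    """Serialise a record for storage in your log store."""
    return json.dumps(asdict(record))
```

Keeping the record flat and append-only makes it easy to ship to any of the backends discussed later.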

Collection patterns

File-based collection

The simplest approach: configure Claude Code to write session transcripts to a shared network drive or cloud storage bucket. Each session creates a JSON file with a unique ID. This works for small teams and requires no additional infrastructure.
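A periodic sync script is enough to implement this. A minimal sketch, assuming transcripts land in a local directory as JSON files named by session ID and the shared store is a mounted drive (both paths are hypothetical):

```python
import shutil
from pathlib import Path

def collect_transcripts(local_dir: Path, shared_dir: Path) -> int:
    """Copy any transcript not yet in the shared store; return count copied.

    Idempotent: the unique session ID in the filename means re-runs
    skip everything already collected.
    """
    shared_dir.mkdir(parents=True, exist_ok=True)
    copied = 0
    for transcript in local_dir.glob("*.json"):
        dest = shared_dir / transcript.name
        if not dest.exists():
            shutil.copy2(transcript, dest)
            copied += 1
    return copied
```

Run it from cron or a session-end hook; for a cloud bucket, swap the copy for your storage client's upload call.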

Webhook-based collection

For real-time visibility, use hooks to send session events to a webhook endpoint. This can feed into your existing logging infrastructure (ELK stack, Datadog, Cloud Logging).
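A hook script for this can be very small. A sketch, assuming the hook receives its event as JSON on stdin (check your version's hook documentation) and posts to a hypothetical internal endpoint:

```python
import json
import sys
import urllib.request

# Hypothetical endpoint: point this at your ELK, Datadog, or
# Cloud Logging ingestion URL.
WEBHOOK_URL = "https://logs.example.com/claude-events"

def build_request(event: dict, url: str = WEBHOOK_URL) -> urllib.request.Request:
    """Wrap a hook event as a JSON POST to the logging endpoint."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Assumption: the hook event arrives as JSON on stdin.
    event = json.load(sys.stdin)
    urllib.request.urlopen(build_request(event), timeout=5)
```

Keep the timeout short and the script non-blocking so a slow logging backend never stalls the agent itself.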

Git-based collection

Store session summaries alongside your code. After each session, Claude Code commits a summary to a .claude/logs/ directory. This keeps logs version-controlled and co-located with the code they affected.
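The session-end step can be scripted in a few lines. A sketch, assuming a summary string is available at the end of the session and the repository is a normal git working tree (the file layout and commit message are illustrative):

```python
import subprocess
from pathlib import Path

def write_summary(repo: Path, session_id: str, summary: str) -> Path:
    """Write a session summary under .claude/logs/ (layout is illustrative)."""
    log_dir = repo / ".claude" / "logs"
    log_dir.mkdir(parents=True, exist_ok=True)
    log_file = log_dir / f"{session_id}.md"
    log_file.write_text(summary)
    return log_file

def commit_summary(repo: Path, log_file: Path, session_id: str) -> None:
    """Stage and commit the summary so it travels with the code it describes."""
    subprocess.run(["git", "-C", str(repo), "add", str(log_file)], check=True)
    subprocess.run(
        ["git", "-C", str(repo), "commit", "-m", f"chore: log session {session_id}"],
        check=True,
    )
```

One commit per session keeps `git log .claude/logs/` readable as a chronological session history.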

Analysis and alerting

Raw logs are useful for debugging. For operational intelligence, you need analysis:

  • Daily digest — Summarise: how many sessions, total cost, tasks completed, error rate.
  • Cost alerts — Trigger when daily or weekly spend exceeds thresholds.
  • Error patterns — Detect recurring failures that indicate a skill needs updating or a workflow is broken.
  • Session duration outliers — Flag sessions that run much longer than typical. These often indicate the agent is stuck or the task is poorly defined.
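All four checks can share one aggregation pass over a day's records. A sketch, assuming session records are dicts with illustrative `cost_usd`, `outcome`, and `duration_s` fields and a hypothetical spend threshold:

```python
def daily_digest(sessions: list[dict], cost_threshold: float = 50.0) -> dict:
    """Summarise one day of session records.

    Schema is illustrative: each dict is assumed to carry
    'cost_usd', 'outcome', and 'duration_s'.
    """
    total_cost = sum(s.get("cost_usd", 0.0) for s in sessions)
    errors = sum(1 for s in sessions if s.get("outcome") == "error")
    durations = sorted(s.get("duration_s", 0) for s in sessions)
    median = durations[len(durations) // 2] if durations else 0
    return {
        "sessions": len(sessions),
        "total_cost_usd": round(total_cost, 2),
        "error_rate": errors / len(sessions) if sessions else 0.0,
        "median_duration_s": median,
        # Hook a pager or Slack message to this flag.
        "cost_alert": total_cost > cost_threshold,
        # Sessions far beyond the median often mean a stuck agent
        # or a poorly defined task. The 3x cutoff is arbitrary; tune it.
        "duration_outliers": [
            s for s in sessions
            if median and s.get("duration_s", 0) > 3 * median
        ],
    }
```

For error patterns specifically, group the error records by message or tool name over a longer window; recurring clusters point at the skill or workflow that needs fixing.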

Privacy considerations

Session logs contain your source code and potentially sensitive information:

  • Store logs with the same security controls as your source code
  • Scrub secrets and credentials from logs before storage
  • Set retention policies — a six-month-old session rarely needs its full transcript retained; keep summaries, drop the detail
  • Control access — not everyone needs to see every team member's session history
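Scrubbing should happen before a log ever reaches storage. A minimal sketch of a redaction pass; the patterns below are illustrative starters, not a complete credential catalogue, and should be extended with your organisation's own formats:

```python
import re

# Illustrative patterns only; extend with your organisation's
# credential and token formats.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"
    ),
]

def scrub(text: str) -> str:
    """Replace anything matching a secret pattern before the log is stored."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Regex scrubbing is a backstop, not a guarantee; pair it with secret-scanning tooling and treat the log store itself as sensitive regardless.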

Tools and infrastructure

You don't need to build a logging platform from scratch. Existing tools work well:

  • Google Cloud Logging — If you're on GCP, ship logs to Cloud Logging and use Log Analytics for queries.
  • Firestore — Store session summaries as documents. Good for dashboards and queries.
  • BigQuery — For large-scale analysis across thousands of sessions. Query token usage, cost trends, and error patterns.
  • Simple dashboard — A basic web page that reads from your log store and shows today's sessions, costs, and status. Often more useful than a complex analytics platform.
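The "simple dashboard" option really can be this simple. A sketch using only the standard library, assuming one JSON file per session in a hypothetical shared log directory with the illustrative fields used earlier:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

LOG_DIR = Path("/var/claude-logs")  # hypothetical shared log location

def render_dashboard(log_dir: Path) -> str:
    """Build a minimal HTML table from one JSON file per session."""
    rows = []
    for f in sorted(log_dir.glob("*.json")):
        s = json.loads(f.read_text())
        rows.append(
            f"<tr><td>{s.get('session_id', f.stem)}</td>"
            f"<td>{s.get('outcome', '?')}</td>"
            f"<td>${s.get('cost_usd', 0):.2f}</td></tr>"
        )
    return (
        "<html><body><h1>Today's sessions</h1><table>"
        "<tr><th>Session</th><th>Outcome</th><th>Cost</th></tr>"
        + "".join(rows)
        + "</table></body></html>"
    )

class Dashboard(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_dashboard(LOG_DIR).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Dashboard).serve_forever()
```

Swap the directory read for a Firestore or BigQuery query as your log volume grows; the page itself can stay this small.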

Next steps

Centralised logging is a prerequisite for effective AgentOps and guardrail monitoring. It also enables smarter agent hierarchies where parent agents can review child agent logs.