Update STT metrics to include token usage and enhance logging for tra…#5029

Open

bml1g12 wants to merge 6 commits into livekit:main from bml1g12:add_gpt_realtime_transcription_metrics

Conversation


@bml1g12 bml1g12 commented Mar 6, 2026

Summary

The OpenAI Realtime API's conversation.item.input_audio_transcription.completed event carries a usage field with ASR duration (whisper-1 / gpt-4o-transcribe), billed separately from the realtime model. LiveKit currently ignores this field, so users cannot track transcription metrics via on_metrics_collected.

Per OpenAI's Realtime costs documentation, input transcription is billed at the ASR model's rate (e.g. $0.006 / 1M tokens for whisper-1), separately from the realtime model's audio tokens. OpenAI support has confirmed to me that when using gpt-realtime, the Whisper ASR model is billed per token, not per minute, so for cost tracking purposes we need at least the audio token counts. Unfortunately, at the time of writing OpenAI only emits the duration (UsageTranscriptTextUsageDuration), even though their blog suggests the event should also carry token counts (UsageTranscriptTextUsageTokens). I have made OpenAI support aware of this contradiction between their blog and the actually emitted events. This PR nevertheless also implements handling for UsageTranscriptTextUsageTokens, so if OpenAI later emits the audio token counts (the ones most relevant for cost estimation), LiveKit users will be able to track their Whisper costs on a per-session basis without further changes. As it stands today, this PR enables LiveKit users to track the duration of Whisper ASR performed.
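To make the shape of that handling concrete, here is a minimal sketch of mapping the two usage variants onto metric fields. The dataclasses below are hypothetical stand-ins for the OpenAI SDK types (their real field names may differ), and extract_transcription_usage is an illustrative helper, not the plugin's actual code:

```python
from dataclasses import dataclass
from typing import Union


# Hypothetical stand-ins for the OpenAI SDK's two usage variants;
# the field names here are assumptions for illustration only.
@dataclass
class UsageTokens:
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0
    input_audio_tokens: int = 0
    type: str = "tokens"


@dataclass
class UsageDuration:
    seconds: float = 0.0
    type: str = "duration"


def extract_transcription_usage(usage: Union[UsageTokens, UsageDuration]) -> dict:
    """Map whichever usage variant the completed event carries onto metric fields."""
    if usage.type == "tokens":
        # Token-based variant: the counts most useful for cost estimation.
        return {
            "input_tokens": usage.input_tokens,
            "output_tokens": usage.output_tokens,
            "total_tokens": usage.total_tokens,
            "input_audio_tokens": usage.input_audio_tokens,
        }
    # Duration-only variant (what OpenAI emits today); token fields stay unset.
    return {"audio_duration": usage.seconds}
```

Branching on the variant type keeps the metrics emission path identical whether OpenAI ships the token counts in future or keeps emitting only the duration.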

The Metadata.model_name field identifies which transcription model produced the metrics (e.g. whisper-1, gpt-4o-transcribe).

Note that I have not emitted these metrics as OTEL traces, since we currently do not emit STT traces in general, and because for LangFuse to track the cost of these I believe I would need platform-specific attributes (e.g. `"langfuse.observation.type": "generation"`), as the OTEL specification has no standard attribute for STT token counting. I would be happy to add this as a further improvement if there is interest from the LiveKit team, but otherwise will just implement it in our own client code.

Changes

  • STTMetrics (metrics/base.py): Add optional input_tokens, output_tokens, total_tokens, and input_audio_tokens fields. All default to None so existing STT plugins are unaffected.
  • OpenAI realtime plugin (realtime_model.py): Extract usage from conversation.item.input_audio_transcription.completed events and emit STTMetrics via the existing metrics_collected event. Handles both the token-based (UsageTranscriptTextUsageTokens) and duration-based (UsageTranscriptTextUsageDuration) usage variants from the OpenAI SDK.
  • log_metrics (metrics/utils.py): Log token fields for STT metrics when present.
  • UsageCollector (metrics/usage_collector.py): Aggregate stt_input_tokens, stt_output_tokens, and stt_input_audio_tokens in UsageSummary.
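The None-safe aggregation described above can be sketched as follows. STTMetrics, UsageSummary, and collect_stt are simplified stand-ins for the real classes in metrics/base.py and metrics/usage_collector.py, not copies of them:

```python
from dataclasses import dataclass
from typing import Optional


# Simplified stand-in for the extended STTMetrics: the new token fields
# all default to None, so existing STT plugins are unaffected.
@dataclass
class STTMetrics:
    audio_duration: float
    input_tokens: Optional[int] = None
    output_tokens: Optional[int] = None
    total_tokens: Optional[int] = None
    input_audio_tokens: Optional[int] = None


# Simplified stand-in for the aggregated UsageSummary fields.
@dataclass
class UsageSummary:
    stt_audio_duration: float = 0.0
    stt_input_tokens: int = 0
    stt_output_tokens: int = 0
    stt_input_audio_tokens: int = 0


def collect_stt(summary: UsageSummary, m: STTMetrics) -> None:
    # `or 0` keeps the aggregation None-safe for plugins that emit no tokens.
    summary.stt_audio_duration += m.audio_duration
    summary.stt_input_tokens += m.input_tokens or 0
    summary.stt_output_tokens += m.output_tokens or 0
    summary.stt_input_audio_tokens += m.input_audio_tokens or 0
```

Metrics from plugins that never set the token fields simply contribute zero to the token totals while still counting toward the audio duration.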

Design decisions

  • STTMetrics rather than RealtimeModelMetrics: The transcription runs on a separate model (whisper/gpt-4o-transcribe) with its own billing rate, so it belongs in STTMetrics with the model identified via Metadata.
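For context on how a user might consume these metrics for cost tracking, the sketch below prefers token counts when present and falls back to a duration-based estimate otherwise. estimate_whisper_cost and its rate parameters are hypothetical: rates are caller-supplied because actual pricing depends on OpenAI's current rate card, not on this PR:

```python
from typing import Optional


def estimate_whisper_cost(
    audio_duration_s: float,
    input_audio_tokens: Optional[int],
    usd_per_million_tokens: float,
    usd_per_minute: float,
) -> float:
    """Estimate per-session transcription cost from STTMetrics fields.

    Prefers token-based billing when the API emitted token counts;
    falls back to a duration-based estimate otherwise.
    """
    if input_audio_tokens is not None:
        # Token-based path: available once OpenAI emits audio token counts.
        return input_audio_tokens / 1_000_000 * usd_per_million_tokens
    # Duration-based fallback: all that today's events allow.
    return audio_duration_s / 60.0 * usd_per_minute
```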

@bml1g12 bml1g12 marked this pull request as ready for review March 13, 2026 16:37

@devin-ai-integration devin-ai-integration bot left a comment

✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 4 additional findings.
