The conversation-update event is triggered whenever a message occurs during the call - both user messages (from speech-to-text) and assistant messages (generated responses).
Each event includes the complete conversation history from the start of the call, not just the latest message. This ensures you never miss context even if some events are lost.

When It’s Triggered

This event is sent for:
  • User Messages: When user speech is transcribed by STT
  • Assistant Messages: When the assistant generates and speaks a response
  • Welcome Messages: When the assistant plays the initial greeting

Event Structure

{
    "message": {
        "timestamp": 1772702480281,
        "type": "conversation-update",
        "call": { /* Call Object */ },
        "assistant": { /* Assistant Object */ },
        "messages": [ /* Array of Message Objects */ ],
        "phone": { /* Phone Object */ },
        "customer": { /* Customer Object */ },
        "analysis": { /* Empty during conversation */ }
    }
}
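
The fields above can be pulled out with a small parser before any further processing. This is a minimal sketch assuming the field names shown in the payload above; the function name is illustrative:

```python
def parse_conversation_update(payload: dict) -> dict:
    """Extract the key fields from a conversation-update event payload."""
    message = payload["message"]
    if message.get("type") != "conversation-update":
        raise ValueError(f"unexpected event type: {message.get('type')}")
    return {
        "call_id": message["call"]["id"],
        "timestamp": message["timestamp"],        # Unix timestamp in ms
        "messages": message.get("messages", []),  # full history so far
        "status": message["call"].get("status"),  # usually "ongoing"
    }
```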

Key Fields

| Field | Type | Description |
| --- | --- | --- |
| message.type | string | Always “conversation-update” for this event |
| message.timestamp | number | Unix timestamp when the message was processed |
| messages | array | Complete conversation history up to this point |
| call.status | string | Current call status (usually “ongoing”) |

Messages Object

The messages array contains the conversation history between the user and assistant.
Each conversation-update event includes all messages from the start of the call up to that point, not just the latest message. This ensures that if any individual event is missed, you still have the complete conversation context.

Message Structure

| Field | Type | Description |
| --- | --- | --- |
| messageId | string | Unique message identifier |
| role | string | Message sender: “user” or “assistant” |
| text | string | Transcribed or generated message content |
| timestamp | number | Unix timestamp in milliseconds |
| metrics | object | Performance metrics for this message |
| skippedAssistantMessages | array | Assistant messages that were skipped due to interruptions |

Message Metrics

Each message includes detailed performance metrics:

Timeline Metrics

  • totalElapsedTimeMs: Total time since call start
  • offsetFromPreviousTurnMs: Time gap from previous message

Audio Metrics

  • totalAudioReceivedMs: Total audio duration received
  • audioDelayPerTurnMs: Audio processing delay for this turn
  • delayedPacketsPerTurnCount: Count of delayed network packets

Speech-to-Text (STT) Metrics

  • timestampMs: When transcription was completed
  • startOffsetMs/endOffsetMs: Audio segment boundaries
  • confidence: Transcription confidence score (0.0 to 1.0)
  • vadMs: Voice activity detection duration

Text-to-Speech (TTS) Metrics

  • audioDurationMs: Duration of generated audio
  • generationTimeMs: Time to generate audio
  • queueLatencyMs: Processing queue delay
  • isCachePlaying: Whether audio was served from cache

LLM Metrics

  • responseLatencyMs: Time for LLM to generate response
  • queueLatencyMs: Processing queue delay
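
Taken together, the nested metric groups can be flattened into a single per-turn latency summary for dashboards or logging. A sketch assuming the metric keys listed above (the function name is illustrative; metric groups absent from a message are simply skipped):

```python
def summarize_turn_latency(message: dict) -> dict:
    """Flatten one message's nested metrics into top-level latency numbers."""
    metrics = message.get("metrics", {})
    summary = {"role": message.get("role")}
    llm = metrics.get("llm", {})
    tts = metrics.get("tts", {})
    stt = metrics.get("stt", {})
    if "responseLatencyMs" in llm:
        summary["llm_response_ms"] = llm["responseLatencyMs"]
    if "generationTimeMs" in tts:
        summary["tts_generation_ms"] = tts["generationTimeMs"]
        summary["tts_cached"] = tts.get("isCachePlaying", False)
    if "confidence" in stt:
        summary["stt_confidence"] = stt["confidence"]
    return summary
```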

Message Types

  • Welcome Messages: Have messageId starting with “welcome-”
  • User Messages: Have messageId starting with “user-”
  • Assistant Messages: Have complex messageId with user reference and timestamp
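
The prefixes above can be used to classify a message from its messageId alone. Note that assistant messageIds also embed a “user-” reference, so a plain prefix check is ambiguous; the sketch below (illustrative, based on the id shapes in the example payload) disambiguates by counting hyphens, though in practice the role field is the authoritative source:

```python
def classify_message(message_id: str) -> str:
    """Classify a message by the shape of its messageId."""
    if message_id.startswith("welcome-"):
        return "welcome"
    # Plain "user-N" ids belong to user turns; composite ids like
    # "user-3-WcIrvV6mE7-1772702495604" belong to assistant turns.
    if message_id.startswith("user-") and message_id.count("-") == 1:
        return "user"
    return "assistant"
```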

Call Object

The call object contains comprehensive information about the call session.
| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique call identifier (e.g., “WC-82015760-c3bd-427d-a23b-ba9b07e4ab85”) |
| teamId | string | Organization/team identifier |
| assistantId | string | ID of the assistant handling this call |
| callType | string | Type of call: “web”, “phone”, etc. |
| direction | string | Call direction: “inbound” or “outbound” |
| startAt | string | ISO timestamp when the call started |
| endAt | string | ISO timestamp when the call ended (only in end-of-call-report) |
| userNumber | string | User’s phone number or identifier |
| assistantNumber | string | Assistant’s number or identifier |
| status | string | Current call status: “queued”, “ongoing”, “finished”, “forwarded” |
| phoneCallStatus | string | Detailed phone status: “in-progress”, “completed”, etc. |
| phoneCallStatusReason | string | Human-readable status reason |
| callEndTriggerBy | string | What triggered the call end: “bot”, “user”, “system” |
| assistantCallDuration | number | Duration of the call in milliseconds |
| analysis | object | Call analysis results |
| recording | object | Recording information with S3 bucket and path |
| assistantOverrides | object | Dynamic variables and validation overrides |
| metadata | object | Custom metadata associated with the call |
| cost | object | Cost breakdown (only in end-of-call-report) |
| metrics | object | Detailed call metrics (only in end-of-call-report) |

Assistant Object

The assistant object contains the configuration and settings of the assistant handling the call.
In some webhook events, the assistant object may be truncated for brevity. The full assistant configuration is typically included in status-update events.
| Field | Type | Description |
| --- | --- | --- |
| _id | string | Unique assistant identifier |
| name | string | Display name of the assistant |
| welcomeMessage | string | Message played when the call starts |
| welcomeMessageMode | string | How the welcome message is triggered: “automatic”, “manual” |
| welcomeMessageInterruptionsEnabled | boolean | Whether users can interrupt the welcome message |
| endCallMessage | string | Message played when the call ends |
| endCallPhrases | array | Phrases that trigger call termination |
| bargeInEnabled | boolean | Whether users can interrupt assistant responses |
| assistantProvider | string | LLM provider: “openai”, “anthropic”, “gemini”, etc. |
| assistantModel | string | Specific model being used |
| assistantSystemPrompt | string | System prompt defining assistant behavior |
| assistantTemperature | number | LLM creativity setting (0.0 to 1.0) |
| assistantMaxTokens | number | Maximum tokens per response |
| assistantAnalysis | object | Configuration for call analysis features |
| assistantServer | object | Webhook configuration for this assistant |
| config | object | Speech-to-text and text-to-speech configurations |

Key Subobjects

  • assistantAnalysis: Contains settings for summary generation, success evaluation, and structured data extraction
  • assistantServer: The webhook configuration that triggered this event
  • config.speech: STT/TTS vendor settings, voice configuration, and language options

Message Flow Pattern

Events are triggered in this typical sequence:
  1. Welcome Message: conversation-update with the assistant greeting
  2. User Speaks: conversation-update with the transcribed user message
  3. Assistant Responds: conversation-update with the assistant reply
  4. User Replies: conversation-update with the new user message
  5. The cycle continues until the call ends
Messages may be interrupted or skipped if users barge in while the assistant is speaking. Check the skippedAssistantMessages field for interrupted responses.
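
Since each event carries the full history, the newest message (the tail of the array) is enough to drive UI turn state. A sketch assuming the message fields documented above (function name illustrative); the interruption flag reflects a non-empty skippedAssistantMessages array on the latest turn:

```python
def latest_turn(messages: list) -> dict:
    """Return the newest message plus an interruption flag for UI state."""
    if not messages:
        return {}
    last = messages[-1]
    return {
        "role": last.get("role"),
        "text": last.get("text"),
        # True when this turn cut off one or more assistant replies
        "interrupted": bool(last.get("skippedAssistantMessages")),
    }
```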

Example Payload

{
  "message": {
    "timestamp": 1772702480281,
    "type": "conversation-update",
    "call": {
      "id": "WC-82015760-c3bd-427d-a23b-ba9b07e4ab85",
      "status": "ongoing"
    },
    "messages": [
      {
        "messageId": "welcome-1",
        "role": "assistant",
        "text": "Welcome to Apollo clinic!!",
        "timestamp": 1772702480279,
        "metrics": {
          "tts": {
            "audioDurationMs": 1442,
            "generationTimeMs": 7,
            "isCachePlaying": true
          }
        }
      }
    ]
  }
}

Common Use Cases

Real-Time Conversation Display

def handle_conversation_update(event_data):
    call_id = event_data["message"]["call"]["id"]
    messages = event_data["message"]["messages"]

    # Update conversation display
    for message in messages:
        display_message(
            call_id=call_id,
            role=message["role"],
            text=message["text"],
            timestamp=message["timestamp"]
        )

    # Scroll to latest message
    scroll_to_latest(call_id)

Live Transcription Storage

const storeConversation = (eventData) => {
  const { call, messages } = eventData.message;

  // Store each message with full context
  messages.forEach(message => {
    db.conversations.upsert({
      call_id: call.id,
      message_id: message.messageId,
      role: message.role,
      text: message.text,
      timestamp: message.timestamp,
      confidence: message.metrics?.stt?.confidence,
      created_at: new Date()
    });
  });

  // Update call status
  db.calls.update(call.id, {
    last_message_at: new Date(),
    message_count: messages.length,
    status: call.status
  });
};

Sentiment Analysis Pipeline

def analyze_conversation_sentiment(event_data):
    messages = event_data["message"]["messages"]
    call_id = event_data["message"]["call"]["id"]

    # Analyze only user messages for sentiment
    user_messages = [msg for msg in messages if msg["role"] == "user"]

    if user_messages:
        latest_user_message = user_messages[-1]

        # Run sentiment analysis
        sentiment = sentiment_analyzer.analyze(latest_user_message["text"])

        # Store sentiment data
        store_sentiment(
            call_id=call_id,
            message_id=latest_user_message["messageId"],
            sentiment=sentiment,
            confidence=sentiment.confidence
        )

        # Trigger alerts for negative sentiment
        if sentiment.label == "negative" and sentiment.confidence > 0.8:
            alert_supervisor(call_id, "Negative sentiment detected")

Message Metrics Tracking

def track_message_performance(event_data):
    messages = event_data["message"]["messages"]

    for message in messages:
        if message["role"] == "assistant" and "metrics" in message:
            metrics = message["metrics"]

            # Track TTS performance
            if "tts" in metrics:
                track_metric("tts_generation_time",
                           metrics["tts"]["generationTimeMs"])
                track_metric("tts_audio_duration",
                           metrics["tts"]["audioDurationMs"])

            # Track LLM performance
            if "llm" in metrics:
                track_metric("llm_response_time",
                           metrics["llm"]["responseLatencyMs"])

        elif message["role"] == "user" and "metrics" in message:
            # Track STT confidence
            stt_metrics = message["metrics"].get("stt", {})
            if "confidence" in stt_metrics:
                track_metric("stt_confidence", stt_metrics["confidence"])

Message Interruptions

When users interrupt assistant responses, the skippedAssistantMessages field contains messages that were cut off:
def handle_interruptions(event_data):
    messages = event_data["message"]["messages"]

    for message in messages:
        if message["role"] == "user" and "skippedAssistantMessages" in message:
            skipped = message["skippedAssistantMessages"]

            if skipped:
                # Log interrupted responses for analysis
                for skipped_msg in skipped:
                    log_interruption(
                        call_id=event_data["message"]["call"]["id"],
                        interrupted_text=skipped_msg.get("text", ""),
                        interruption_time=message["timestamp"]
                    )

Performance Considerations

Since conversation-update events can contain large message arrays:
  1. Incremental Processing: Only process new messages since your last update
  2. Message Deduplication: Use messageId to avoid processing duplicates
  3. Efficient Storage: Consider storing messages individually rather than entire payloads
  4. Pagination: For long conversations, implement message pagination in your UI
def process_incremental_messages(event_data, last_seen_message_id=None):
    messages = event_data["message"]["messages"]

    if last_seen_message_id:
        # Find index of last seen message
        last_index = -1
        for i, msg in enumerate(messages):
            if msg["messageId"] == last_seen_message_id:
                last_index = i
                break

        # Process only new messages
        new_messages = messages[last_index + 1:] if last_index >= 0 else messages
    else:
        new_messages = messages

    # Process new messages
    for message in new_messages:
        process_single_message(message)

    # Return latest message ID for next call
    return messages[-1]["messageId"] if messages else None

Conversation Update Event Example

This is a complete example of a conversation-update webhook event payload with multiple messages.
{
    "message": {
        "timestamp": 1772702480281,
        "type": "conversation-update",
        "call": {
            "id": "WC-82015760-c3bd-427d-a23b-ba9b07e4ab85",
            "teamId": "67c0231ae6880fe48ef929ee",
            "assistantId": "697769ef5e6d94d5ad83e01e",
            "callType": "web",
            "direction": "inbound",
            "startAt": "2026-03-05T09:21:20.063Z",
            "userNumber": "web-Ramesh Naik",
            "assistantNumber": "697769ef5e6d94d5ad83e01e",
            "status": "ongoing",
            "phoneCallStatus": "in-progress",
            "phoneCallStatusReason": "Call is in progress",
            "callEndTriggerBy": "",
            "assistantCallDuration": 0,
            "analysis": {
                "summary": "",
                "successEvaluation": ""
            },
            "recording": {
                "s3Bucket": "",
                "path": ""
            },
            "assistantOverrides": {
                "dynamicVariables": {
                    "serial_number": ""
                },
                "variablesValidations": {
                    "serial_number": "none"
                }
            },
            "metadata": {}
        },
        "assistant": {
            "_id": "697769ef5e6d94d5ad83e01e",
            "name": "Mary Dental - main",
            "welcomeMessage": "Welcome to Apollo clinic!!",
            "assistantProvider": "gemini",
            "assistantModel": "gemini-3-flash-preview"
        },
        "messages": [
            {
                "messageId": "welcome-1",
                "role": "assistant",
                "text": "Welcome to Apollo clinic!!",
                "timestamp": 1772702480279,
                "metrics": {
                    "timeline": {
                        "totalElapsedTimeMs": 2
                    },
                    "audio": {
                        "totalAudioReceivedMs": 16
                    },
                    "llm": {},
                    "tts": {
                        "audioDurationMs": 1442,
                        "queueLatencyMs": 0,
                        "generationTimeMs": 7,
                        "isCachePlaying": true
                    },
                    "interactlyPlayer": {
                        "waitTimeMs": 7
                    }
                }
            },
            {
                "messageId": "user-1",
                "role": "user",
                "text": "Hello.",
                "timestamp": 1772702485138,
                "metrics": {
                    "timeline": {
                        "offsetFromPreviousTurnMs": 4082,
                        "totalElapsedTimeMs": 3861
                    },
                    "audio": {
                        "delayedPacketsPerTurnCount": 12,
                        "delayedPacketsCumulativeCount": -2,
                        "lostPacketsCumulativeCount": 0,
                        "audioDelayPerTurnMs": 194,
                        "audioDelayCumulativeMs": -27,
                        "offsetFromPreviousTurnMs": 3888,
                        "totalAudioReceivedMs": 3888
                    },
                    "stt": {
                        "timestampMs": 1772702484138,
                        "startOffsetMs": 0,
                        "endOffsetMs": 4200,
                        "durationMs": 4200,
                        "vadMs": 520,
                        "relativeLatencyMs": -312,
                        "confidence": 0.7364502
                    },
                    "interactlyEOU": {
                        "turnDetectionTimeMs": 1000,
                        "queueLatencyMs": 0
                    }
                },
                "skippedAssistantMessages": []
            },
            {
                "messageId": "user-2",
                "role": "user",
                "text": "Can you book an appointment?",
                "timestamp": 1772702488063,
                "metrics": {
                    "timeline": {
                        "offsetFromPreviousTurnMs": 2925,
                        "totalElapsedTimeMs": 6786
                    },
                    "audio": {
                        "delayedPacketsPerTurnCount": 1,
                        "delayedPacketsCumulativeCount": -1,
                        "lostPacketsCumulativeCount": 0,
                        "audioDelayPerTurnMs": 13,
                        "audioDelayCumulativeMs": -14,
                        "offsetFromPreviousTurnMs": 2912,
                        "totalAudioReceivedMs": 6800
                    },
                    "stt": {
                        "timestampMs": 1772702487063,
                        "startOffsetMs": 6490,
                        "endOffsetMs": 7110,
                        "durationMs": 620,
                        "vadMs": -100,
                        "relativeLatencyMs": -310,
                        "confidence": 0.7314453
                    },
                    "interactlyEOU": {
                        "turnDetectionTimeMs": 1000,
                        "queueLatencyMs": 0
                    }
                },
                "skippedAssistantMessages": []
            },
            {
                "messageId": "user-3",
                "role": "user",
                "text": "For next week.",
                "timestamp": 1772702493047,
                "metrics": {
                    "timeline": {
                        "offsetFromPreviousTurnMs": 4982,
                        "totalElapsedTimeMs": 11768
                    },
                    "audio": {
                        "delayedPacketsPerTurnCount": -1,
                        "delayedPacketsCumulativeCount": -1,
                        "lostPacketsCumulativeCount": 0,
                        "audioDelayPerTurnMs": -10,
                        "audioDelayCumulativeMs": -24,
                        "offsetFromPreviousTurnMs": 4992,
                        "totalAudioReceivedMs": 11792
                    },
                    "stt": {
                        "timestampMs": 1772702492045,
                        "startOffsetMs": 7110,
                        "endOffsetMs": 11990,
                        "durationMs": 4880,
                        "vadMs": 4480,
                        "relativeLatencyMs": -198,
                        "confidence": 0.2600708
                    },
                    "interactlyEOU": {
                        "turnDetectionTimeMs": 1000,
                        "queueLatencyMs": 1
                    }
                },
                "skippedAssistantMessages": []
            },
            {
                "messageId": "user-3-WcIrvV6mE7-1772702495604",
                "role": "assistant",
                "text": "Hi there! I'd be happy to help you book an appointment with Dr. Sam for next week. Could you please tell me your full name first?",
                "timestamp": 1772702495958,
                "metrics": {
                    "timeline": {
                        "totalElapsedTimeMs": 15681
                    },
                    "audio": {
                        "totalAudioReceivedMs": 15696
                    },
                    "llm": {
                        "timestampMs": 1772702495603,
                        "queueLatencyMs": 3,
                        "responseLatencyMs": 2557
                    },
                    "tts": {
                        "audioDurationMs": 3207,
                        "queueLatencyMs": 2,
                        "generationTimeMs": 353,
                        "isCachePlaying": false
                    },
                    "interactlyPlayer": {
                        "waitTimeMs": 352
                    }
                }
            }
        ],
        "phone": {
            "provider": {
                "name": ""
            }
        },
        "customer": {
            "number": "web-Ramesh Naik"
        },
        "analysis": {}
    }
}