What’s new in v1.3.0
🚀 Updates & Improvements
🔧 External API Tools - Full HTTP Method Support
Enhanced API Integration Capabilities
External API Tools now support all standard HTTP methods, providing complete flexibility for integrating with third-party services and APIs.
What’s Changed
Before: Only the POST method was supported by default for External API Tools.
Now: Full support for all HTTP methods:
- GET - Retrieve data from APIs
- POST - Create new resources
- PUT - Update existing resources completely
- PATCH - Partial updates to existing resources
- DELETE - Remove resources
How to Use
- Navigate to the Tools Section in your dashboard
- Create or edit an External API Tool
- Select your desired HTTP method from the dropdown
- Configure the tool parameters as needed
- Attach the tool to your assistant
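If you manage tools programmatically rather than through the dashboard, the sketch below shows roughly what a tool definition with an explicit HTTP method could look like. Only the list of supported methods comes from this release note; the type shape, field names, and example values are illustrative assumptions, not the documented API.

```typescript
// Hypothetical sketch of an External API Tool definition with an explicit HTTP method.
// Only the set of supported methods (GET, POST, PUT, PATCH, DELETE) comes from this
// release note; the field names and structure below are illustrative assumptions.
type HttpMethod = "GET" | "POST" | "PUT" | "PATCH" | "DELETE";

interface ExternalApiTool {
  name: string;
  description: string;
  method: HttpMethod;          // Previously only POST was supported
  url: string;
  headers?: Record<string, string>;
}

const updateOrderTool: ExternalApiTool = {
  name: "update_order_status",
  description: "Partially update an order in the order-management system",
  method: "PATCH",             // Partial update of an existing resource
  url: "https://api.example.com/orders/{orderId}",
  headers: { Authorization: "Bearer <token>" },
};
```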
📚 Knowledge Base Optimization
Improved Query Performance and Indexing
- Enhanced Query Processing - Optimized algorithms for faster knowledge base searches and more accurate results
- Advanced Indexing - Improved indexing mechanisms that reduce response times and enhance search relevance
- Better Scalability - Enhanced performance for larger knowledge bases with more documents and data
🎤 ElevenLabs Configuration Enhancement
New Streaming Latency Control
We’ve added granular control over ElevenLabs streaming optimization, letting you tune the balance between audio quality and latency.
New Configuration Field
Configuration Details
- Field Name: optimizeStreamingLatency
- Value Range: 0 to 4
- New Default: 0 (previously managed internally with value 3)
- Configuration Methods:
  - Dashboard UI toggle
  - API configuration support
Latency Optimization Levels
- 0 - Maximum quality, higher latency (recommended for most use cases)
- 1-2 - Balanced quality and latency
- 3 - Previous default - good balance
- 4 - Maximum speed, optimized for real-time interactions
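If you configure assistants through the API, the snippet below is a minimal, hypothetical sketch of setting this field. The endpoint path, payload nesting, and surrounding fields are assumptions for illustration; only the optimizeStreamingLatency name, its 0 to 4 range, and the new default of 0 come from this release note.

```typescript
// Hypothetical sketch: update an assistant's ElevenLabs voice settings via the API.
// The endpoint path, auth header, and payload nesting are assumptions for illustration;
// only optimizeStreamingLatency (range 0-4, new default 0) is taken from this release.
async function setStreamingLatency(assistantId: string, level: 0 | 1 | 2 | 3 | 4) {
  const response = await fetch(`https://api.example.com/assistants/${assistantId}`, {
    method: "PATCH",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_KEY}`,
    },
    body: JSON.stringify({
      voiceConfig: {
        // 0 = maximum quality (new default), 4 = maximum speed.
        // Set to 3 to keep the pre-1.3.0 behavior described in the migration note below.
        optimizeStreamingLatency: level,
      },
    }),
  });
  if (!response.ok) throw new Error(`Update failed: ${response.status}`);
  return response.json();
}
```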
Migration Note: Existing configurations will automatically use the new default value of 0. If you were relying on the previous behavior, you may want to set this to 3 to maintain the same performance characteristics.
📊 Per-Turn Latency Metrics
Enhanced Performance Monitoring
We’ve introduced detailed per-turn latency metrics for both user and assistant interactions, giving you clear visibility into where time is spent in each conversation turn.
New Metrics Available
User Turn Metrics:
- Audio Packets latency
- Voice Activity Detection (VAD) latency
- Speech-to-Text (STT) processing time
- Total user turn processing duration
Assistant Turn Metrics:
- Large Language Model (LLM) response generation time
- Text-to-Speech (TTS) processing duration
- Total assistant response latency
Where to Find These Metrics
- Debug Mode Dashboard
  - Enable Debug mode from your profile or by clicking the Interactly.ai logo
  - Click on individual user or assistant messages in Call Logs
  - View detailed timing breakdowns for each interaction
- Webhook Integration
  - Available in webhook subscription message events
  - Real-time access to latency data for your applications
  - Perfect for monitoring and analytics systems
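As a rough illustration, the sketch below shows how a webhook receiver might log these per-turn timings from a message event. The payload shape (event, role, latency.*) and the Express setup are assumptions for illustration, not the documented webhook schema.

```typescript
// Hypothetical sketch of a webhook receiver that logs per-turn latency metrics.
// The payload shape (event, role, latency.*) is an assumption for illustration;
// consult the webhook message event schema for the actual field names.
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhooks/interactly", (req, res) => {
  const { event, role, latency } = req.body ?? {};
  if (event === "message" && latency) {
    if (role === "user") {
      console.log("User turn:", {
        vadMs: latency.vad,     // Voice Activity Detection
        sttMs: latency.stt,     // Speech-to-Text processing
        totalMs: latency.total, // Total user turn duration
      });
    } else if (role === "assistant") {
      console.log("Assistant turn:", {
        llmMs: latency.llm,     // LLM response generation
        ttsMs: latency.tts,     // Text-to-Speech processing
        totalMs: latency.total, // Total assistant response latency
      });
    }
  }
  res.sendStatus(200); // Acknowledge quickly so the sender does not retry.
});

app.listen(3000);
```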
Use Cases
- Performance Optimization - Identify bottlenecks in conversation flow
- Quality Monitoring - Track response times across different configurations
- Analytics Integration - Export timing data to your monitoring systems
- Debugging - Diagnose latency issues in real-time conversations
🛠️ Bug Fixes and Improvements
- Resolved minor issues affecting call stability and user experience
- Various UI/UX refinements for better user interaction
What’s new in v1.2.0
🚀 Updates & Improvements
🔧 Stability Improvements
- Enhanced system reliability with comprehensive bug fixes and performance optimizations
- Resolved known issues that were impacting call quality and user experience
- Improved error handling and recovery mechanisms across the platform
🎨 UI Enhancements
- Refined user interface elements for better visual consistency and usability
- Updated component styling and layouts for improved user experience
- Enhanced accessibility features and responsive design improvements
🎙️ Noise Suppression Configuration Updates
⚠️ Breaking Changes - Assistant Configuration
We’ve improved the noise suppression configuration with a more comprehensive and flexible approach. The previous standalone noise suppression fields have been removed and replaced by a new noiseSuppressorConfig object.
Configuration Options Explained
- enabled - Master toggle for noise suppression functionality
- suppressBackgroundNoise - Controls Multi-Variate Noise Suppression (MVNS) to filter out ambient sounds
- suppressBackgroundVoice - Controls Background Voice Suppression (BVS) to minimize interference from other speakers
- digitalGainControl - Controls Digital Gain Control (DGC) for automatic volume adjustment
- knobValue - Fine-tune suppression intensity from 0 (minimal) to 100 (maximum)
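For illustration, here is a minimal sketch of what the new object might look like in an assistant configuration. The five field names and the knobValue 0 to 100 range come from this release note; the value types, the example values chosen, and any surrounding assistant payload are assumptions.

```typescript
// Hypothetical sketch of the new noiseSuppressorConfig object.
// Field names are from this release note; the values shown and the
// surrounding assistant payload are illustrative assumptions only.
interface NoiseSuppressorConfig {
  enabled: boolean;                 // Master toggle for noise suppression
  suppressBackgroundNoise: boolean; // MVNS: filter ambient sounds
  suppressBackgroundVoice: boolean; // BVS: minimize other speakers
  digitalGainControl: boolean;      // DGC: automatic volume adjustment
  knobValue: number;                // Suppression intensity, 0 (minimal) to 100 (maximum)
}

const noiseSuppressorConfig: NoiseSuppressorConfig = {
  enabled: true,
  suppressBackgroundNoise: true,
  suppressBackgroundVoice: false,
  digitalGainControl: true,
  knobValue: 60,
};
```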
Migration Guide
Migration Required: Please update your assistant configurations to use the new noiseSuppressorConfig object. The deprecated fields will continue to work temporarily but will be removed in a future release.
🛠️ General
- Performance improvements and system optimizations
- Enhanced logging and monitoring capabilities
- Security updates and compliance improvements
What’s new in v1.1.0
🚀 Updates & Improvements
📞 Caller ID Fix for Outbound Forwarded Calls
When an assistant-initiated outbound call is forwarded to another recipient, the user’s caller ID (the user’s phone number) is now preserved. The receiving party will no longer see the forwarding assistant’s caller ID.
🎨 UI Enhancements
- Call Logs: Error Commands/Utterances are now visually highlighted in red for quick identification.
- Customer Logs: Updated layout and improved readability for streamlined troubleshooting.
- Assistant STT Configuration: UI support added for Deepgram Flux model configuration.
🗣️ Assistant Behavior Enhancements
- Added support for the <nonInterruptible> tag to ensure uninterrupted prompt/audio playback by the assistant.
- Webhook listeners now emit LLM error events, allowing better error tracking and observability.
📚 Knowledge Base
- Pagination and search capabilities added within the assistant’s Knowledge Base for faster navigation and scaling with larger datasets.
🔄 Integration & Automation
- Added support for running cadence flows using the Microsoft integration.
📘 API Docs Optimizations
- Improved API documentation layout and navigation for easier reference.
- Certain API URLs have been updated to new paths. If you have bookmarked API endpoints, please review and update them accordingly.
🛠️ General
- Multiple bug fixes and performance improvements across the platform.
What’s new in v1.0.0
1. Enable Debug mode (see Enable Debug mode section below)
Quickly turn on verbose developer logs from your profile so you can inspect detailed runtime information when diagnosing issues. When Debug mode is enabled, the dashboard surfaces expanded logs and error traces useful for developers during troubleshooting. See the Enable Debug mode section below for more details.
2. TTS — new audio cache scopes (assistant & team) + clip deletion
We added two new cache scope levels for TTS audio clips — assistant and team — so generated audio can be cached at the most appropriate scope for reuse and cost savings. The dashboard now also exposes the ability to delete existing audio clips so you can manage storage and refresh voices or content when needed.
3. Campaign Webhooks
You can now register webhooks to receive real-time notifications for important campaign lifecycle events (for example: campaign completion) and per-call status updates (for example: call completed, failed, or dropped). Webhooks may be configured when creating or updating a campaign, enabling easy integration with downstream systems and automation pipelines.
4. Twilio SMS — Inbound & Outbound
Dashboard toggle to enable Twilio SMS inbound and outbound functionality. This makes two-way SMS possible so your flows can receive replies and send messages via Twilio directly from the dashboard — useful for bi-directional support and conversational workflows.
5. Vonage Number management
Buy or import Vonage phone numbers directly from the dashboard. Note: you will need to provide your Vonage credentials when adding or importing numbers.
6. Fixed: missed tool calls and trailing messages
Resolved a bug where tool calls and messages at the end of conversations could be missed or dropped. This fix improves reliability for trailing-message flows and integrations that depend on final tool outputs.
7. Stability & performance improvements
General reliability and performance upgrades across the platform — faster page loads, reduced error rates, and smoother dashboard interactions.
Enable Debug mode
Debug mode helps developers view detailed diagnostic logs and metadata for conversations, making it easier to analyze performance, latency, and behavior of both Assistant and User utterances.
How to enable Debug mode
You can enable Debug mode in two ways:
- From the logo — Click the Interactly.ai logo at the top-left corner.
- From your profile — Click the Profile icon at the top-right and toggle Debug mode ON.
Debug info in Call Logs
Open any conversation from Call Logs after enabling Debug mode. You’ll notice new metrics displayed at each Assistant and User utterance.
Assistant utterance metrics
- AI: 1093 ms — Time taken by the LLM to generate the response.
- TTS: 356 ms — Time taken by the TTS engine to generate the audio clip.
- AD — Audio Duration of that particular clip.
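As a rough illustration (assuming the stages run sequentially and ignoring network and buffering overhead), the time from the end of the user's turn to the start of audio playback in this example is approximately AI + TTS = 1093 ms + 356 ms = 1449 ms.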
User utterance metrics
- AD — Audio Duration of the user’s spoken input.
- VAD: 6 ms (Voice Activity Detection) — Time the STT vendor waited after receiving the final word.
- AVAD: 800 ms (Additional Voice Activity Detection) — Extra waiting time configured at the Assistant level under Advanced → Start Speaking Plan → Smart Endpointing OFF.
Accessing Meta Info — Assistant utterances
To view detailed metadata, click on the Assistant key in the utterance view.
Key fields in meta info
- command — Indicates whether the action is Play or End. End signals call termination.
- messageId — Unique ID for the Assistant’s utterance.
- userMessageId — The ID of the corresponding user message that triggered this response.
AI Meta fields
- finish_reason — Can be stop, tool_calls, or length.
  - stop: Response finished naturally.
  - length: Token limit reached.
  - tool_calls: Model triggered a tool/function.
- queue_latency — Time the request waited before model processing started.
- response_latency — Time the model took to generate the response.
- trailing_messages — Last 6 utterances passed to the LLM for context.
Other useful fields
- audioDuration — Duration of the generated audio clip.
- isBargein — Whether the utterance was interrupted.
- isCachePlaying — true/false; indicates if the clip was played from cache or generated fresh.
- model / vendor — The TTS model and vendor used.
Accessing Meta Info — User utterances
Click on the User key in the utterance to view metadata for the user’s input.
Important fields
- messageId — ID of the user message.
- previousBotMessageId — ID of the last Assistant response before this message.
- skippedBotMessages — List of interrupted Assistant messages skipped due to this utterance.
- confidence — Confidence score returned by the STT engine for this transcript.
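To make the field lists above easier to scan in code, here is a hedged sketch of how the Assistant and User meta objects might be typed. The field names come from this page; the value types, optionality, and flat structure are assumptions for illustration.

```typescript
// Hypothetical typing of the meta info described above. Field names are taken
// from this page; the value types and structure are illustrative assumptions.
interface AssistantUtteranceMeta {
  command: "Play" | "End";       // End signals call termination
  messageId: string;
  userMessageId: string;         // User message that triggered this response
  finish_reason: "stop" | "tool_calls" | "length";
  queue_latency: number;         // Time waited before model processing started
  response_latency: number;      // Time the model took to generate the response
  trailing_messages: unknown[];  // Last 6 utterances passed to the LLM for context
  audioDuration: number;         // Duration of the generated audio clip
  isBargein: boolean;            // Whether the utterance was interrupted
  isCachePlaying: boolean;       // Played from cache vs. generated fresh
  model: string;                 // TTS model used
  vendor: string;                // TTS vendor used
}

interface UserUtteranceMeta {
  messageId: string;
  previousBotMessageId: string;  // Last Assistant response before this message
  skippedBotMessages: string[];  // Interrupted Assistant messages skipped due to this utterance
  confidence: number;            // STT confidence score for this transcript
}
```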