Hamming AI Blog
Featured Post
Hamming AI Partners with Cisco to Enhance Voice Agent Reliability
Hamming AI announces a strategic partnership with Cisco to help Cisco customers build and maintain reliable AI voice agents through automated testing and production monitoring.
Recent
A Guide to Quality Assurance for AI Voice Agents
Learn how to implement AI voice agent quality assurance with proven strategies for voice agent testing. Discover the 4-layer framework for ensuring AI voice agent QA across infrastructure, execution, user satisfaction, and business outcomes.
Voice Agent Analytics: Why Legacy Analytics Solutions Don’t Work Anymore
Legacy tools fail at understanding AI voice agents. Learn how modern analytics offer real-time insights, improve accuracy, and surface blind spots.
Hamming vs. Retell & Vapi QA Testing: Why platform QA isn’t enough
Scripted voice tests pass in the lab but fail in production. Compare Hamming’s stress-testing and live observability with Vapi and Retell’s happy-path suites, then learn how to harden your agent.
Hamming AI Raises $3.8M Seed Round
We’re excited to announce our $3.8M seed round led by Mischief, with participation from Y Combinator, AI Grant, Pioneer, Coalition Operators, Coughdrop, and notable angels.
DTMF Support for Comprehensive Voice Agent Testing
Automate DTMF testing with Hamming AI’s new feature. Simulate keypad inputs, test menu navigation, and validate voice agent responses to DTMF tones in your automated test scenarios.
Selective Scenario Re-runs for Voice AI Testing
A new feature lets teams re-run selected scenarios from existing datasets, streamlining voice AI agent testing through targeted re-runs.
Enhanced Call Debugging with SIP Status Tracking
New debugging features provide clear insights into call termination and SIP status, helping teams quickly identify and resolve voice AI agent issues.
Hamming AI Partners with Fluents.ai for Enhanced Voice AI Testing
Fluents.ai customers now get access to Hamming AI’s comprehensive voice agent testing suite, while Hamming AI customers receive 15% off Fluents.ai’s enterprise-grade AI voice agent workflows.
Hamming AI Launches Advanced Call Analytics for Voice Agent Testing
New analytics module provides comprehensive performance visualization for AI voice agent testing, including latency metrics, call durations, and LLM-based evaluations.
Hamming AI Integrates with Hume AI for Enhanced Voice Agent Monitoring
Monitor emotional characteristics of production calls in real time with the Hume AI integration. Track pitch, tone, and rhythm to gauge caller sentiment during voice agent interactions.
Thanksgiving Update from Hamming AI
A brief note of thanks from our team as we continue building better voice AI testing solutions.
Multi-Language Support for Voice AI Testing
Test your AI Voice Agents in 11 languages including Dutch, English, French, German, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, and Spanish. Ensure reliable voice interactions across global markets.
AI Grant Demo Day: A Weekend of Learning and Connection
Reflecting on an energizing weekend at AI Grant Demo Day and doubling down on our mission to make voice AI testing more robust and accessible.
Customer Spotlight: How Lilac Labs (YC S24) Ensures Drive-Thru Order Accuracy with Hamming AI
Learn how Lilac Labs automates drive-thru order testing to ensure accuracy and handle complex scenarios like dietary restrictions and allergies.
Hamming AI Partners with Retell AI for Enhanced Voice Agent Testing
Hamming AI and Retell AI announce strategic partnership to provide real-time monitoring and automated testing for AI voice agents, offering immediate alerts for agent mistakes and hallucinations.
Can LLMs find bugs in large codebases?
We bet your LLM can find a bug in a snippet of code. But how about 25 pages of code? We propose a new ‘needle in a haystack’ analysis called ‘Bug in the Code Stack’ that tests how well LLMs can find bugs in large codebases.