Troubleshooting Mattermost Calls#
Available on all plans
Cloud and self-hosted deployments
This guide provides comprehensive troubleshooting steps for Mattermost Calls, particularly focusing on the dedicated RTCD deployment model. Follow these steps to identify and resolve common issues.
Common Issues#
Calls Not Connecting#
Symptoms: Users can start calls but cannot connect, or calls connect but drop quickly.
Possible causes and solutions:
Network connectivity issues:
Verify that UDP port 8443 (or your configured port) is open between clients and RTCD servers
Ensure TCP port 8045 is open between Mattermost and RTCD servers
Check that any load balancers are properly configured for UDP traffic
ICE configuration issues:
Verify that the rtc.ice_host_override setting in the RTCD configuration matches the publicly accessible hostname or IP address of the RTCD server (see the configuration sketch at the end of this list).
If this setting is incorrect, the client browser console may show errors like:
com.mattermost.calls: peer error timed out waiting for rtc connection
Meanwhile, RTCD trace level logs might show internal IP addresses in ICE connection logs:
{"timestamp":"2025-05-14 10:29:08.935 Z","level":"trace","msg":"Ping STUN from udp4 host 172.31.29.117:8443 (resolved: 172.31.29.117:8443) to udp4 host 192.168.64.1:59737 (resolved: 192.168.64.1:59737)","caller":"rtc/logger.go:54","origin":"ice/v4.(*Agent).sendBindingRequest github.com/pion/ice/v4@v4.0.3/agent.go:921"}
API connectivity:
Verify that Mattermost servers can reach the RTCD API endpoint
Check that the API key is correctly configured in both Mattermost and RTCD
Plugin configuration:
Ensure the Calls plugin is enabled and properly configured
Verify the RTCD service URL is correct in the System Console
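For reference, below is a minimal sketch of the RTCD configuration section that controls which address is advertised during ICE negotiation. The hostname is a placeholder, and other keys in your existing configuration file are unaffected:
# Sketch of the [rtc] section of the RTCD configuration file.
# ice_host_override must be the hostname or IP that clients can actually reach.
[rtc]
ice_port_udp = 8443
ice_host_override = "rtcd.example.com"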
Audio Issues#
Symptoms: Users can connect to calls, but audio is one-way, choppy, or not working.
Possible causes and solutions:
Client permissions:
Ensure browser/app has microphone permissions
Check if users are using multiple audio devices that might interfere
Network quality:
High latency or packet loss can cause audio issues
Try testing with TCP fallback enabled (requires RTCD v0.11+ and Calls v0.17+)
Audio device configuration:
Users should verify their audio input/output settings
Try different browsers or the desktop app
Call Quality Issues#
Symptoms: Calls connect but quality is poor, with latency, echo, or distortion.
Possible causes and solutions:
Server resources:
Check CPU usage on RTCD servers - high CPU can cause quality issues
Refer to the Calls Metrics and Monitoring guide for detailed instructions on monitoring and optimizing performance
Monitor network bandwidth usage
Network congestion:
Check for packet loss between clients and RTCD
Consider network QoS settings to prioritize real-time traffic (see the DSCP marking example after this list)
Client-side issues:
Browser or app limitations
Hardware limitations (CPU, memory)
Network congestion at the user’s location
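If you experiment with QoS, one common approach is DSCP marking so that network equipment can prioritize Calls media. The rule below is only a sketch: it marks outbound UDP traffic from the default media port 8443 with the Expedited Forwarding class, and it only helps if your network actually honors DSCP markings.
# Mark outbound RTCD media traffic (UDP source port 8443) as Expedited Forwarding.
# Adjust the port if you changed the default.
sudo iptables -t mangle -A OUTPUT -p udp --sport 8443 -j DSCP --set-dscp-class EF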
Connectivity Troubleshooting#
Basic Connectivity Tests#
HTTP API connectivity test:
Test if the RTCD API is reachable:
curl http://YOUR_RTCD_SERVER:8045/version
# Example response:
# {"buildDate":"2025-04-02 21:33","buildVersion":"v1.1.0","buildHash":"7bc1f7a","goVersion":"go1.23.6","goOS":"linux","goArch":"amd64"}
UDP connectivity test:
On the RTCD server:
nc -l -u -p 8443
On a client machine:
nc -v -u YOUR_RTCD_SERVER 8443
Type a message and press Enter. If you see the message on both sides, UDP connectivity is working.
TCP fallback connectivity test:
Same as the UDP test, but without the -u flag.
On the RTCD server:
nc -l -p 8443
On a client machine:
nc -v YOUR_RTCD_SERVER 8443
Network Packet Analysis#
To capture and analyze network traffic:
Capture UDP traffic on the RTCD server:
sudo tcpdump -n 'udp port 8443' -i any
Capture TCP API traffic:
sudo tcpdump -n 'tcp port 8045' -i any
Analyze traffic patterns:
Verify packets are flowing both ways
Look for ICMP errors that might indicate firewall issues
Check for patterns of packet loss
Use Wireshark for deeper analysis:
For more detailed packet inspection, capture traffic with tcpdump and analyze with Wireshark:
sudo tcpdump -n -w calls_traffic.pcap 'port 8443'
Then analyze the calls_traffic.pcap file with Wireshark.
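If you prefer to stay on the command line, Wireshark's CLI companion tshark can apply display filters to the same capture. For example, to confirm that STUN/ICE connectivity checks are present (assuming tshark is installed):
# Show only STUN packets (ICE connectivity checks) from the capture
tshark -r calls_traffic.pcap -Y "stun"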
Firewall Configuration Checks#
Check iptables rules (Linux):
sudo iptables -L -n
Ensure there are no rules blocking UDP port 8443 or TCP ports 8045/8443.
Check cloud provider security groups:
Verify that security groups or network ACLs allow the following (an AWS CLI example appears at the end of this section):
Inbound UDP on port 8443 from client networks
Inbound TCP on port 8045 from Mattermost server networks
Inbound TCP on port 8443 (if TCP fallback is enabled)
Check intermediate firewalls:
Corporate firewalls might block UDP traffic
Some networks might require TURN servers for traversal
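For AWS deployments, you can confirm the inbound rules above from the command line. The security group ID below is a placeholder for the group attached to your RTCD instances:
# List inbound rules for the RTCD security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[].IpPermissions'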
Log Analysis#
RTCD Logs#
The RTCD service logs important events and errors. Set the log level to “debug” for troubleshooting:
In the configuration file:
[logger]
enable_file = true
file_level = "DEBUG"
Restart the RTCD service after making these changes
Common log patterns to look for:
Connection errors: Look for “failed to connect” or “connection error” messages
ICE negotiation failures: Look for “ICE failed” or “ICE timeout” messages
API authentication issues: Look for “unauthorized” or “invalid API key” messages
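A quick way to surface these patterns is to grep the RTCD log file directly. The log path below is a placeholder; point it at wherever your RTCD file logging is configured to write:
# Surface common failure signatures in the RTCD logs
grep -iE 'failed to connect|connection error|ice failed|ice timeout|unauthorized|invalid api key' /path/to/rtcd.log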
Mattermost Logs#
Check the Mattermost server logs for Calls plugin related issues:
Enable debug logging in System Console > Environment > Logging > File Log Level
Filter for Calls-related logs:
grep -i "calls" /path/to/mattermost.log
Look for common patterns:
Connection errors to RTCD
Plugin initialization issues
WebSocket connection problems
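To narrow the output, you can combine the Calls filter with these patterns; the path is a placeholder as above:
# Calls-related errors only: plugin, RTCD connectivity, and WebSocket problems
grep -i "calls" /path/to/mattermost.log | grep -iE 'error|rtcd|websocket'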
Browser Console Logs#
Instruct users to check their browser console logs:
In Chrome/Edge:
Press F12 to open Developer Tools
Go to the Console tab
Look for errors related to WebRTC, Calls, or media permissions
Specific patterns to look for:
“getUserMedia” errors (microphone permission issues)
“ICE connection” failures
WebSocket connection errors
Performance Issues#
Diagnosing High CPU Usage#
If RTCD servers show high CPU usage:
Check concurrent calls and participants:
Access the Prometheus metrics endpoint to see active sessions
Compare with the benchmark data in the Calls Metrics and Monitoring documentation’s Performance Baselines section
Profile CPU usage (Linux):
top -p $(pgrep rtcd)
Or for detailed per-thread usage:
ps -eLo pid,ppid,tid,pcpu,comm | grep rtcd
Enable pprof profiling (if needed):
Add to your RTCD configuration:
{ "debug": { "pprof": true, "pprofPort": 6060 } }
Then capture a CPU profile:
curl http://localhost:6060/debug/pprof/profile > cpu.profile
Analyze with:
go tool pprof -http=:8080 cpu.profile
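With pprof enabled, you can also capture heap and goroutine profiles from the same endpoint, which helps distinguish CPU load from memory pressure or stuck goroutines; port 6060 matches the configuration above:
# Capture additional profiles from the pprof endpoint
curl http://localhost:6060/debug/pprof/heap > heap.profile
curl http://localhost:6060/debug/pprof/goroutine > goroutine.profile
# Inspect a profile in the browser
go tool pprof -http=:8080 heap.profile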
Diagnosing Network Bottlenecks#
If you suspect network bandwidth issues:
Monitor network utilization:
iftop -n
Check for packet drops:
netstat -su | grep -E 'drop|error'
Verify system network buffers:
sysctl -a | grep net.core.rmem
sysctl -a | grep net.core.wmem
Ensure these match the recommended values:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.optmem_max = 16777216
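If the reported values are lower than recommended, you can apply them at runtime and persist them across reboots. This sketch assumes a Linux system that reads drop-in files from /etc/sysctl.d:
# Apply the recommended buffer sizes immediately
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.core.optmem_max=16777216
# Persist them across reboots
sudo tee /etc/sysctl.d/99-calls-buffers.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.optmem_max = 16777216
EOF
sudo sysctl --system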
Recording and Transcription Issues#
For troubleshooting calls-offloader service issues including recording and transcription problems, see the Calls Offloader Setup and Configuration guide.
Calls-Offloader Docker Debugging#
If you’re running calls-offloader in Docker, use these commands for debugging:
Monitor Live Logs#
To view real-time logs from calls-offloader containers:
# Find and follow logs from all calls-related containers
docker ps --format "{{.ID}} {{.Image}}" | grep "calls" | awk '{print $1}' | xargs -I {} docker logs -f {}
This command finds all running containers with “calls” in the image name and follows their logs.
View Completed Jobs#
To view completed calls-offloader job containers (useful for debugging failed jobs):
# List all exited containers to see completed jobs
docker ps -a --filter "status=exited"
Look for containers with calls-offloader image names that have exited. You can then examine their logs:
# View logs from a specific completed container
docker logs <container_id>
Additional Docker Debugging Tips#
Check container resource usage: run docker stats to see if containers are hitting resource limits.
Inspect container configuration: run docker inspect <container_id> for detailed container settings.
Check container health: run docker inspect <container_id> | grep Health if health checks are configured.
Prometheus Metrics Analysis#
Use Prometheus metrics for real-time and historical performance data:
For detailed setup instructions on configuring Prometheus and Grafana for Calls monitoring, see the Calls Metrics and Monitoring guide.
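For a quick spot check outside of Prometheus, you can pull the raw metrics from the RTCD HTTP endpoint directly. The port and path below assume the default API configuration, and metric names vary by version, so grep broadly:
# Dump session-related metrics straight from the RTCD endpoint
curl -s http://YOUR_RTCD_SERVER:8045/metrics | grep -i session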
When to Contact Support#
Consider contacting Mattermost Support when:
You’ve tried troubleshooting steps without resolution
You’re experiencing persistent connection failures across multiple clients
You notice unexpected or degraded performance despite proper configuration
You need help interpreting diagnostic information
You suspect a bug in the Calls plugin or RTCD service
When contacting support, please include:
RTCD version and configuration (with sensitive information redacted)
Mattermost server version
Calls plugin version
Client environments (browsers, OS versions)
Relevant logs and diagnostic information
Detailed description of the issue and steps to reproduce
Other Calls Documentation#
Calls Overview: Overview of deployment options and architecture
RTCD Setup and Configuration: Comprehensive guide for setting up the dedicated RTCD service
Calls Offloader Setup and Configuration: Setup guide for call recording and transcription
Calls Metrics and Monitoring: Guide to monitoring Calls performance using metrics and observability
Calls Deployment on Kubernetes: Detailed guide for deploying Calls in Kubernetes environments