When ProxySQL Clusters Refuse to Sync: A Real-World Debugging Story
ProxySQL is a high-performance MySQL proxy that sits between your application and the database, managing connection pooling, routing, and failover. One of its most powerful features is clustering. In a clustered setup, multiple ProxySQL nodes automatically stay in sync by sharing configuration changes. When functioning correctly, updates such as adding new users or backend servers propagate seamlessly across all nodes without manual intervention.
ProxySQL maintains this synchronization using specific internal tables, including:
- proxysql_servers: Defines the nodes participating in the cluster.
- mysql_users: Stores user credentials for backend access.
- mysql_servers: Contains backend database server details.
- global_variables: Holds runtime configuration, including cluster settings.
Behind the scenes, ProxySQL nodes communicate with each other using configured cluster credentials. Every node must be able to authenticate with its peers. If this authentication fails, synchronization stops completely, even if the configuration looks perfectly fine on the surface.
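For reference, a minimal two-node cluster definition, run from each node's admin interface, looks roughly like this (the hostnames and passwords below are illustrative placeholders, not values from the incident described later):
-- Register the cluster peers (run on every node)
INSERT INTO proxysql_servers (hostname, port, weight, comment) VALUES ('10.0.0.1', 6032, 0, 'node1');
INSERT INTO proxysql_servers (hostname, port, weight, comment) VALUES ('10.0.0.2', 6032, 0, 'node2');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
-- Credentials this node presents when connecting to its peers
UPDATE global_variables SET variable_value = 'proxycluster' WHERE variable_name = 'admin-cluster_username';
UPDATE global_variables SET variable_value = 'clusterpass' WHERE variable_name = 'admin-cluster_password';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
As the rest of this post shows, the cluster credentials are only half the story: the same user must also be defined, with the same password, in admin-admin_credentials on every node.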
Here is a real-world debugging story of a ProxySQL cluster that simply refused to sync, and how we solved it.
Maintaining a synchronized cluster isn't just about performance; it's vital for security. For instance, if you are using ProxySQL to whitelist users and queries, a sync failure could leave one node unprotected. Learn how to set up these security layers in our guide on Building a MySQL Firewall with ProxySQL.
The Problem
A client reported that their ProxySQL cluster was no longer syncing. Users created on one node were not reflecting on the other. At first glance, both nodes appeared to have identical configurations.
There were no visible mismatches in the cluster variables, and network connectivity between the nodes was verified and working. From a surface-level check, everything seemed normal, but the cluster was clearly broken.
Initial Investigation
As with any cluster issue, the first step was to validate configuration consistency. We checked all cluster-related variables across both nodes, focusing in particular on the authentication variables; the checks we ran were along the lines sketched below.
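The exact statements aren't reproduced in this post, but checks like the following, run from each node's admin interface, pull the relevant values (stats_proxysql_servers_checksums is the table behind the second output below; its epoch column is a raw Unix timestamp in the table, shown in the output already formatted as a date):
show variables like '%admin-cluster%';
SELECT hostname, port, name, version, epoch, checksum, diff_check
FROM stats_proxysql_servers_checksums
ORDER BY name, hostname;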
The values matched exactly across the cluster. At this point, it did not look like a typical misconfiguration, which made the issue more confusing.
To understand how ProxySQL normally uses checksums and epochs to maintain this synchronization, refer to our detailed guide, ProxySQL Series: ProxySQL Native Cluster.
Here is the output from our checks.
Node 1:
+------------------------+--------------+
| Variable_name          | Value        |
+------------------------+--------------+
| admin-cluster_username | proxycluster |
| admin-cluster_password | TnPdmRzKTI   |
+------------------------+--------------+
+--------------+------+-------------------+---------+---------------------+--------------------+------------+
| hostname     | port | name              | version | epoch               | checksum           | diff_check |
+--------------+------+-------------------+---------+---------------------+--------------------+------------+
| 10.11.24.100 | 6032 | admin_variables   | 4       | 2026-03-27 15:28:04 | 0x9D46207FCD1B0412 | 0          |
| 10.11.10.101 | 6032 | admin_variables   | 4       | 2026-03-27 15:27:42 | 0x9D46207FCD1B0412 | 0          |
| 10.11.24.100 | 6032 | mysql_query_rules | 3       | 2024-11-07 08:09:11 | 0xEA00E4288CD91AF7 | 0          |
| 10.11.10.101 | 6032 | mysql_query_rules | 3       | 2024-11-07 08:09:11 | 0xEA00E4288CD91AF7 | 0          |
| 10.11.24.100 | 6032 | mysql_servers     | 22      | 2025-09-23 05:58:28 | 0x1917BE0697E4AAE0 | 0          |
| 10.11.10.101 | 6032 | mysql_servers     | 20      | 2025-09-23 05:58:25 | 0x1917BE0697E4AAE0 | 0          |
| 10.11.24.100 | 6032 | mysql_servers_v2  | 15      | 2025-09-18 10:57:01 | 0xBC06941099D8CB76 | 0          |
| 10.11.10.101 | 6032 | mysql_servers_v2  | 15      | 2025-09-18 10:57:01 | 0xBC06941099D8CB76 | 0          |
| 10.11.24.100 | 6032 | mysql_users       | 30      | 2026-04-23 17:15:58 | 0x48290808B39E3563 | 0          |
| 10.11.10.101 | 6032 | mysql_users       | 34      | 2026-04-23 17:15:58 | 0x48290808B39E3563 | 0          |
| 10.11.24.100 | 6032 | mysql_variables   | 3       | 2025-09-23 05:34:50 | 0x66F47FA957DD78EF | 0          |
| 10.11.10.101 | 6032 | mysql_variables   | 3       | 2025-09-23 05:34:50 | 0x66F47FA957DD78EF | 0          |
| 10.11.24.100 | 6032 | proxysql_servers  | 2       | 2024-11-07 07:58:03 | 0x6986A6B875BC223F | 0          |
| 10.11.10.101 | 6032 | proxysql_servers  | 2       | 2024-11-07 07:54:39 | 0x6986A6B875BC223F | 0          |
+--------------+------+-------------------+---------+---------------------+--------------------+------------+
Node 2:
+------------------------+--------------+
| Variable_name          | Value        |
+------------------------+--------------+
| admin-cluster_username | proxycluster |
| admin-cluster_password | TnPdmRzKTI   |
+------------------------+--------------+
+--------------+------+-------------------+---------+---------------------+--------------------+------------+
| hostname     | port | name              | version | epoch               | checksum           | diff_check |
+--------------+------+-------------------+---------+---------------------+--------------------+------------+
| 10.11.24.100 | 6032 | admin_variables   | 4       | 2026-03-27 15:28:04 | 0x9D46207FCD1B0412 | 0          |
| 10.11.10.101 | 6032 | admin_variables   | 4       | 2026-03-27 15:27:42 | 0x9D46207FCD1B0412 | 0          |
| 10.11.24.100 | 6032 | mysql_query_rules | 3       | 2024-11-07 08:09:11 | 0xEA00E4288CD91AF7 | 0          |
| 10.11.10.101 | 6032 | mysql_query_rules | 3       | 2024-11-07 08:09:11 | 0xEA00E4288CD91AF7 | 0          |
| 10.11.24.100 | 6032 | mysql_servers     | 22      | 2025-09-23 05:58:28 | 0x1917BE0697E4AAE0 | 0          |
| 10.11.10.101 | 6032 | mysql_servers     | 20      | 2025-09-23 05:58:25 | 0x1917BE0697E4AAE0 | 0          |
| 10.11.24.100 | 6032 | mysql_servers_v2  | 15      | 2025-09-18 10:57:01 | 0xBC06941099D8CB76 | 0          |
| 10.11.10.101 | 6032 | mysql_servers_v2  | 15      | 2025-09-18 10:57:01 | 0xBC06941099D8CB76 | 0          |
| 10.11.24.100 | 6032 | mysql_users       | 30      | 2026-04-23 17:15:58 | 0x48290808B39E3563 | 0          |
| 10.11.10.101 | 6032 | mysql_users       | 34      | 2026-04-23 17:15:58 | 0x48290808B39E3563 | 0          |
| 10.11.24.100 | 6032 | mysql_variables   | 3       | 2025-09-23 05:34:50 | 0x66F47FA957DD78EF | 0          |
| 10.11.10.101 | 6032 | mysql_variables   | 3       | 2025-09-23 05:34:50 | 0x66F47FA957DD78EF | 0          |
| 10.11.24.100 | 6032 | proxysql_servers  | 2       | 2024-11-07 07:58:03 | 0x6986A6B875BC223F | 0          |
| 10.11.10.101 | 6032 | proxysql_servers  | 2       | 2024-11-07 07:54:39 | 0x6986A6B875BC223F | 0          |
+--------------+------+-------------------+---------+---------------------+--------------------+------------+
(For more information on monitoring cluster stats, refer to the ProxySQL official documentation on clustering.)
The Breakthrough
The turning point came when reviewing the ProxySQL error logs. They clearly showed an authentication failure whenever one node attempted to connect to the other:
MySQL_Session.cpp:5797:handler___status_CONNECTING_CLIENT___STATE_SERVER_HANDSHAKE_WrongCredentials(): [ERROR] ProxySQL Error: Access denied for user 'proxycluster'@'10.11.24.100' (using password: YES)
This confirmed that the problem was not configuration synchronization itself but a failed authentication between the cluster nodes.
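If you want to check for the same symptom in your own environment, the ProxySQL error log is the quickest place to look. Assuming the default data directory, a filter along these lines surfaces failed peer logins:
# Default log location; adjust the path if your datadir differs
grep -i "Access denied for user" /var/lib/proxysql/proxysql.log | tail -n 20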
The Root Cause
The root cause was a subtle but critical mismatch between configured credentials.
ProxySQL uses two distinct roles for this process:
- admin-admin_credentials: Defines the users allowed to log into the admin interface and sets their actual passwords.
- admin-cluster_username / admin-cluster_password: The credentials a ProxySQL node uses when attempting to connect to another node in the cluster.
When one node communicates with another, it essentially attempts to log into the peer node’s admin interface using the admin-cluster_username and admin-cluster_password.
However, the receiving node does not validate against its own admin-cluster_password. Instead, it checks whether the provided username and password exist and match its internal admin-admin_credentials.
This is where setups silently break. If Node A connects using a password defined in admin-cluster_password, but Node B has a different password stored for that user in admin-admin_credentials, authentication fails. The connection is rejected, and synchronization never occurs.
For clustering to function, the cluster user must exist in admin-admin_credentials, and its password must match exactly what is configured in admin-cluster_password.
In this client's environment, the actual password for the proxycluster user was set to secret1234, while admin-cluster_password was configured as TnPdmRzKTI.
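One quick way to see the failure directly is to repeat, by hand, the login the cluster module performs, from one node to the other's admin port using the configured cluster credentials. A sketch with the IPs and password from this environment:
# Run from node 1 (10.11.24.100) against node 2's admin interface
mysql -u proxycluster -p'TnPdmRzKTI' -h 10.11.10.101 -P 6032 -e 'SELECT 1;'
With the mismatch above, this login is rejected with the same "Access denied" error seen in the logs, because node 2 validates it against the secret1234 password stored in its admin-admin_credentials, not against admin-cluster_password.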
Node 1 Checks:
proxysql> show variables like '%admin-admin%';
+-------------------------+-----------------------------------------+
| Variable_name           | Value                                   |
+-------------------------+-----------------------------------------+
| admin-admin_credentials | admin:proxypass;proxycluster:secret1234 |
+-------------------------+-----------------------------------------+
proxysql> show variables like '%admin-cluster%';
+------------------------+--------------+
| Variable_name          | Value        |
+------------------------+--------------+
| admin-cluster_username | proxycluster |
| admin-cluster_password | TnPdmRzKTI   |
+------------------------+--------------+
Node 2 Checks:
proxysql> show variables like '%admin-admin%';
+-------------------------+-----------------------------------------+
| Variable_name           | Value                                   |
+-------------------------+-----------------------------------------+
| admin-admin_credentials | admin:proxypass;proxycluster:secret1234 |
+-------------------------+-----------------------------------------+
proxysql> show variables like '%admin-cluster%';
+------------------------+--------------+
| Variable_name          | Value        |
+------------------------+--------------+
| admin-cluster_username | proxycluster |
| admin-cluster_password | TnPdmRzKTI   |
+------------------------+--------------+
Because both nodes had the same local mismatch, each node tried to connect using the wrong password, and the receiving node rejected the connection. Neither node could authenticate. The tricky part is that the cluster variables matched perfectly across nodes, making the configuration look correct while authentication failed silently in the background.
The Fix
Once we identified the mismatch, the fix was straightforward. We needed to align the cluster password with the actual admin credentials. We executed the following commands on both nodes:
UPDATE global_variables SET variable_value = 'secret1234' WHERE variable_name = 'admin-cluster_password';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
Synchronization resumed immediately, and all configuration, including users, started replicating as expected.
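A simple way to confirm that synchronization is genuinely working again is to make a throwaway change on one node and watch it appear on the other. For example (the user name and hostgroup here are illustrative):
-- On node 1: create a disposable user
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('sync_check', 'sync_check_pw', 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
-- On node 2, moments later: the user should appear without any manual steps
SELECT username, default_hostgroup FROM mysql_users WHERE username = 'sync_check';
-- Clean up from node 1 once verified
DELETE FROM mysql_users WHERE username = 'sync_check';
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;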
Key Takeaways
For ProxySQL clustering to work correctly, keep the following in mind:
- Cluster credentials must match actual admin credentials locally, not just consistently across nodes.
- The admin-cluster_* values must always align perfectly with the definitions inside admin-admin_credentials.
- Even a small credential mismatch will completely break synchronization without throwing obvious errors outside of the internal logs.
Final Thoughts
This type of issue is easy to miss because everything appears correct at first glance. It is natural to suspect networking issues, firewall blocks, or broader configuration errors. However, in this case, the root cause was a simple credential mismatch.
If your ProxySQL cluster is not syncing despite looking properly configured, always verify this early in your debugging process: Are the cluster credentials actually valid against the local admin credentials? That single check can save you hours of troubleshooting.
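On each node, a single query (sketched below) puts both sources of truth side by side and makes this kind of mismatch obvious:
SELECT variable_name, variable_value
FROM global_variables
WHERE variable_name IN ('admin-admin_credentials', 'admin-cluster_username', 'admin-cluster_password');
The password listed for the cluster user inside admin-admin_credentials must be identical to admin-cluster_password; if the two differ, peer logins will be refused exactly as they were here.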
Need Help with ProxySQL?
Struggling with ProxySQL routing, high availability, or cluster synchronization? Our database experts at Mydbops provide specialized consulting to keep your database proxy layer highly available, secure, and performant.
