Your MySQL database cluster contains your most business-critical data and therefore proper performance under load is critical to business health. If response time is slow, customers (and staff) get frustrated and the business suffers a slow-down.
If the database layer is unable to keep up with demand, all applications can and will suffer slow performance as a result.
To prevent this situation, use load tests to determine the throughput as objectively as possible.
In the sample load.pl script below, increase load by increasing the thread quantity.
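As a sketch of how that ramp-up might be scripted (this wrapper is not part of the original script; all sysbench parameters match the examples below), the command for each thread count is echoed rather than executed so it can be reviewed first:

```shell
#!/bin/sh
# Illustrative ramp-up wrapper (hypothetical): print the sysbench command
# for an increasing thread count. Drop the `echo` to actually execute
# each run against the Connector on port 3306.
for THREADS in 1 2 4 8 16; do
  echo sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret \
    --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_"$(hostname -s)" \
    --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 \
    --threads="$THREADS" --events=0 --time=58 --rand-type=uniform \
    /usr/share/sysbench/oltp_read_write.lua run
done
```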
You can also run this against a database that already contains data without polluting it, since the script creates a new test database named after each node's hostname for uniqueness.
Note: The examples in this blog post assume that a Connector is running on each database node and listening on port 3306, and that the database itself is listening on non-standard port 13306.
Install sysbench
Install the sysbench tool from the Percona yum repository:
rpm -Uvh percona-release-0.1-4.noarch.rpm
yum install sysbench
Create and Prepare the Test Databases
First, prepare the per-host test databases by writing via the Connector so that the create database commands are replicated from the master to all nodes. Repeat on all nodes as the OS user tungsten:
export HOST=`/bin/hostname -s`; mysql -u app_user -P3306 -psecret -h127.0.0.1 -e "create database test_$HOST;"
sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_`/bin/hostname -s` --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=1 --events=0 --time=60 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua prepare
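When testing is finished, the generated tables can be removed with sysbench's matching cleanup command (this step is not part of the original walkthrough; connection parameters match the prepare command above). The command is echoed here for illustration; drop the `echo` to run it:

```shell
#!/bin/sh
# Optional cleanup (assumption: run after all load testing is done):
# drops the tables created by `prepare` in this node's test database.
echo sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret \
  --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_"$(hostname -s)" \
  --db-ps-mode=disable --tables=2 \
  /usr/share/sysbench/oltp_read_write.lua cleanup
```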
Create and run the load test script
touch load.pl; chmod 755 load.pl; vim load.pl
Below is the complete load.pl script:
#!/usr/bin/perl
our $time = 58;
our $threads = 2;
our $name = `/bin/hostname -s`;
chomp $name;  # strip the trailing newline from the hostname
our $cmd = <<EOT;
sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_$name --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=$threads --events=0 --time=$time --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run
EOT
print "Executing: $cmd\n";
system($cmd);
Here is an example run with threads set to 1 for a small test:
tungsten@db5:/home/tungsten # ./load.pl
Executing: sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_db5 --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=1 --events=0 --time=58 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run
sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Initializing worker threads...
SQL statistics:
    transactions:                        5086   (87.68 per sec.)
    queries:                             101720 (1753.69 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          58.0014s
    total number of events:              5086

Latency (ms):
         95th percentile:                13.70

Threads fairness:
    events (avg/stddev):           5086.0000/0.00
    execution time (avg/stddev):   57.9783/0.00
Install into cron if desired
We use this script on our lab clusters to generate ongoing load. To do so, we simply add the 58-second sysbench load job (--time=58) into cron to be executed once per minute:
tungsten@db3:/home/tungsten # crontab -e
* * * * * /home/tungsten/load.pl > /dev/null 2>&1
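The same entry can also be installed non-interactively instead of via crontab -e. The sketch below builds the cron line (paths match the example above) and prints it; the actual install idiom is shown in a comment:

```shell
#!/bin/sh
# Build the once-per-minute cron entry for the load job.
CRON_LINE='* * * * * /home/tungsten/load.pl > /dev/null 2>&1'
echo "$CRON_LINE"
# To append it to the existing crontab on a node, uncomment:
# ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -
```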
Increase load gradually by increasing the number of threads per host. Use the script on more than one host to increase load further. Use commands like iostat to monitor system resource utilization.
By testing through the Connector, you are testing multiple things at once:
- Overall throughput via the Connector (also helps prove out the overall network bandwidth)
- MySQL read and write capabilities on the Master in terms of throughput and system utilization
- Replicator speed to the slaves – check the slave latency – are they up to date? This tests the replicator, the network and the write speed of the slave databases
- Behavior and performance of the application (perform a switch under load: does it reconnect properly? Are there any issues or errors?)
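For the slave-latency check mentioned above, one quick spot check is to look at the appliedLatency field reported by Tungsten's trepctl CLI on each slave (the exact grep pattern is an assumption based on standard trepctl status output). The command is echoed here for illustration; drop the `echo` to run it on a real slave node:

```shell
#!/bin/sh
# Hypothetical latency spot check to run on each slave during the test:
# appliedLatency shows how far (in seconds) the slave is behind.
echo "trepctl status | grep appliedLatency"
```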
Perform as much testing as possible BEFORE you go live and save yourself tons of headaches!
To learn about Continuent solutions in general, check out https://www.continuent.com/solutions
For more information about monitoring Tungsten clusters, please visit https://docs.continuent.com/tungsten-clustering-6.0/ecosystem-nagios.html.
Tungsten Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!
Want to learn more or run a POC? Contact us.