Thursday, May 26, 2011

JMeter setup for QPS evaluation

JMeter is a feature-rich tool for load testing and analyzing your system. Here I plan to share the steps for setting up a simple JMeter test to evaluate the throughput and determine the QPS (queries per second) the system can support.

The rule of thumb for determining QPS is to keep increasing the number of requests per second sent to the system until you find the saturation point, where throughput drops dramatically and response time climbs. In any load test you will see that the response time stays roughly constant (with minor fluctuations) while throughput increases linearly as the load, i.e. the number of requests to the server, increases. As the server reaches its saturation point, throughput stops increasing, then dips sharply, and the response time shoots up. The throughput your server handled just before reaching this saturation point is the maximum throughput it can sustain, in other words the QPS your server can support.

To set up this test, we will start by downloading and installing JMeter.

Once you have JMeter installed, you can start it in your preferred way; I start it in GUI mode. To create a simple test for our purpose, here are the steps I followed.
Set up the test plan

Step 1: Add a Thread Group to your JMeter test plan.

Right-click on Test Plan and select Add -> Threads (Users) -> Thread Group
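If you prefer to work with the saved test plan (.jmx) file directly, the Thread Group appears as an XML element roughly like the sketch below. This is a minimal illustration rather than a dump of my actual plan; the thread count, ramp-up, and loop count are values you would tune for your own test.

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
  <!-- number of concurrent users (threads); we will vary this per run -->
  <stringProp name="ThreadGroup.num_threads">10</stringProp>
  <!-- ramp-up period in seconds; 0 starts all threads at once -->
  <stringProp name="ThreadGroup.ramp_time">0</stringProp>
  <!-- keep the test running when a sample fails -->
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
  <!-- loop controller: how many times each thread repeats the samplers -->
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController"
               guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <stringProp name="LoopController.loops">100</stringProp>
  </elementProp>
</ThreadGroup>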

Step 2: Next, we will add an HTTP Request sampler.

Right-click on Thread Group and select Add -> Sampler -> HTTP Request


Notice that the path has a q parameter with a value substitution; we will fill in unique values from a file so that each request sent to the server hits a unique URL.
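In the saved test plan, the sampler shows up roughly like the sketch below. The host, port, and path here are placeholders for illustration, and ${MD5} is the variable name I use in this write-up; it has to match whatever you configure in the CSV Data Set Config in Step 4.

<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
  <!-- placeholder target server -->
  <stringProp name="HTTPSampler.domain">myserver.example.com</stringProp>
  <stringProp name="HTTPSampler.port">8080</stringProp>
  <!-- the q parameter is filled from the ${MD5} variable provided by the CSV Data Set Config -->
  <stringProp name="HTTPSampler.path">/search?q=${MD5}</stringProp>
  <stringProp name="HTTPSampler.method">GET</stringProp>
  <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" enabled="true">
    <collectionProp name="Arguments.arguments"/>
  </elementProp>
</HTTPSamplerProxy>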

Step 3: Now, to see the results in a graph, we need to add the Graph Results listener.

Right-click on HTTP Request and select Add -> Listener -> Graph Results
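In the .jmx file this listener is a ResultCollector element; a minimal sketch follows (the saveConfig block that the GUI normally writes is omitted here for brevity):

<ResultCollector guiclass="GraphVisualizer" testclass="ResultCollector" testname="Graph Results" enabled="true">
  <boolProp name="ResultCollector.error_logging">false</boolProp>
  <!-- leave the filename empty to view results only in the GUI,
       or set a path to also write the samples to a .jtl file -->
  <stringProp name="filename"></stringProp>
</ResultCollector>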


Pass Unique Values

Step 4: To send a different parameter value with each request to the server, we can add the "CSV Data Set Config" config element.

Right-click on HTTP Request and select Add -> Config Element -> CSV Data Set Config

The MD5.csv file has one unique id per line. If you have multiple parameters, add them comma-separated in the Variable Names field.
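In the .jmx file this config element looks roughly like the sketch below; MD5.csv and the MD5 variable name are from my setup, so substitute your own file name and variable names.

<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="CSV Data Set Config" enabled="true">
  <!-- file with one unique id per line; a relative path is resolved against the test plan's directory -->
  <stringProp name="filename">MD5.csv</stringProp>
  <!-- comma-separated list of variable names, one per column in the file -->
  <stringProp name="variableNames">MD5</stringProp>
  <stringProp name="delimiter">,</stringProp>
  <boolProp name="quotedData">false</boolProp>
  <!-- recycle=true starts over from the top of the file once all lines are used -->
  <boolProp name="recycle">true</boolProp>
  <boolProp name="stopThread">false</boolProp>
  <stringProp name="shareMode">shareMode.all</stringProp>
</CSVDataSet>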
Monitor results / response times

Step 5: To see the summarized results, add the Summary Report listener.

Right-click on HTTP Request and select Add -> Listener -> Summary Report
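In the test plan XML this is the same ResultCollector element as the Graph Results listener, just with a different GUI class, roughly:

<ResultCollector guiclass="SummaryReport" testclass="ResultCollector" testname="Summary Report" enabled="true">
  <boolProp name="ResultCollector.error_logging">false</boolProp>
  <!-- optionally set a file name to also save the summarized samples to disk -->
  <stringProp name="filename"></stringProp>
</ResultCollector>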

Now we can start generating load on the server by going to the Thread Group section in the left panel. To show the drop in throughput as load increases, in this example I simply increase the number of users (i.e. threads).

From the Thread Group tab, I increase the number of concurrent users, starting with 10, with the Ramp-up period set to zero.
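Instead of editing the thread count in the GUI before every run, one option (a sketch on top of my setup, not something shown above) is to read it from a JMeter property using the __P function, so the same plan can be rerun at 10, 50, 75, ... users by passing -Jusers=<n> on the command line (for example together with -n -t plan.jmx -l results.jtl for a non-GUI run):

<!-- in the Thread Group: take the thread count from the "users" property, defaulting to 10 -->
<stringProp name="ThreadGroup.num_threads">${__P(users,10)}</stringProp>
<stringProp name="ThreadGroup.ramp_time">0</stringProp>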

Here is the summarized report of the experiment:

# of Users (concurrent)    Avg. resp. time (ms)    Throughput (qps)    KB/sec
10                         193                     33.44481605         101.2881
50                         280                     42.08754209         153.5357
75                         225                     78.45188285         234.3444
100                        209                     105.9322034         306.0768
150                        314                     83.01051467         246.4645
200                        255                     198.764146          317.7128
250                        266                     241.070028          521.4105
275                        323                     251.8878357         773.7123
300                        619                     88.13160987         260.3781

This shows that the QPS of my server is around 250-275. As we increase the load to 300 users, we see a spike in the average response time and a dip in throughput, indicating that the server has reached its saturation level. The summarized report also gives you the average response time the server delivers at each load level. This information is crucial when designing a system.

This experiment can be varied in different ways to introduce other factors more applicable to the system you are testing, e.g., adding load in steps or with delays, or creating a set of user behaviors and running it in loops.