How many times, within your performance test script, have you tried to set up pacing to match the load requirements, only to find that the selected think times were off target over and over again? If you really care about simulating the required load, you surely know the feeling.

Performance testing tools have mechanisms that can automatically take care of pacing and mitigate this problem (at least in theory :). Today we’re going to look closely at how JMeter tackles it, with a component called the Constant Throughput Timer.

Constant throughput timer to the rescue!

Let’s start by defining the perfect use case for JMeter’s Constant Throughput Timer (CTT).

When you have a throughput-style load requirement, which in this context means a number of occurrences in a period of time, the CTT is definitely the way to go. Some examples of such requirements:

  • generate a stable load of 100 requests per second for the duration of a test
  • load the system with 500 business transactions of one type per minute, concurrently with 100 business transactions of a different type per minute, etc.


The Constant Throughput Timer lets you directly translate the figure from the requirements into load generated by JMeter. To follow the desired throughput set by the user, the CTT performs dynamic pacing by computing the think times on the fly.
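Conceptually, the pacing boils down to simple arithmetic: given the target throughput and the time the previous sample ran, compute how long to wait before firing the next one. Here is a minimal Python sketch of that idea (not JMeter’s actual implementation, just the underlying calculation):

```python
def pacing_delay_ms(target_per_minute, last_sample_ms, now_ms):
    """Delay needed so samples fire at the target rate.

    target_per_minute -- desired throughput (samples per minute)
    last_sample_ms    -- timestamp of the previous sample (ms)
    now_ms            -- current timestamp (ms)
    """
    interval_ms = 60_000 / target_per_minute  # ideal gap between samples
    delay = last_sample_ms + interval_ms - now_ms
    return max(0, delay)  # never wait a negative amount of time

# 120 samples/min -> one sample every 500 ms; 200 ms have already elapsed
print(pacing_delay_ms(120, last_sample_ms=1_000, now_ms=1_200))  # -> 300.0
```

If the script is already running behind schedule, the computed delay is negative and no pause is applied, which is why the CTT can only slow threads down, never speed them up.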

I have to admit that I rate the CTT component highly and use it a lot when testing with JMeter; however, few people seem to use the Constant Throughput Timer to its full potential. Let’s go through its configuration screen and shed some light on the possible ways of using it.

User interface

The component’s UI is quite simple. There is an input field for your target throughput (in samples per minute) and an option to select the scope used to calculate the throughput (depending on where the component is placed):

The ‘Calculate Throughput based on’ dropdown offers five options:

  • this thread only – each individual thread will generate the specified number of samples per minute
  • all active threads – the target throughput is divided among all currently running threads
  • all active threads (shared)
  • all active threads in current thread group – as above, but only the running threads in the current thread group are taken into account
  • all active threads in current thread group (shared)

Let’s skip the description of how the ‘shared’ scope differs from the non-shared one (we will come back to that later on) and go straight to the most important stuff: examples of how you can benefit from using the CTT.

4 typical ways of using the Constant Throughput Timer in practice

  • Root level placement of the component

    An obvious one: the CTT simply manages all active threads in the script, pacing the requests globally across all thread groups:

As a result of the set-up above, the script will use requests from all 4 thread groups to generate a combined throughput of 200 requests per minute.

  • Placing the CTT inside a thread group and restricting the scope of throughput calculation to just the current thread group

This might be useful when you want to simulate several user scenarios (placed in separate thread groups), each with different load requirements. JMeter then computes and applies think times dynamically in each thread group to achieve the required throughput.

Bear in mind that the target throughput is set as a number of requests per minute, hence you always need to keep track of how many requests (samplers) are placed in a specific thread group and compute the throughput accordingly. In the case pictured above, the requirement is to simulate 4 user scenarios per minute. Since each scenario comprises 6 requests, the target throughput set in the CTT is 4 * 6 = 24.
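The recalculation is trivial, but easy to overlook when the script changes; as a quick sanity check (the numbers below match the example above):

```python
scenarios_per_minute = 4   # required user scenarios per minute
samplers_in_scenario = 6   # number of requests (samplers) in the thread group

# The CTT counts individual samplers, not whole scenarios, so the target
# throughput to enter in the component is the product of the two:
ctt_target = scenarios_per_minute * samplers_in_scenario
print(ctt_target)  # -> 24 (requests per minute to set in the CTT)
```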

It’s worth adding that the dynamic think times are applied before each sampler (apart from the first sampler execution, due to the implementation of the CTT). Also, whenever the number of samplers in the thread group changes, the throughput needs to be recalculated to keep following the requirements for the user scenario.

  • Maintaining throughput on business transaction level

    That’s actually the most common CTT use case for me.

Let’s begin by specifying the context, assuming we have load requirements defined in a manner similar to the simple example below. Looking at the figures in the ‘load to simulate’ column, it makes sense to simulate the transactions by putting them in separate thread groups, with Constant Throughput Timers driving the throughput independently for each of them.

Another thing to consider is that in modern applications a single user action quite often triggers not one but many requests, sent concurrently or sequentially to the server. Normally, a Constant Throughput Timer puts a think time before every request sent, so in order to fire the samplers immediately one after another, you may need to structure the script slightly differently:

– each business transaction/user action is put in a separate thread group.

– the CTT component should be placed under a ‘Test Action’ sampler. As a result, a dynamic think time will only be added before the ‘Test Action’ component; the rest of the samplers will be executed sequentially, immediately one after another.

– in the CTT component we can enter the figure taken directly from the load requirements for the business scenario shown above (e.g. 42 transactions per minute).

Now we can be sure that the whole business transaction will be executed the desired number of times, no matter how many requests it actually contains. Moreover, the execution of the requests is not “spread out”: the think time is applied only before the start of a business transaction, not before each individual request.

A drawback of this approach is that the various thread groups might be dependent on each other, e.g. you log in in one thread group, create a client in another and then create an account for that client in yet another. This requires sharing variables between thread groups, e.g. using JMeter’s global properties, and may be cumbersome for inexperienced performance testers.

The next approach simplifies things a bit.

  • Utilising a Constant Throughput Timer inside a loop

    A mix between the user scenario and business transaction approaches.

This method lets you fix some part of the script flow, e.g. the user login, to avoid the daunting synchronization of variables (session cookies in the case of login) across many thread groups:

In the example shown above, the user login is performed first, and then the business transaction, paced by the Constant Throughput Timer, is executed many times inside a loop. The Loop Controller might be set to an arbitrary number of iterations or to ‘forever’. The latter means that the login will be executed only once and any requests placed after the loop will never run at all (such as the logout step in the screenshot above).

Shared vs non-shared mode of calculating throughput

Let’s go back to the throughput calculation modes offered by the Constant Throughput Timer.
So what’s the story behind the ‘shared’ and non-shared modes? They are simply different implementations of how the dynamic pause time is calculated by the CTT:

  • non-shared mode calculates the pause independently for each thread. To be more exact: the test starts, a thread becomes active and fires its first request; then, before each subsequent sampler, the CTT computes and applies the pause time knowing only the target throughput, the number of currently active threads and when the previous sample was run in the current thread.
  • ‘shared’ mode takes into account all threads that are executing requests. Compared to the pause computation in non-shared mode, all threads are now considered when checking when the last sample was run. This results in the sample executions being spread more evenly in time.
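To make the distinction concrete, here is a simplified Python model of the two modes (a hedged sketch of the scheduling idea only, not JMeter’s actual implementation): in non-shared mode each thread paces against its own previous sample, so with N threads each thread fires every N × interval; in shared mode all threads pace against the single most recent sample, whichever thread ran it.

```python
def fire_times(n_threads, samples_per_thread, target_per_minute, shared):
    """Simplified model of sample start times under the two CTT modes.

    Returns a sorted list of fire times in milliseconds. Assumes each
    thread starts 100 ms after the previous one (a short ramp-up).
    """
    interval = 60_000 / target_per_minute  # desired gap between any two samples
    times = []
    if shared:
        # One common clock: consecutive samples, from whichever thread,
        # are spaced exactly `interval` apart.
        last = 0.0
        for _ in range(n_threads * samples_per_thread):
            last += interval
            times.append(last)
    else:
        # Per-thread clocks: each thread independently fires every
        # n_threads * interval, so the combined rate still hits the target.
        for t in range(n_threads):
            start = t * 100.0  # ramp-up offset of this thread
            for i in range(samples_per_thread):
                times.append(start + (i + 1) * n_threads * interval)
    return sorted(times)

# 2 threads, target 120 samples/min -> ideal gap of 500 ms between samples
print(fire_times(2, 3, 120, shared=True))   # evenly spaced every 500 ms
print(fire_times(2, 3, 120, shared=False))  # pairs 100 ms apart, then a gap
```

Both modes deliver the same average throughput; the difference is that non-shared mode produces clusters of samples followed by gaps, while shared mode spreads them evenly.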

Check out the following example to better understand the difference between the modes. Two threads were started for both the shared and non-shared mode; in both cases there was a short ramp-up of the threads’ start times.

Look at the patterns in the request start times (2nd column): in non-shared mode (marked by green rectangles) each thread executes requests independently, approximately every 13 seconds, while in shared mode the threads equally share the responsibility for sample execution. That results in a more controlled spread of the request sending.

Which mode should you choose in your tests? Well, it depends on the use case; give both modes a try during dry runs of the script and decide which simulates the throughput better in your context.

Drawbacks of using CTT

    • You can’t do a nice, stable ramp-up of the load out of the box with the CTT. Of course you can do a standard ramp-up of threads, but all active threads try to reach the desired throughput right from the start of the test. But fear not: there is actually an easy way to overcome this, and I will show the ramp-up technique in the next post.


  • The other, minor issue is that each thread executes its first request/transaction as soon as it starts. This might lead to a spike of request executions at the beginning of your test that exceeds your desired throughput, e.g.:

    It works like that even in shared mode; that is simply how it is implemented. You can try to mitigate it by using a sensible ramp-up of the threads in the thread group.
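One rough rule of thumb for picking that ramp-up (my own heuristic, not an official JMeter guideline): since every thread fires its first sample immediately on start, avoid starting threads faster than the target rate itself.

```python
threads = 10               # number of threads in the thread group
target_per_minute = 200    # desired overall throughput (samples per minute)

# First samples fire the moment each thread starts, so to keep the initial
# burst at or below the target rate, spread the thread starts over at least:
min_rampup_s = threads * 60 / target_per_minute
print(min_rampup_s)  # -> 3.0 seconds of ramp-up, i.e. one thread every 0.3 s
```

Any ramp-up at or above this value keeps the start-of-test burst within the configured throughput; in practice a longer ramp-up is usually even safer.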