
Scaling JMeter to load test AWS servers

September 24, 2020.

JMeter is a Java-based performance testing tool that challenges the user’s ability to grind the Java beans and extract the best from the tool. Our goal was to configure JMeter to put AWS servers to the test, generating enough load to push them to the point of failure, and we faced a few challenges in doing so.

We used JMeter to run performance tests for one of our clients, initially targeting 400 concurrent users. The first challenges we encountered were overcome by optimizing the test code, choosing the right samplers and sampler properties, picking thread groups that could deliver the desired level of concurrency, and, of course, using listeners only for debugging rather than during the actual load test.
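As an illustration of the listener point, most of the per-sample data JMeter records can be switched off so that result files stay small. The property names below come from JMeter’s standard save-service configuration; the values are only a sketch of a lean setup, not necessarily the exact one we used:

# user.properties — keep per-sample results lean during load tests
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.samplerData=false
jmeter.save.saveservice.requestHeaders=false
jmeter.save.saveservice.responseHeaders=false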

Further challenges came up when the target concurrency was bumped up to 1,000 users at a time, a volume JMeter could not handle out of the box. The tool itself started to give out before the AWS servers did, and the load test results showed JMeter errors rather than actual server failures. Serious configuration changes were required to scale JMeter up to the point where it could generate the desired load and drive the AWS servers to failure.

Here are our top tips to help scale up JMeter to put AWS servers through their paces:

Distributed Testing

Distributed testing is your first and best strategy to generate enough load to take an auto-scalable, perfectly load-balanced AWS server to the breaking point. Running JMeter on a single machine limits the number of concurrent users you can simulate, since you rely squarely on the CPU and memory of the one system the load test is running on.

In master-slave mode, running the test on two or more servers spreads the load across systems, scaling JMeter up to the combined capacity of all the machines involved. Distributing the test not only spreads thread generation across machines, but also spreads the test-data CSV files and the huge result files, for better scalability. The online community generally reports 250 to 500 concurrent threads per load generator; you can add machines to the distributed setup based on your overall load-generation needs.
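As a rough sketch of the moving pieces (host names and file names below are placeholders, not our actual setup): start the JMeter server process on every load generator, list those machines in the controller’s jmeter.properties, then launch the plan from the controller, shown here in non-GUI mode:

# On each load generator (slave):
jmeter-server

# On the controller (master), in jmeter.properties:
remote_hosts=10.0.1.11,10.0.1.12,10.0.1.13

# From the controller, run the plan on all remote hosts:
jmeter -n -t loadtest.jmx -r -l results.jtl

The same remote machines can also be driven from the GUI via Run > Remote Start, which helps when a test needs manual intervention.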

Increasing the number of TCP connection ports

A typical Windows machine will only make outbound TCP/IP connections using ports 1024-5000, and takes up to 4 minutes to recycle them. Running load tests requires a lot of connections in a short amount of time, quickly saturating that port range. When we attempted to target 400 concurrent users to load test the AWS services, the JMeter machine running on Windows OS could not even reach this target concurrency.

To increase the maximum number of ephemeral ports, follow these steps (or use the equivalent one-line command shown after the list):

  1. On each JMeter load agent machine, start Registry Editor.
  2. Locate the following subkey in the registry:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.
  3. Right-click Parameters.
  4. Create a new DWORD value named MaxUserPort.
  5. Right-click MaxUserPort and select Modify.
  6. Enter 65534 in the Value data field.
  7. Under Base, select the Decimal radio button.
  8. Click OK and close Registry Editor.
  9. Restart each JMeter load agent machine.
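If you prefer to script the change across all load generators, the same value can be written with Windows’ built-in reg command from an elevated command prompt (equivalent to the GUI steps above; a restart is still needed, as in step 9):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f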

Increasing heap size

After increasing the number of available TCP ports, we managed to scale JMeter up to the desired concurrency level. Soon, however, the JMeter machines started producing “Out of memory” errors. We had to run the load tests in GUI mode, since manual intervention was required to start and stop multiple thread groups as the application changed state from inactive, to active, to closed. Even though only the Aggregate Report listener was used, configured to save as little data as possible, we kept running into memory errors.

The JVM can be fine-tuned to increase the amount of RAM allocated to it, a common adjustment when a Java program like JMeter needs more memory than the default heap provides.

JMeter uses up to 512 MB of RAM by default, as set in the jmeter.sh and jmeter.bat launch scripts. We increased the heap size to 4 GB by editing the HEAP line in jmeter.bat:

set HEAP=-Xms1024m -Xmx4096m -XX:MaxMetaspaceSize=1024m
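On Linux or macOS, recent JMeter versions also pick up a HEAP environment variable exported before launch, so the same increase can be applied without editing jmeter.sh (values mirror the jmeter.bat change above):

export HEAP="-Xms1024m -Xmx4096m -XX:MaxMetaspaceSize=1024m"
./jmeter.sh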

However, running JMeter after increasing the heap size was producing the following error:

Invalid initial heap size: -Xms4g
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
errorlevel=1

To address this error, we had to upgrade Java (the JDK, specifically) and JMeter to the latest versions. Indeed, the latest JDK ships a server-optimized compiler that is better at optimizing Java code execution, and thus JMeter execution. After downloading and installing the latest JDK, we restarted the JMeter server and, voilà, no more errors cropped up.
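A quick sanity check after such an upgrade is to confirm which builds are actually on the path; the commands below assume recent JDK and JMeter releases, and java -version also reports whether the JVM is 64-bit, which matters for a 4 GB heap:

java -version
jmeter --version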

We were able to successfully run the load tests with 1,000 concurrent users, spiking up to 3,000.
