Load Testing With Artillery and Continuous Integration
As web applications become more complex, more organizations have come to rely on splitting their applications into smaller, discrete services to help maintain them in the long run. Instead of building a huge, monolithic codebase that does everything, developers are beginning to use APIs and microservices to isolate specific functionality. This service-oriented architecture helps larger teams work independently and makes their systems easier to develop, test, and deploy.
Since these separate applications now rely on each other for the entire system to function well, performance and reliability are crucial requirements for this architecture. If one application stops working for any reason, it’ll likely affect other services that rely on it. Service-oriented architectures need to be able to withstand performance degradation and inevitable failures. Speed and resiliency are no longer optional.
To help ensure that your services can handle sudden changes in usage, such as a spike in traffic or an unforeseen event that rapidly increases the system’s workload, you need to test for it ahead of time. As a result, performance and load testing are becoming increasingly critical to guarantee that each part of your system can deal with temporary or sustained traffic without breaking down.
Load testing will help you validate that your system’s environment works as intended. For instance, you want to know that your logging and alerts get triggered if a service fails, or that your auto-scaling system spins up additional servers under an increased workload. You’ll also want to keep track of performance during the software development lifecycle. If you deploy a slow application, you’ll inevitably turn away potential customers.
Performing these tests occasionally helps confirm that things are working well, but running them sporadically isn’t enough. Running load tests continuously becomes essential not only to keep your systems up and running but also to ensure that you’re providing the level of performance - often defined in service-level objectives, or SLOs - expected by your team and customers.
In this article, we’ll show you how to load-test an HTTP API service using Artillery. First, we’ll set up a test script that will verify that the service can handle hundreds of requests at a time. Then, we’ll add a few conditions to cover the SLOs defined for the service to let us know when our pre-defined thresholds aren’t met. Finally, we’ll integrate Artillery with CircleCI to ensure you always keep tabs on how your service performs over time.
What we’ll test with Artillery
For this article, we’ll use an example HTTP service of a JSON API running the backend of an e-commerce website. The API has a few endpoints for searching products by keyword, fetching product information, and adding items to a cart for purchase. In our fictional e-commerce site, let’s say we’ve observed that the typical customer performs the following actions when they want to purchase a product:
- They search for a product using a few keywords.
- After receiving the search results, they click on a product - usually the first result.
- They read the product information for a few seconds.
- They decide this is the product they want and add the item to their cart.
This flow is vital for the company’s business and needs to work at all times. If a customer can’t find a product quickly or add it to their cart, they’ll probably search for the product elsewhere and never return to our store. We want to test this flow to ensure the service will work properly at all times, especially during heavy-traffic periods such as Black Friday and Cyber Monday. Let’s do that with Artillery.
Setting up an Artillery test script
Artillery test scripts are YAML files containing two main sections:
- config: Allows you to set the target for the test, the load phases that define how many virtual users you want to send to the target, and additional configuration settings.
- scenarios: Allows you to describe the actions you want a virtual user (VU) to take against your target service.
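Put together, a minimal test script has the following shape. This is only a sketch to illustrate the two sections - the target URL and endpoint are placeholders, not the service we’ll test in this article:

config:
  target: 'https://example.dev' # placeholder URL
  phases:
    - duration: 30
      arrivalRate: 2 # two new VUs per second for 30 seconds
scenarios:
  - flow:
      - get:
          url: '/health' # hypothetical endpoint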
We’ll create our test script for the e-commerce service inside the same code repository, under the tests/performance sub-directory. Inside this directory, we’ll add a new file called search-and-add.yml. The name of the test script or sub-directory doesn’t matter - you can place your tests anywhere.
Defining your target and load phases
First, let’s write the configuration for our test script. The primary settings we need in this section are the target URL where our service is located and the load phases Artillery will run during the test. The type of load you should send to your service depends on your particular situation, such as the typical usage of your web application and your system architecture.
For this example, we’ve decided on the following load phases for our Artillery tests:
- First, we’ll warm up the HTTP service by sending a small amount of traffic. In this phase, Artillery will send five VUs to the service every second for one minute.
- Next, we want to increase the load to the service gradually. After the warm-up phase, Artillery will go from 5 VUs to 50 VUs per second for two minutes, slowly sending more VUs every second.
- Finally, we want to see how our service deals with a sustained load: we’ll hammer the service with 50 new VUs every second for 10 minutes.
With our load phases defined, we can write them in our Artillery test script:
config:
target: 'https://ecommerce.test-app.dev'
phases:
- duration: 60
arrivalRate: 5
name: Warm up
- duration: 120
arrivalRate: 5
rampTo: 50
name: Ramp up load
- duration: 600
arrivalRate: 50
name: Sustained load
Injecting data from an external file
Next, we want to set up some additional configuration to help with our test scenarios. As mentioned earlier, one of the things we want to test is a user searching for a product. A good practice when load-testing dynamic content is to vary the data you use during testing and check how your system behaves.
Artillery allows you to define a list of variables with different values, which you can then use in your test scenarios. You can do this in one of two ways - you can define the variables and data inside the test script using the config.variables setting, or you can load an external CSV file using the config.payload setting. For this example, we’ll load data from a CSV file to use later when we define our test scenarios.
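For comparison, here’s what the inline alternative could look like using config.variables. This is a sketch only - when a variable maps to a list of values, Artillery picks one of them for each VU:

config:
  target: 'https://ecommerce.test-app.dev'
  variables:
    keyword:
      - 'computer'
      - 'video game'
      - 'vacuum cleaner'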
We’ll place a file called keywords.csv in the tests/performance sub-directory of our code repository. This file will contain a few different keywords that customers typically use on the e-commerce website:
computer
video game
vacuum cleaner
toys
hair dryer
To load this data, we’ll set up the config.payload setting in our test script:
config:
target: 'https://ecommerce.test-app.dev'
phases:
- duration: 60
arrivalRate: 5
name: Warm up
- duration: 120
arrivalRate: 5
rampTo: 50
name: Ramp up load
- duration: 600
arrivalRate: 50
name: Sustained load
payload:
path: 'keywords.csv'
fields:
- 'keyword'
config.payload uses the path setting to point to the file containing the data, and the fields setting to define the variable names for use in our scenarios. The example CSV file only contains one column, so we only need to define one variable, called keyword. You’ll see this variable in use later.
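If the CSV file had more columns, you would list one name per column under fields, in order. For instance, with a hypothetical second column holding a product category:

payload:
  path: 'keywords.csv'
  fields:
    - 'keyword' # first CSV column
    - 'category' # hypothetical second CSV column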
Configuring test scenario and steps
With our basic configuration prepared for the load test, it’s now time to define the steps you want each VU to go through during the test.
In an Artillery test script, you can define one or more scenarios for your VUs under the scenarios setting. Each scenario contains a flow of one or more actions that the VU will go through. Since we’re testing an HTTP service, you can define typical HTTP requests such as GET and POST, just as a client would make under regular use.
For this article, we’ll define a single scenario which will go through the typical customer flow described earlier. Each VU that Artillery sends to the HTTP service will search using one of the keywords from the configured CSV payload, fetch the details, pause for a few seconds, and add it to their cart. Let’s see how this is defined in an Artillery test script:
scenarios:
- name: 'Search and add to cart'
flow:
- post:
url: '/search'
json:
kw: '{{ keyword }}'
capture:
- json: '$.results[0].id'
as: 'productId'
- get:
url: '/products/{{ productId }}/details'
- think: 5
- post:
url: '/cart'
json:
productId: '{{ productId }}'
The scenario defined above is named “Search and add to cart”, and it contains four steps under the flow setting.
The first step in the flow makes a POST request to our service’s /search endpoint. The API endpoint expects a JSON object in the request body, with a key called kw containing a keyword. We want to use one of the values from the CSV payload file, which we can access with {{ keyword }}. By default, Artillery will choose a random value from the CSV file for each VU. For instance, one virtual user will make a request with the body {"kw": "computer"}, while the next makes a request with the body {"kw": "video game"}.
On a successful request, the HTTP service returns a JSON response containing an array of matching products. We want to grab the product ID of the first result to use in later steps of our scenario. Artillery allows you to do this with the capture setting, which lets us parse the JSON response (using JSONPath) for the id property of the first object in the returned array and store the value in a variable called productId.
The second step of the flow uses the productId variable to make a GET request to the /products/:productId/details endpoint. This request interpolates the variable we captured earlier into the URL, so each VU makes a different request depending on the keyword used in the first step.
Next, we want to simulate a customer spending time on the website after fetching the details of a product. In Artillery, we can easily accomplish this with the think setting, which pauses the test execution for the VU for a specified number of seconds before moving on to the next step. Here, we’re pausing the test for five seconds before we go to the final step.
Finally, the last step of this scenario adds the product to the cart. The HTTP service handles this action through the POST /cart endpoint, which also expects a JSON object as part of the request. The request body requires a productId key containing the ID of the product to add to the customer’s cart. We already have this value stored in the productId variable, so we just need to include it in the JSON request.
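While developing a scenario like this, Artillery’s log step can help confirm that variables are captured as expected; each VU prints the interpolated message to the console. A quick debugging sketch you could temporarily drop into the flow:

- log: 'Searched for {{ keyword }}, captured product {{ productId }}'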
With this, we’ve covered a full end-to-end flow for our load test using Artillery. Here’s the complete test script with the configuration and scenarios defined:
config:
target: 'https://ecommerce.test-app.dev'
phases:
- duration: 60
arrivalRate: 5
name: Warm up
- duration: 120
arrivalRate: 5
rampTo: 50
name: Ramp up load
- duration: 600
arrivalRate: 50
name: Sustained load
payload:
path: 'keywords.csv'
fields:
- 'keyword'
scenarios:
- name: 'Search and add to cart'
flow:
- post:
url: '/search'
json:
kw: '{{ keyword }}'
capture:
- json: '$.results[0].id'
as: 'productId'
- get:
url: '/products/{{ productId }}/details'
- think: 5
- post:
url: '/cart'
json:
productId: '{{ productId }}'
Running a load test
Let’s make sure the test script works as intended. If you have Artillery installed on your local system, you can run the test with the following command:
artillery run tests/performance/search-and-add.yml
This command will begin sending virtual users to your HTTP service, starting with the warm-up phase. Every 10 seconds, Artillery will print a report to the console showing the number of scenarios executed during that period. At the end of the performance test, you’ll receive a complete summary, including the number of scenarios launched and completed, the number of requests completed, response times, status codes, and any errors that may have occurred.
Here’s an example of what the summary looks like for the test defined in this section:
All virtual users finished
Summary report @ 08:28:52(+0000) 2021-06-05
Scenarios launched: 33689
Scenarios completed: 33689
Requests completed: 101067
Mean response/sec: 128.57
Response time (msec):
min: 9
max: 206
median: 13
p95: 99
p99: 111
Scenario counts:
Search and add to cart: 33689 (100%)
Codes:
200: 67378
201: 33689
Meeting SLOs in your load test
As part of our testing, we also want to check that our HTTP service maintains a certain level of performance and reliability while under load. We can place some assertions inside of our test script with Artillery to verify performance-related issues during load testing. These checks can help us meet some of our service-level objectives and detect potential problems quickly.
After discussing with the team and observing how the service behaves under normal circumstances, we decided to validate the following in our test:
- The aggregate 99th-percentile (p99) response time across all requests in the scenario should be 200 milliseconds or less.
- Less than 0.5% of all requests are allowed to fail.
We can define these SLOs using the ensure setting in the config section of the test script:
config:
target: "https://ecommerce.test-app.dev"
phases:
- duration: 60
arrivalRate: 5
name: Warm up
- duration: 120
arrivalRate: 5
rampTo: 50
name: Ramp up load
- duration: 600
arrivalRate: 50
name: Sustained load
payload:
path: "keywords.csv"
fields:
- "keyword"
ensure:
p99: 200
maxErrorRate: 0.5
scenarios:
- name: "Search and add to cart"
flow:
- post:
url: "/search"
json:
kw: "{{ keyword }}"
capture:
- json: "$.results[0].id"
as: "productId"
- get:
url: "/products/{{ productId }}/details"
- think: 5
- post:
url: "/cart"
json:
productId: "{{ productId }}"
With this setting, the tests run the same way as before. However, at the end of the test run, Artillery will check the aggregate p99 latency and the error rate. If the p99 latency is over 200 ms or more than 0.5% of requests failed to complete, Artillery will exit with a non-zero exit code. In a continuous integration pipeline, this will cause the test run to fail.
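You can verify this behavior locally before wiring it into a pipeline. A quick check in a POSIX shell:

artillery run tests/performance/search-and-add.yml
echo "Exit code: $?" # non-zero means an ensure threshold was breached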
Setting up CircleCI
Now that we have a working load test for our HTTP service, the next step is to set up a continuous integration service to run these scenarios frequently. We can find plenty of excellent choices for CI/CD services out there. For this article, we’ll use CircleCI as the place to continuously run our Artillery tests.
CircleCI has a generous free plan to help you get started. To sign up and begin using the free tier, you’ll need a GitHub or Bitbucket account, since CircleCI currently only supports these two services. Once you authenticate through GitHub or Bitbucket and grant the necessary permissions to CircleCI, you’ll see your existing code repositories.
In this example, we have a code repository on GitHub called ecommerce-backend, containing the codebase for the HTTP service and the Artillery test script we created in this article. Click the "Set Up Project" button for this project, and CircleCI will help you choose a programming language to get a pre-populated configuration file. We’ll create the CircleCI configuration file from scratch, so we can skip this step.
CircleCI works by connecting to your code repository and looking for a configuration file called .circleci/config.yml. This file contains all the steps required to build an environment and execute your workflow. After you set up the project on CircleCI, it will check the configuration file after each commit. Let’s start by building a basic CircleCI configuration that loads an environment with Artillery ready to run our tests.
The contents of the .circleci/config.yml file will be the following:
version: 2.1
jobs:
build:
docker:
- image: artilleryio/artillery:1.7.2
steps:
- checkout
- run:
name: Execute load tests
command: /home/node/artillery/bin/artillery run tests/performance/search-and-add.yml
All CircleCI configuration files require a version setting to tell CircleCI which version of their platform to use for our builds. The latest available version at this time is 2.1, so we’ll use it for this example.
Next, we use the jobs setting to define the environment we want for our builds and the steps to perform during each build. We can define one or more jobs and organize them into a workflow, but we’ll create a single job called build for this article. Inside our build job, we’ll do two things: set up the environment where we’ll run our load test, and define the steps for performing the test.
CircleCI provides multiple options for the environment that runs your workflow, such as spinning up a Windows or macOS machine. One of the easiest and most common ways to create a build environment on CircleCI is to use Docker. Artillery provides pre-built Docker images with everything we need to run our test script. Using the docker setting in the configuration file, we can specify the image name with image, and CircleCI will pull the image and run it as a container.
Next, we use the steps setting to define what actions to take inside the Docker container. Here, we only need two steps. First, we have to place the code from our repository inside the container so Artillery has access to the test script. CircleCI has a special step called checkout that does this for us.
With the test script inside the Docker container, we can run the load test, since Artillery comes pre-installed in the image. The run step allows us to execute any command inside the container. Currently, Artillery is not in the environment’s PATH (which will be corrected soon), so we have to point to the location of the binary (/home/node/artillery/bin/artillery). The command runs inside the directory containing our codebase, so we can point to the test script directly.
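If you’d rather have artillery available on the PATH, one alternative is to start from a Node.js image and install Artillery yourself. This is a sketch only - the image tag is just an example, and installing on every run makes builds slower than using the pre-built Artillery image:

jobs:
  build:
    docker:
      - image: cimg/node:14.17 # example Node.js image tag
    steps:
      - checkout
      - run:
          name: Install Artillery
          command: npm install -g artillery@1.7.2
      - run:
          name: Execute load tests
          command: artillery run tests/performance/search-and-add.yml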
Now that the CircleCI configuration is complete, we can commit it to the code repository and push it to GitHub. CircleCI will detect the commit, find the configuration file, and trigger a new build with the defined steps. If everything is set up correctly, CircleCI will pull the Docker image, start it up, place the code inside, and execute the Artillery load test. After a few minutes, Artillery will finalize the test, and if the defined SLOs pass, we’ll have our first passing build.
Generate a report
While it’s nice to see the aggregate report of our load test, it doesn’t show at first glance how our HTTP backend handled the different phases. We can scroll through the output provided by Artillery, but going through the entire list is tedious. Instead, we can use Artillery’s ability to generate an HTML report of the complete test, which gives us an excellent overview of the entire run.
To generate an HTML report of an Artillery test run, we need to perform two steps. First, we need to create a JSON report of the test run. We can do this by modifying the step that runs our tests to include the --output flag:
- run:
name: Execute load tests
command: /home/node/artillery/bin/artillery run --output reports/report.json tests/performance/search-and-add.yml
The second step in the process is to convert the generated JSON report to an HTML report using the artillery report command, which takes a JSON report created by Artillery and produces a nicely formatted HTML report. We can add this as an additional step in our CircleCI configuration file:
- run:
name: Generate HTML report
command: /home/node/artillery/bin/artillery report --output reports/report.html reports/report.json
To view these reports after your test run completes on CircleCI, you need to store them as artifacts. CircleCI has another special step called store_artifacts that allows you to upload any files created during a build and store them for up to 30 days. You can then access the uploaded files through the CircleCI build results page. We’ll add this step after generating the HTML report:
- store_artifacts:
path: reports/
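store_artifacts also accepts an optional destination key that sets the prefix the files appear under in the build’s "Artifacts" tab, which can help keep things organized if you upload artifacts from several steps:

- store_artifacts:
    path: reports/
    destination: artillery-reports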
Notice that we’re generating both the JSON and HTML reports in the reports directory, which doesn’t exist in our code repository. Without this directory, Artillery will raise an error and our tests will fail, so we have to create the directory before running the test and generating the reports. Our updated CircleCI configuration file looks like this:
version: 2.1
jobs:
build:
docker:
- image: artilleryio/artillery:1.7.2
steps:
- checkout
- run:
name: Make reports directory
command: mkdir reports
- run:
name: Execute load tests
command: /home/node/artillery/bin/artillery run --output reports/report.json tests/performance/search-and-add.yml
- run:
name: Generate HTML report
command: /home/node/artillery/bin/artillery report --output reports/report.html reports/report.json
- store_artifacts:
path: reports/
When we commit the file to GitHub, our tests will run again. After the tests complete, you will see both the JSON and HTML reports under the "Artifacts" tab of the CircleCI build.
Run a nightly build
We’ve been triggering a build on CircleCI after each commit to the code repository, but this isn’t ideal. The load test takes about 13 minutes to complete, which can cause delays to your entire team’s development workflow. If you have multiple team members committing code all day, you’ll quickly find your CircleCI builds backed up, especially on the free tier, which only runs one build at a time.
Instead of running our tests after every commit, let’s say we only want to run them once a day during off-peak hours. This way, we can keep track of our service’s performance every day without slowing down the development team’s builds while they’re working.
CircleCI uses workflows to give you the choice of running jobs after each commit or on a schedule. A workflow is a way for CircleCI to orchestrate defined jobs in different ways. For instance, you may have three jobs you want to run sequentially, or you may only want to trigger a build when someone commits to a specific branch in your repository.
Our existing CircleCI configuration doesn’t have a workflow defined, so CircleCI triggers a build every time we commit new code to the repository. Let’s change that to run our load test only once a day at 12:00 AM UTC. To schedule a recurring job on CircleCI, we have to define a workflows section in our configuration, outside of the jobs section. Our example will have the following workflow definition:
workflows:
version: 2
nightly:
jobs:
- build
triggers:
- schedule:
cron: "0 0 * * *"
filters:
branches:
only:
- main
Like the CircleCI configuration itself, workflows also require a version. At this time, CircleCI only has version 2 available for workflows. After that, we define a workflow by giving it a name; in this example, our workflow is called nightly.
Under the workflow name, we define which jobs we want to run in this workflow using the jobs setting. We can list one or more jobs, but we currently only have one job, build, so we’ll include it here.
The other setting, triggers, specifies how we want to execute this workflow. The only option available for the triggers setting is the schedule key, which tells CircleCI we want to run this workflow on a recurring schedule. The schedule itself is set with the required cron key, which uses standard POSIX crontab syntax. The configuration above sets the schedule to midnight UTC every day.
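Other schedules follow the same crontab syntax. A couple of illustrative examples, either of which would replace the cron line above:

cron: "0 2 * * 1-5" # 2:00 AM UTC, Monday through Friday
cron: "30 3 * * 0" # 3:30 AM UTC every Sunday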
The schedule key also requires the filters key, a map that limits the branches the workflow runs on. We’re only interested in running the load test from the primary main branch, so we’ll define it using the branches.only setting under filters. If we don’t specify a filter, CircleCI will run the job on all branches in the repository, so it’s important to control where we want to trigger any jobs.
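Branch filters can also list multiple branches, including regular expressions wrapped in slashes. A hypothetical example that would additionally run the workflow on release branches:

filters:
  branches:
    only:
      - main
      - /release\/.*/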
The finalized CircleCI configuration will now look like this:
version: 2.1
jobs:
build:
docker:
- image: artilleryio/artillery:1.7.2
steps:
- checkout
- run:
name: Make reports directory
command: mkdir reports
- run:
name: Execute load tests
command: /home/node/artillery/bin/artillery run --output reports/report.json tests/performance/search-and-add.yml
- run:
name: Generate HTML report
command: /home/node/artillery/bin/artillery report --output reports/report.html reports/report.json
- store_artifacts:
path: reports/
workflows:
version: 2
nightly:
jobs:
- build
triggers:
- schedule:
cron: "0 0 * * *"
filters:
branches:
only:
- main
Committing this file won’t trigger a build as it did previously. The load test will now execute only at around 12:00 AM UTC, giving you a chance to check your service every day without blocking the rest of your team.
Caveats of load testing in CI
While the tests in this article validate that the service performs as intended under load, this approach comes with a few potential pitfalls you need to be aware of.
Generally speaking, continuous integration environments are not the ideal place for executing load-generating tests. These systems often aren’t under our control, so we won’t know how well the build environment can handle generating load. The environments provided by these services are usually lower-powered virtualized servers, making it challenging to produce realistic load in a performant way. They also limit our ability to scale, preventing us from generating additional load or running tests in parallel. Another issue is that the CI service runs in a single location, which can’t tell you whether your service will perform well across the globe.
Artillery Pro solves these issues. Instead of executing the load tests directly in CI, you can use your continuous integration service to trigger load tests that run in an environment outside of it. This gives you more control over the build environment and the load you want to send to your service. Artillery Pro also resolves scalability issues by allowing you to spin up additional workers distributed across different regions of the world, giving you a better sense of how your services perform, especially in production.
Where to go from here?
In this article, we’ve set up a continuous load test that keeps our HTTP service in check and automatically flags builds where the service doesn’t meet our SLOs. This setup helps maintain reliability and performance at the levels we expect at all times. Still, there’s more we can do with our load testing besides daily testing.
We can use our continuous integration environment to run other tests, like unit and end-to-end tests, alongside our Artillery load test when the primary branch in the repo receives an update. With a robust testing pipeline in place, we can increase our confidence in our application, allowing us to set up continuous delivery and automatically deploy new changes to our customers.
We can also improve our reporting and observability. While Artillery has basic checks for SLOs, publishing load-testing metrics to an external monitoring/observability system like Datadog or Lightstep will help us better define and evaluate our SLOs. The artillery-plugin-publish-metrics plugin integrates easily with our existing test scripts to send Artillery test results to these systems automatically.
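As an illustration, here’s a sketch of what that configuration might look like for Datadog. Treat the option names as assumptions to verify against the plugin’s documentation; the API key is read from an environment variable here rather than hard-coded:

config:
  plugins:
    publish-metrics:
      - type: datadog
        apiKey: '{{ $processEnvironment.DD_API_KEY }}' # assumes DD_API_KEY is set in the environment
        prefix: 'artillery.'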
It’s essential not only to verify that your application’s services work as expected but that they perform as expected regardless of the workload. Using a robust load testing tool like Artillery alongside a continuous integration service such as CircleCI will help you regularly check that your team meets SLOs, maintain fast and reliable production systems, and keep your team and customers happy.