Getting Started With Artillery

Install Node.js

Artillery is written in Node.js (but you do not need to know Node.js or JavaScript to use Artillery). First, grab the appropriate package from nodejs.org or install Node.js with your favorite package manager. We recommend Node 10 for running Artillery, but any version newer than 6 should work.
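For example, if you manage Node.js versions with nvm (just one option and an assumption here, not a requirement), installing and verifying a suitable version might look like this:

# Install and activate Node.js 10 using nvm (assumes nvm is already installed)
nvm install 10
nvm use 10

# Confirm that node and npm are available
node --version
npm --version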

Install Artillery

Once Node.js is installed, install Artillery with:

npm install -g artillery

To check that the installation succeeded, run:

artillery -V

If the installation was successful, Artillery will print its version number.

You are ready to run your first load test now!

(If you like dinosaurs, you can try artillery dino too.)

Run a quick test

Artillery has a quick command which lets you run ad-hoc tests from the command line (in a manner similar to ab). Run:

artillery quick --count 10 -n 20 https://artillery.io/

This command will create 10 "virtual users", each of which will send 20 HTTP GET requests to https://artillery.io/ (200 requests in total).

Run a test script

While the quick command can be useful for very simple tests, the full power of Artillery lies in being able to simulate realistic user behavior with scenarios. Let’s see how we’d run one of those.

Copy the following code into a hello.yml file:

config:
  target: 'https://artillery.io'
  phases:
    - duration: 60
      arrivalRate: 20
  defaults:
    headers:
      x-my-service-auth: '987401838271002188298567'
scenarios:
  - flow:
    - get:
        url: "/docs"

What our test does

In this script, we specify that we are testing a service running on https://artillery.io, which we will be talking to over HTTP. We define one load phase, which will last 60 seconds with 20 new virtual users arriving every second (on average).

Then we define one possible scenario for every new virtual user to pick from, which consists of one GET request.

We also set an x-my-service-auth header to be sent with every request.
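The same structure scales up to more realistic tests. As a sketch (not part of the example above), the script below adds a second, ramping load phase and a two-step flow that follows the GET with a POST carrying a JSON body; the /login URL and the credentials are made-up placeholders:

config:
  target: 'https://artillery.io'
  phases:
    # Warm up: 5 new virtual users per second for 30 seconds
    - duration: 30
      arrivalRate: 5
    # Then ramp the arrival rate from 5 up to 20 over 60 seconds
    - duration: 60
      arrivalRate: 5
      rampTo: 20
scenarios:
  - flow:
      - get:
          url: "/docs"
      - post:
          url: "/login"     # hypothetical endpoint, for illustration only
          json:
            username: "demo"
            password: "secret"

Each virtual user runs its flow from top to bottom, so the POST is only sent once the preceding GET has completed.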

Running the test

Run the test with:

artillery run hello.yml

As Artillery runs the test, an intermediate report will be printed to the terminal every 10 seconds, followed by an aggregate report at the end of the test.

An aggregate report will look similar to this:

Complete report @ 2019-01-02T17:32:36.653Z
  Scenarios launched:  300
  Scenarios completed: 300
  Requests completed:  600
  RPS sent: 18.86
  Request latency:
    min: 52.1
    max: 11005.7
    median: 408.2
    p95: 1727.4
    p99: 3144
  Scenario counts:
    0: 300 (100%)
  Codes:
    200: 300
    302: 300

If you see NaN ("not a number") reported as a value, that means not enough responses have been received to calculate the percentile.

If there are any errors (such as socket timeouts), those will be printed under Errors in the report as well.
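If you want to keep the results for later analysis, Artillery can also write the raw statistics to a JSON file via the --output flag; in Artillery 1.x that file can then be turned into an HTML report with the report command (a sketch, assuming a 1.x install):

artillery run --output report.json hello.yml
artillery report report.json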