How to get started with Yandex Tank

Yandex.Tank is an extensible load testing utility for UNIX systems. Some of its features:

  • 100 000+ RPS – the load engine is written in pure C++ and can generate a large amount of load from a single machine.
  • Interactive reports and Monitoring plugin – you can see how your system behaves under load while running the test and collect all your system and business metrics by configuring the monitoring plugin.
  • Integration – Use Yandex.Tank with Jenkins or other CI software to automate your load tests. Store reports online. Stop your tests automatically using customizable criteria.
  • Multiple load engines – you can use JMeter to test complex scenarios or BFG (experimental) for exotic protocols. Implement your own module for your favorite tool and use Tank’s features like OnlineReport with it.

There is wonderful documentation to get started with, but I’ve decided to write this article just to sum up all the first steps and show how it works.

Installation

As shown in the documentation, there are multiple ways to install Yandex.Tank:

  • Installation from .deb packages
  • Installation from PyPi
  • Docker container

I recommend installing it as a Docker container. If you are familiar with Docker, the installation will be simple and fast, and if you aren't, this is a great opportunity to learn more about it. Please check out the Docker documentation for details.

Small hint: you can create an alias for the docker command. Just run:

echo "alias yandex-tank-docker='docker run -v \$(pwd):/var/loadtest -v \$SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent --net host -it direvius/yandex-tank'" >> ~/.bashrc
exec bash

So next time you need to run a Yandex.Tank test, just use yandex-tank-docker -c load.yaml (don't forget to run it from the directory that contains load.yaml, since the alias mounts the current directory into the container).

Getting started

To run a test, you need to create a load.yaml file on the server with Yandex.Tank. Example:

phantom:
  address: 203.0.113.1:80 # [Target's address]:[target's port]
  uris:
    - /
  load_profile:
    load_type: rps # schedule load by defining requests per second
    schedule: line(1, 10, 10m) # starting from 1rps growing linearly to 10rps during 10 minutes
console:
  enabled: true # enable console output
telegraf:
  enabled: false # let's disable telegraf monitoring for the first time

Phantom is a load generator module. This file tells it what should be loaded and how. There are other load generators you can use: JMeter, BFG, Pandora. But let's focus on Phantom.

Phantom has 3 primitives for describing the load schedule:

  1. step (a,b,step,dur) – makes a stepped load, where a,b are start/end load values, step – increment value, dur – step duration.
    • Example: step(25, 5, 5, 60) – stepped load from 25 to 5 rps, with 5 rps steps, step duration 60s.
      Where rps is requests per second (RPS, also known as queries per second or QPS)
  2. line (a,b,dur) – makes linear load, where a,b are start/end load, dur – the time for linear load increase from a to b.
    • Example: line(10, 1, 10m) – linear load from 10 to 1 rps, duration – 10 minutes.
  3. const (load,dur) makes a constant load. load – rps amount, dur – load duration.
    • Examples:
      const(10,10m) – constant load for 10 rps for 10 minutes.
      const(0, 10) – 0 rps for 10 seconds, in fact, 10s pause in a test.

Using these primitives you can create almost any load profile, and that's one of Yandex.Tank's greatest features, because real load rarely jumps from zero to thousands of RPS instantaneously. In practice, it grows from some “normal” value to a peak and then decreases or stays at the same level. You can also chain several primitives in one schedule.
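
For example, here is a sketch of a combined profile (the target address is a placeholder): ramp up linearly, hold a plateau, then ramp down:

phantom:
  address: 203.0.113.1:80 # placeholder target
  load_profile:
    load_type: rps
    schedule: line(1, 100, 5m) const(100, 10m) line(100, 10, 5m) # ramp up, plateau, ramp down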

Before you start loading your system, you should learn more about the load profile you expect. If the application is already in production, take a look at analytics and get information about its normal and peak load. If you don't have production traffic or analytics data yet, ask the stakeholder/product manager about the requirements. That way you will have a test plan and goals you want to achieve before the test, so you can analyze the load testing results against the requirements.

My load.yaml example:

overload:
  enabled: true 
  token_file: "token.txt"
  job_name: test

phantom:
  address: google.com
  load_profile:
    load_type: rps
    schedule: step(50, 100, 10, 5) 
  uris:
    - "/"      
console:
  enabled: true
telegraf:
  enabled: false # let's disable telegraf monitoring for the first time

Overload is a plugin for reporting results to the Yandex.Overload service. More information about it is in “Results, graphs and statistics” below.

Preparing requests

Once you have the test plan, it's time to prepare requests. Each test scenario should be broken down into HTTP requests. Extract the necessary requests with traffic-recording tools; the Chrome DevTools Network tab or a proxy server will do. You can also debug each request with API testing software such as Postman. After you have prepared a set of requests, you can choose your loading mode type.
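
For instance, a quick way to sanity-check a single request before putting it into an ammo file (the host and body here are just placeholders):

curl -i -X POST "http://www.target.example/api/search" \
     -H "Content-Type: application/json" \
     -d '{"query": "hello"}'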

There are several ways to set up requests:

  • Access mode – simple load with GET/HEAD requests. You can specify headers and URIs; these headers will be applied to all URIs.
  • URI-style – if you need specific headers or a specific list of URIs for your GET/HEAD queries, use this method. Create a file (ammo.txt) that declares the request headers and URIs (see the URI-style sample below).
  • URI+POST – load with POST requests. Create a file (ammo.txt) that declares the request headers, URIs, and bodies (see the URI+POST sample below).
  • request-style – for more complex requests, like PUT/PATCH/DELETE etc., you'll have to create a special file in a specific format where each query is written out in full.
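
Here is a small sketch of what such ammo files can look like; the host, URIs, and tags are placeholders, so check the documentation for the exact format details. A URI-style ammo.txt lists headers in square brackets (applied to all following URIs) and then one URI per line with an optional tag:

[Connection: close]
[Host: www.target.example]
[User-Agent: Tank]
/ index_page
/buy?id=1 buy_page

A URI+POST ammo.txt additionally prefixes each URI with the size of the request body in bytes and puts the body on the next line:

[Host: www.target.example]
[Connection: close]
5 /api/search search_tag
hello
7 /api/search search_tag
goodbye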

Run tests

  1. Requests specified in load.yaml – run as yandex-tank -c load.yaml
  2. Requests specified in ammo.txt – run as yandex-tank -c load.yaml ammo.txt

yandex-tank here is the name of the Yandex.Tank executable. It would be yandex-tank-docker if you created the Docker alias above.
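
For example, with the Docker alias from above (assuming your test files live in ~/loadtest, a placeholder path):

cd ~/loadtest                            # directory containing load.yaml and ammo.txt
yandex-tank-docker -c load.yaml ammo.txt # the alias mounts the current directory into the container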

The best way to ensure your test results are as accurate and realistic as possible is to run the test against your live production site. This lets you catch real errors and bottlenecks. Of course, you have to choose wisely when to start such a test, so you don't affect real users; take a look at your analytics to find low-traffic periods. If you can't run it in production, or you don't have a production environment yet, try running tests on your staging environment first, and then create a production replica that is as similar as possible, for example on a cloud server, so you don't have to buy actual hardware.

Results, graphs and statistics

During test execution you'll see HTTP and network errors, the response time distribution, a progress bar, and other interesting data. At the same time the phout.txt file is being written, which can be analyzed later.

test run example

If you need a more human-readable report, you can try the Yandex.Overload service. It's a great tool: you'll see all the results online and can even stop the test from there. Check out the documentation on how to set it up in your load.yaml. First, log in to Yandex.Overload (using GitHub or Yandex OAuth2). Then click on your profile image and choose “My api token”. Save it to a file, for example token.txt, and specify this file in load.yaml. Settings example:

overload:
  token_file: token.txt
  job_name: test # (Optional) Name of the job to be displayed in Yandex.Overload
  job_dsc: test description # (Optional) Description of the job to be displayed in Yandex.Overload
Here is my test result from the example above.

If you need to upload results to external storage, such as Graphite or InfluxDB, you can use one of the existing artifact uploading modules. Then, using the data from external storage, you can build graphs in a data visualization tool like Grafana (there is a tutorial on adding InfluxDB as a Grafana data source). To build a graph you need to know the phout.txt structure. It has the following fields: time, tag, interval_real, connect_time, send_time, latency, receive_time, interval_event, size_out, size_in, net_code, proto_code.

phout.txt example:

1326453006.582          1510    934     52      384     140     1249    37      478     0       404
1326453006.582  others  1301    674     58      499     70      1116    37      478     0       404
1326453006.587  heavy   377     76      33      178     90      180     37      478     0       404
1326453006.587          294     47      27      146     74      147     37      478     0       404
1326453006.588          345     75      29      166     75      169     37      478     0       404
1326453006.590          276     72      28      119     57      121     53      476     0       404
1326453006.593          255     62      27      131     35      134     37      478     0       404
1326453006.594          304     50      30      147     77      149     37      478     0       404
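
As a quick sketch, you can pull rough numbers straight out of phout.txt with standard UNIX tools; this assumes the default tab-separated layout and that times are reported in microseconds:

# average overall request time (interval_real, 3rd column), converted to milliseconds
awk -F'\t' '{ sum += $3; n++ } END { if (n) printf "%.2f ms\n", sum / n / 1000 }' phout.txt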

Conclusion

Fast setup using Docker and quick results visualization in Yandex.Overload make Yandex.Tank a really great tool to use.

Once you've completed your test plan and collected all the results and statistics, you'll face the most challenging part of the performance testing process – analyzing the results and identifying bottlenecks. And well, that's a whole other post.
