In this directory you can find scripts to test
s3gw in several different
ways. Below is a description of each type of test.
However, note that each test requires an existing gateway running somewhere
accessible by the tests. This may be an s3gw container, or a radosgw built
from a source repository. It doesn't matter whether these are running on the
local machine or on a remote host, as long as they are accessible to the tests.
Basic test battery to smoke out errors and potential regressions.
This script takes a mandatory argument in the form
ADDRESS[:PORT[/LOCATION]]. For example,
127.0.0.1:7480/s3gw, where we know we will be able to find the gateway.
At the moment, these tests mainly rely on s3cmd, which must be installed and
available in the PATH.
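As a purely illustrative sketch (not the script's actual implementation), the
ADDRESS[:PORT[/LOCATION]] form can be split with plain POSIX parameter
expansion; the 7480 fallback is an assumption, mirroring the example above:

```shell
#!/bin/sh
# Illustrative sketch only: splitting an ADDRESS[:PORT[/LOCATION]]
# argument such as "127.0.0.1:7480/s3gw" into its components.
arg="127.0.0.1:7480/s3gw"

hostport="${arg%%/*}"              # everything before the first "/"
case "$arg" in
  */*) location="${arg#*/}" ;;     # "s3gw"
  *)   location="" ;;              # no LOCATION given
esac
address="${hostport%%:*}"          # "127.0.0.1"
case "$hostport" in
  *:*) port="${hostport##*:}" ;;   # "7480"
  *)   port="7480" ;;              # assumed default when :PORT is omitted
esac

echo "$address $port $location"
```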
Runs a comprehensive test battery against a running
radosgw. It relies on
ceph/s3-tests, and will clone this
repository for each test run.
This script also takes a mandatory argument, in the form ADDRESS[:PORT];
ADDRESS must be the address where the radosgw can be found. PORT defaults
to 7480.
Each run will be kept in its own directory; the cloned
ceph/s3-tests repository, as well as logs, will be kept within that directory.
Test reports may be generated using the create-s3tests-report.sh script, which
requires the resulting log file from a test run. See
create-s3tests-report.sh --help for more information.
With tracking our improvement over time in mind, we benchmark the
radosgw with the file-based backend.
This allows us to identify potential performance regressions, as
well as to understand whether the changes we're making are actually having an
impact, and at the desired scale.
This script relies on MinIO's warp tool. To run, it needs this tool to be
available on the user's PATH. That means having it installed with
go install github.com/minio/warp@latest, and having GOPATH's bin directory
(by default ~/go/bin) in the user's PATH.
This script also requires a HOST[:PORT] parameter, similarly to the other
tests, so that warp knows where to find the radosgw being benchmarked.
Additionally, this script takes one of three options:
--large, writing 6000 objects for 10 minutes;
--medium, writing 1000 objects for 5 minutes; and
--small, writing 50 objects for 1 minute.
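The mapping between these options and their parameters can be sketched as a
small shell case statement (again, illustrative only, not the script's actual
code):

```shell
#!/bin/sh
# Illustrative sketch: map the benchmark size options described above
# to their object count and duration.
profile() {
  case "$1" in
    --large)  echo "6000 objects, 10 minutes" ;;
    --medium) echo "1000 objects, 5 minutes" ;;
    --small)  echo "50 objects, 1 minute" ;;
    *)        echo "unknown option" >&2; return 1 ;;
  esac
}

profile --medium
```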
The test will run warp 3 times, for object sizes of 1MiB, 10MiB, and 100MiB.
Each time warp is run, a file will be created containing the results of
the benchmark. This file can later on be used to compare results between runs.
For more information, check warp's documentation.
For the purpose of stress testing s3gw, you can rely on fio.
The tool is equipped with an HTTP client that can also act as an S3 client.
You can therefore use the tool to issue concurrent and serial operations
against the gateway.
For a basic stress testing activity, you will normally want to issue a series
of write, read, and delete operations.
Such a workload can be modeled with a fio jobfile.
For example, you can customize the following jobfile, tuning it to realize
the test you wish to perform.
```
[global]
ioengine=http
filename=/foo/obj
http_verbose=0
https=off
http_mode=s3
http_s3_key=test
http_s3_keyid=test
http_host=localhost:7480

[s3-write]
numjobs=4
rw=write
size=16m
bs=16m

[s3-read]
numjobs=4
rw=read
size=16m
bs=16m

[s3-trim]
stonewall
numjobs=1
rw=trim
size=16m
bs=16m
```
Once you have created your jobfile, e.g. s3gw.fio, you can launch
the workload with:
```
$ fio s3gw.fio
Starting 9 processes
...
```
This jobfile connects to an S3 gateway listening on localhost:7480,
and operates on an object obj which resides inside an existing bucket, foo.
This example launches 3 types of jobs: s3-write (PUT), s3-read (GET), and
s3-trim (DELETE); the actual operation verb is defined by the rw
property. The actual number of processes performing the same operation is
defined by the numjobs property; the global section is inherited by all
defined jobs.
For this specific example, the I/O activity is defined by size=16m and
bs=16m, meaning that a 16MiB object will be written, read, and trimmed
with a single 16MiB I/O operation. As a result of this, supposing that no
s3-trim job had been defined, you would find a 16MiB object in the bucket:
```
$ s3cmd ls s3://foo
2022-07-19 13:14  16777216  s3://foo/obj_0_16777216
```
By modifying the bs property to the value of 4m, you are diminishing
the weight of a single I/O operation over the overall size.
As a result of this, you would find 4 (16MiB / 4MiB) objects in the bucket:
```
$ s3cmd ls s3://foo
2022-07-19 13:21   4194304  s3://foo/obj_0_4194304
2022-07-19 13:21   4194304  s3://foo/obj_12582912_4194304
2022-07-19 13:21   4194304  s3://foo/obj_4194304_4194304
2022-07-19 13:21   4194304  s3://foo/obj_8388608_4194304
```
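The object names in the listing above appear to encode the byte offset and
block size of each write, i.e. obj_&lt;offset&gt;_&lt;blocksize&gt;. A small sketch
(this naming pattern is an observation from the listing, not documented fio
behaviour) reproduces them:

```shell
#!/bin/sh
# Sketch: reproduce the object names above, assuming each uploaded
# block is named <filename>_<offset>_<blocksize>.
list_objects() {
  size=$((16 * 1024 * 1024))  # size=16m
  bs=$((4 * 1024 * 1024))     # bs=4m
  offset=0
  while [ "$offset" -lt "$size" ]; do
    echo "s3://foo/obj_${offset}_${bs}"
    offset=$((offset + bs))
  done
}

list_objects
```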
To build more complex fio workloads, refer to the fio documentation.