CEPH-SYN(8)                          Ceph                          CEPH-SYN(8)
NAME
ceph-syn - ceph synthetic workload generator
SYNOPSIS
ceph-syn [ -m monaddr:port ] --syn command ...
DESCRIPTION
ceph-syn is a simple synthetic workload generator for the Ceph distributed file system. It uses the userspace client library to generate simple workloads against a currently running file system. The file system need not be mounted via ceph-fuse(8) or the kernel client.
One or more --syn command arguments specify the particular workload, as documented below.
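For example, the following invocation (the monitor address is a placeholder) creates a three-level directory tree in which each directory gets two subdirectories and ten files, then recursively walks it:

    ceph-syn -m 192.168.0.1:6789 --syn makedirs 2 10 3 --syn walk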
OPTIONS
- -d
- Detach from console and daemonize after startup.
- -c ceph.conf, --conf=ceph.conf
- Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup.
- -m monaddress[:port]
- Connect to specified monitor (instead of looking through ceph.conf).
- --num_client num
- Run num different clients, each in a separate thread.
- --syn workloadspec
- Run the given workload. May be specified as many times as needed. Workloads will normally run sequentially (see the example following this list).
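As a sketch of how these options combine (the configuration path and sizes are illustrative), the following runs four client threads, each writing and then reading back a 64 MB file with a block size of 16384:

    ceph-syn -c /etc/ceph/test.conf --num_client 4 --syn rw 64 16384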
WORKLOADS
Each workload should be preceded by --syn on the command line. This is not a complete list; a combined example follows the list.
- mksnap path snapname
- Create a snapshot called snapname on path.
- rmsnap path snapname
- Delete snapshot called snapname on path.
- rmfile path
- Delete/unlink path.
- writefile sizeinmb blocksize
- Create a file, named after our client id, that is sizeinmb MB, written in blocksize chunks.
- readfile sizeinmb blocksize
- Read a file, named after our client id, that is sizeinmb MB, in blocksize chunks.
- rw sizeinmb blocksize
- Write a file and then read it back, as above.
- makedirs numsubdirs numfiles depth
- Create a hierarchy of directories that is depth levels deep. Give each directory numsubdirs subdirectories and numfiles files.
- walk
- Recursively walk the file system (like find).
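As a combined, illustrative example (the path /mydir and snapshot name s1 are placeholders), the following writes a 10 MB file using blocksize 4096, reads it back, then creates and removes a snapshot; per --syn above, the workloads run sequentially:

    ceph-syn --syn writefile 10 4096 --syn readfile 10 4096 \
             --syn mksnap /mydir s1 --syn rmsnap /mydir s1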
AVAILABILITY
ceph-syn is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.
SEE ALSO
ceph(8), ceph-fuse(8)
COPYRIGHT
2010-2023, Inktank Storage, Inc. and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
February 16, 2023 (Ceph dev)