sbatch(1) | SLURM Commands | sbatch(1)
NAME¶
sbatch - Submit a batch script to SLURM.

SYNOPSIS¶
sbatch [options] script [args...]

DESCRIPTION¶
sbatch submits a batch script to SLURM. The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.
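For instance, a minimal batch script (the resource values here are illustrative only) might look like the following; it would be submitted with "sbatch myscript.sh":

    #!/bin/bash
    #SBATCH --job-name=test
    #SBATCH --ntasks=4
    #SBATCH --time=10
    # Executable commands follow the #SBATCH directives.
    srun hostname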
OPTIONS¶
- -A, --account=<account>
- Charge resources used by this job to specified account. The
account is an arbitrary string. The account name may be changed
after job submission using the scontrol command.
- --acctg-freq=<seconds>
- Define the job accounting sampling interval. This can be
used to override the JobAcctGatherFrequency parameter in SLURM's
configuration file, slurm.conf. A value of zero disables the
periodic job sampling and provides accounting information only on job
termination (reducing SLURM interference with the job).
- -B, --extra-node-info=<sockets[:cores[:threads]]>
- Request a specific allocation of resources with details as
to the number and type of computational resources within a cluster: number
of sockets (or physical processors) per node, cores per socket, and
threads per core. The total amount of resources being requested is the
product of all of the terms. Each value specified is considered a minimum.
An asterisk (*) can be used as a placeholder indicating that all available
resources of that type are to be utilized. As with nodes, the individual
levels can also be specified in separate options if desired:
--sockets-per-node=<sockets> --cores-per-socket=<cores> --threads-per-core=<threads>
If task/affinity plugin is enabled, then specifying an allocation in this manner also sets a default --cpu_bind option of threads if the -B option specifies a thread count, otherwise an option of cores if a core count is specified, otherwise an option of sockets. If SelectType is configured to select/cons_res, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory for this option to be honored. This option is not supported on BlueGene systems (select/bluegene plugin is configured). If not specified, the scontrol show job will display 'ReqS:C:T=*:*:*'.
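As an illustrative sketch (the counts are hypothetical), the following requests nodes with at least two sockets, four cores per socket, and one thread per core, first in the compact form and then with the equivalent individual options:

    $ sbatch -B 2:4:1 myscript.sh
    $ sbatch --sockets-per-node=2 --cores-per-socket=4 --threads-per-core=1 myscript.sh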
- --begin=<time>
- Submit the batch script to the SLURM controller
immediately, like normal, but tell the controller to defer the allocation
of the job until the specified time.
--begin=16:00
--begin=now+1hour
--begin=now+60 (seconds by default)
--begin=2010-01-20T12:34:00
Notes on date/time specifications:
- Although the 'seconds' field of the HH:MM:SS time specification is allowed by the code, note that the poll time of the SLURM scheduler is not precise enough to guarantee dispatch of the job on the exact second. The job will be eligible to start on the next poll following the specified time. The exact poll interval depends on the SLURM scheduler (e.g., 60 seconds with the default sched/builtin).
- If no time (HH:MM:SS) is specified, the default is (00:00:00).
- If a date is specified without a year (e.g., MM/DD) then the current year is assumed, unless the combination of MM/DD and HH:MM:SS has already passed for that year, in which case the next year is used.
- --checkpoint=<time>
- Specifies the interval between creating checkpoints of the
job step. By default, the job step will have no checkpoints created.
Acceptable time formats include "minutes",
"minutes:seconds", "hours:minutes:seconds",
"days-hours", "days-hours:minutes" and
"days-hours:minutes:seconds".
- --checkpoint-dir=<directory>
- Specifies the directory into which the job or job step's
checkpoint should be written (used by the checkpoint/blcr and
checkpoint/xlch plugins only). The default value is the current working
directory. Checkpoint files will be of the form
"<job_id>.ckpt" for jobs and
"<job_id>.<step_id>.ckpt" for job steps.
- --comment=<string>
- An arbitrary comment. Enclose the string in double quotes if it
contains spaces or special characters.
- -C, --constraint=<list>
- Specify a list of constraints. The constraints are features
that have been assigned to the nodes by the slurm administrator. The
list of constraints may include multiple features separated by
ampersand (AND) and/or vertical bar (OR) operators. For example:
--constraint="opteron&video" or
--constraint="fast|faster". In the first example, only
nodes having both the feature "opteron" AND the feature
"video" will be used. There is no mechanism to specify that you
want one node with feature "opteron" and another node with
feature "video" in case no node has both features. If only one
of a set of possible options should be used for all allocated nodes, then
use the OR operator and enclose the options within square brackets. For
example: " --constraint=[rack1|rack2|rack3|rack4]" might
be used to specify that all nodes must be allocated on a single rack of
the cluster, but any of those four racks can be used. A request can also
specify the number of nodes needed with some feature by appending an
asterisk and count after the feature name. For example " sbatch
--nodes=16 --constraint=graphics*4 ..." indicates that the job
requires 16 nodes and that at least four of those nodes must have the
feature "graphics." Constraints with node counts may only be
combined with AND operators. If no nodes have the requested features, then
the job will be rejected by the slurm job manager.
- --contiguous
- If set, then the allocated nodes must form a contiguous
set. Not honored with the topology/tree or topology/3d_torus
plugins, both of which can modify the node ordering.
- --cores-per-socket=<cores>
- Restrict node selection to nodes with at least the
specified number of cores per socket. See additional information under
-B option above when task/affinity plugin is enabled.
- --cpu_bind=[{quiet,verbose},]type
- Bind tasks to CPUs. Used only when the task/affinity plugin
is enabled. The configuration parameter TaskPluginParam may
override these options. For example, if TaskPluginParam is
configured to bind to cores, your job will not be able to bind tasks to
sockets. NOTE: To have SLURM always report on the selected CPU binding for
all commands executed in a shell, you can enable verbose mode by setting
the SLURM_CPU_BIND environment variable value to "verbose".
When --cpu_bind is in use, the following informational environment variables are set: SLURM_CPU_BIND_VERBOSE, SLURM_CPU_BIND_TYPE, and SLURM_CPU_BIND_LIST. Supported options include the following; an example follows the list:
- q[uiet]
- Quietly bind before task runs (default)
- v[erbose]
- Verbosely report binding before task runs
- no[ne]
- Do not bind tasks to CPUs (default)
- rank
- Automatically bind by task rank. Task zero is bound to socket (or core or thread) zero, etc. Not supported unless the entire node is allocated to the job.
- map_cpu:<list>
- Bind by mapping CPU IDs to tasks as specified where <list> is <cpuid1>,<cpuid2>,...<cpuidN>. CPU IDs are interpreted as decimal values unless they are preceded with '0x' in which case they are interpreted as hexadecimal values. Not supported unless the entire node is allocated to the job.
- mask_cpu:<list>
- Bind by setting CPU masks on tasks as specified where <list> is <mask1>,<mask2>,...<maskN>. CPU masks are always interpreted as hexadecimal values but can be preceded with an optional '0x'.
- sockets
- Automatically generate masks binding tasks to sockets. Only the CPUs on the socket which have been allocated to the job will be used. If the number of tasks differs from the number of allocated sockets this can result in sub-optimal binding.
- cores
- Automatically generate masks binding tasks to cores. If the number of tasks differs from the number of allocated cores this can result in sub-optimal binding.
- threads
- Automatically generate masks binding tasks to threads. If the number of tasks differs from the number of allocated threads this can result in sub-optimal binding.
- ldoms
- Automatically generate masks binding tasks to NUMA locality domains. If the number of tasks differs from the number of allocated locality domains this can result in sub-optimal binding.
- help
- Show this help message
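For example, assuming the task/affinity plugin is enabled on the cluster, the following binds tasks to cores and reports the chosen binding verbosely (a sketch, not a recommendation for any particular workload):

    #SBATCH --cpu_bind=verbose,cores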
- -c, --cpus-per-task=<ncpus>
- Advise the SLURM controller that ensuing job steps will
require ncpus number of processors per task. Without this option,
the controller will just try to allocate one processor per task.
- -d, --dependency=<dependency_list>
- Defer the start of this job until the specified dependencies have been satisfied. <dependency_list> is of the form <type:job_id[:job_id][,type:job_id[:job_id]]>. Many jobs can share the same dependency and these jobs may even belong to different users. The value may be changed after job submission using the scontrol command. An example follows this list.
- after:job_id[:jobid...]
- This job can begin execution after the specified jobs have begun execution.
- afterany:job_id[:jobid...]
- This job can begin execution after the specified jobs have terminated.
- afternotok:job_id[:jobid...]
- This job can begin execution after the specified jobs have terminated in some failed state (non-zero exit code, node failure, timed out, etc).
- afterok:job_id[:jobid...]
- This job can begin execution after the specified jobs have successfully executed (ran to completion with an exit code of zero).
- expand:job_id
- Resources allocated to this job should be used to expand the specified job. The job to expand must share the same QOS (Quality of Service) and partition. Gang scheduling of resources in the partition is also not supported.
- singleton
- This job can begin execution after any previously launched jobs sharing the same job name and user have terminated.
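As a sketch (the script names are hypothetical), the following submits a second job that starts only if the first completes successfully. sbatch reports "Submitted batch job <id>" on success, so the job ID is the fourth field of its output:

    jid=$(sbatch first.sh | awk '{print $4}')
    sbatch --dependency=afterok:$jid second.sh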
- -D, --workdir=<directory>
- Set the working directory of the batch script to
directory before it is executed.
- -e, --error=<filename pattern>
- Instruct SLURM to connect the batch script's standard error
directly to the file name specified in the "filename pattern".
By default both standard output and standard error are
directed to a file of the name "slurm-%j.out", where the
"%j" is replaced with the job allocation number. See the
--input option for filename specification options.
- --exclusive
- The job allocation can not share nodes with other running
jobs. This is the opposite of --share; whichever option is seen last on
the command line will be used. The default shared/exclusive behavior
depends on system configuration and the partition's Shared option
takes precedence over the job's option.
- --export=<environment variables | ALL | NONE>
- Identify which environment variables are propagated to the
batch job. Multiple environment variable names should be comma separated.
Environment variable names may be specified to propagate the current value
of those variables (e.g. "--export=EDITOR") or specific values
for the variables may be exported (e.g.
"--export=EDITOR=/bin/vi"). This option is particularly important
for jobs that are submitted on one cluster and execute on a different
cluster (e.g. with different paths). By default all environment variables
are propagated. If the argument is NONE or specific environment
variable names, then the --get-user-env option will implicitly be
set to load other environment variables based upon the user's
configuration on the cluster which executes the job.
- --export-file=<filename | fd>
- If a number between 3 and OPEN_MAX is specified as the
argument to this option, a readable file descriptor will be assumed (STDIN
and STDOUT are not supported as valid arguments). Otherwise a filename is
assumed. Export environment variables defined in <filename>
or read from <fd> to the job's execution environment. The
content is one or more environment variable definitions of the form
NAME=value, each separated by a null character. This allows the use of
special characters in environment definitions.
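Because the definitions are null-separated rather than newline-separated, the file is most easily generated with printf. A minimal sketch (the variable names are illustrative):

    $ printf 'MYVAR=hello world\0OTHER=1\0' > env.list
    $ sbatch --export-file=env.list myscript.sh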
- -F, --nodefile=<node file>
- Much like --nodelist, but the list is contained in a file
of name node file. The node names of the list may also span
multiple lines in the file. Duplicate node names in the file will be
ignored. The order of the node names in the list is not important; the
node names will be sorted by SLURM.
- --get-user-env[=timeout][mode]
- This option will tell sbatch to retrieve the login
environment variables for the user specified in the --uid option.
The environment variables are retrieved by running something of this sort
"su - <username> -c /usr/bin/env" and parsing the output.
Be aware that any environment variables already set in sbatch's
environment will take precedence over any environment variables in the
user's login environment. Clear any environment variables before calling
sbatch that you do not want propagated to the spawned program. The
optional timeout value is in seconds. Default value is 8 seconds.
The optional mode value controls the "su" options. With a
mode value of "S", "su" is executed without the
"-" option. With a mode value of "L",
"su" is executed with the "-" option, replicating the
login environment. If mode is not specified, the mode established at
SLURM build time is used. Examples of use include
"--get-user-env", "--get-user-env=10",
"--get-user-env=10L", and "--get-user-env=S". This
option was originally created for use by Moab.
- --gid=<group>
- If sbatch is run as root, and the --gid
option is used, submit the job with group's group access
permissions. group may be the group name or the numerical group ID.
- --gres=<list>
- Specifies a comma delimited list of generic consumable
resources. The format of each entry on the list is
"name[:count[*cpu]]". The name is that of the consumable
resource. The count is the number of those resources with a default value
of 1. The specified resources will be allocated to the job on each node
allocated unless "*cpu" is appended, in which case the resources
will be allocated on a per cpu basis. The available generic consumable
resources are configurable by the system administrator. A list of available
generic consumable resources will be printed and the command will exit if
the option argument is "help". Examples of use include
"--gres=gpus:2*cpu,disk=40G" and "--gres=help".
- -H, --hold
- Specify the job is to be submitted in a held state
(priority of zero). A held job can now be released using scontrol to reset
its priority (e.g. " scontrol release <job_id>").
- -h, --help
- Display help information and exit.
- --hint=<type>
- Bind tasks according to application hints
- compute_bound
- Select settings for compute bound applications: use all cores in each socket, one thread per core
- memory_bound
- Select settings for memory bound applications: use only one core in each socket, one thread per core
- [no]multithread
- [don't] use extra threads with in-core multi-threading which can benefit communication intensive applications
- help
- show this help message
- -I, --immediate
- The batch script will only be submitted to the controller
if the resources necessary to grant its job allocation are immediately
available. If the job allocation will have to wait in a queue of pending
jobs, the batch script will not be submitted.
- -i, --input=<filename pattern>
- Instruct SLURM to connect the batch script's standard input
directly to the file name specified in the "filename pattern".
The pattern may contain the following replacement symbols (an example
follows the list):
- %j
- Job allocation number.
- %N
- Node name. Only one file is created, so %N will be replaced by the name of the first node in the job, which is the one that runs the script.
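For example, to name the output and error files after the job allocation number (a sketch; the base names are arbitrary):

    #SBATCH --output=job-%j.out
    #SBATCH --error=job-%j.err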
- -J, --job-name=<jobname>
- Specify a name for the job allocation. The specified name
will appear along with the job id number when querying running jobs on the
system. The default is the name of the batch script, or just
"sbatch" if the script is read on sbatch's standard input.
- --jobid=<jobid>
- Allocate resources as the specified job id. NOTE: Only
valid for user root.
- -k, --no-kill
- Do not automatically terminate a job if one of the nodes it
has been allocated fails. The user will assume the responsibilities for
fault-tolerance should a node fail. When there is a node failure, any
active job steps (usually MPI jobs) on that node will almost certainly
suffer a fatal error, but with --no-kill, the job allocation will not be
revoked so the user may launch new job steps on the remaining nodes in
their allocation.
- -L, --licenses=<license>
- Specification of licenses (or other resources available on
all nodes of the cluster) which must be allocated to this job. License
names can be followed by an asterisk and count (the default count is one).
Multiple license names should be comma separated (e.g.
"--licenses=foo*4,bar").
- -M, --clusters=<string>
- Clusters to issue commands to. Multiple cluster names may
be comma separated. The job will be submitted to the one cluster providing
the earliest expected job initiation time. The default value is the
current cluster. A value of 'all' will query to run on all
clusters. Note the --export option to control environment variables
exported between clusters.
- -m, --distribution=<block|cyclic|arbitrary|plane=<options>[:block|cyclic]>
- Specify alternate distribution methods for remote processes. The
first method controls the distribution of tasks across nodes; the
optional second method (after the colon) controls the distribution of
tasks across sockets within a node. An example follows this list.
- block
- The block distribution method will distribute tasks to a node such that consecutive tasks share a node. For example, consider an allocation of three nodes each with two cpus. A four-task block distribution request will distribute those tasks to the nodes with tasks one and two on the first node, task three on the second node, and task four on the third node. Block distribution is the default behavior if the number of tasks exceeds the number of allocated nodes.
- cyclic
- The cyclic distribution method will distribute tasks to a node such that consecutive tasks are distributed over consecutive nodes (in a round-robin fashion). For example, consider an allocation of three nodes each with two cpus. A four-task cyclic distribution request will distribute those tasks to the nodes with tasks one and four on the first node, task two on the second node, and task three on the third node. Note that when SelectType is select/cons_res, the same number of CPUs may not be allocated on each node. Task distribution will be round-robin among all the nodes with CPUs yet to be assigned to tasks. Cyclic distribution is the default behavior if the number of tasks is no larger than the number of allocated nodes.
- plane
- The tasks are distributed in blocks of a specified size.
The options include a number representing the size of the task block. This
is followed by an optional specification of the task distribution scheme
within a block of tasks and between the blocks of tasks. For more details
(including examples and diagrams), please see the SLURM multi-core
support (mc_support) documentation.
- arbitrary
- The arbitrary method of distribution will allocate processes in-order as listed in the file designated by the environment variable SLURM_HOSTFILE. If this variable is set it will override any other method specified. If not set the method will default to block. The hostfile must contain at minimum the number of hosts requested, either one per line or comma separated. If specifying a task count (-n, --ntasks=<number>), your tasks will be laid out on the nodes in the order of the file.
Second distribution method (distribution of tasks across sockets within a node):
- block
- The block distribution method will distribute tasks to sockets such that consecutive tasks share a socket.
- cyclic
- The cyclic distribution method will distribute tasks to sockets such that consecutive tasks are distributed over consecutive sockets (in a round-robin fashion).
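For example, to request tasks distributed cyclically across nodes and in blocks across sockets within each node, one might use the following (a sketch):

    $ sbatch --distribution=cyclic:block myscript.sh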
- --mail-type=<type>
- Notify user by email when certain event types occur. Valid
type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state
change). The user to be notified is indicated with --mail-user.
- --mail-user=<user>
- User to receive email notification of state changes as
defined by --mail-type. The default value is the submitting user.
- --mem=<MB>
- Specify the real memory required per node in MegaBytes.
Default value is DefMemPerNode and the maximum value is
MaxMemPerNode. If configured, both parameters can be seen using
the scontrol show config command. This parameter would generally be
used if whole nodes are allocated to jobs (
SelectType=select/linear). Also see --mem-per-cpu.
--mem and --mem-per-cpu are mutually exclusive.
- --mem-per-cpu=<MB>
- Minimum memory required per allocated CPU in MegaBytes.
Default value is DefMemPerCPU and the maximum value is
MaxMemPerCPU (see exception below). If configured, both
parameters can be seen using the scontrol show config command. Note
that if the job's --mem-per-cpu value exceeds the configured
MaxMemPerCPU, then the user's limit will be treated as a memory
limit per task; --mem-per-cpu will be reduced to a value no larger
than MaxMemPerCPU; --cpus-per-task will be set and value of
--cpus-per-task multiplied by the new --mem-per-cpu value
will equal the original --mem-per-cpu value specified by the user.
This parameter would generally be used if individual processors are
allocated to jobs ( SelectType=select/cons_res). Also see
--mem. --mem and --mem-per-cpu are mutually
exclusive.
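As a worked example (the numbers are illustrative), a job requesting 8 tasks at 2048 MB per CPU asks for 8 x 2048 MB = 16384 MB in total, however the tasks end up distributed across nodes:

    #SBATCH --ntasks=8
    #SBATCH --mem-per-cpu=2048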
- --mem_bind=[{quiet,verbose},]type
- Bind tasks to memory. Used only when the task/affinity
plugin is enabled and the NUMA memory functions are available. Note
that the resolution of CPU and memory binding may differ on some
architectures. For example, CPU binding may be performed at the level
of the cores within a processor while memory binding will be performed at
the level of nodes, where the definition of "nodes" may differ
from system to system. The use of any type other than
"none" or "local" is not recommended. If you
want greater control, try running a simple test code with the options
"--cpu_bind=verbose,none --mem_bind=verbose,none" to determine
the specific configuration.
SLURM_MEM_BIND_VERBOSE SLURM_MEM_BIND_TYPE SLURM_MEM_BIND_LIST
- q[uiet]
- quietly bind before task runs (default)
- v[erbose]
- verbosely report binding before task runs
- no[ne]
- don't bind tasks to memory (default)
- rank
- bind by task rank (not recommended)
- local
- Use memory local to the processor in use
- map_mem:<list>
- bind by mapping a node's memory to tasks as specified where <list> is <cpuid1>,<cpuid2>,...<cpuidN>. CPU IDs are interpreted as decimal values unless they are preceded with '0x' in which case they are interpreted as hexadecimal values (not recommended)
- mask_mem:<list>
- bind by setting memory masks on tasks as specified where <list> is <mask1>,<mask2>,...<maskN>. memory masks are always interpreted as hexadecimal values. Note that masks must be preceded with a '0x' if they don't begin with [0-9] so they are seen as numerical values by srun.
- help
- show this help message
- --mincpus=<n>
- Specify a minimum number of logical cpus/processors per
node.
- -N, --nodes=<minnodes[-maxnodes]>
- Request that a minimum of minnodes nodes be
allocated to this job. A maximum node count may also be specified with
maxnodes. If only one number is specified, this is used as both the
minimum and maximum node count. The partition's node limits supersede
those of the job. If a job's node limits are outside of the range
permitted for its associated partition, the job will be left in a PENDING
state. This permits possible execution at a later time, when the partition
limit is changed. If a job node limit exceeds the number of nodes
configured in the partition, the job will be rejected. Note that the
environment variable SLURM_NNODES will be set to the count of nodes
actually allocated to the job. See the ENVIRONMENT VARIABLES
section for more information. If -N is not specified, the default
behavior is to allocate enough nodes to satisfy the requirements of the
-n and -c options. The job will be allocated as many nodes
as possible within the range specified and without delaying the initiation
of the job. The node count specification may include a numeric value
followed by a suffix of "k" (multiplies numeric value by 1,024)
or "m" (multiplies numeric value by 1,048,576).
- -n, --ntasks=<number>
- sbatch does not launch tasks, it requests an allocation of
resources and submits a batch script. This option advises the SLURM
controller that job steps run within the allocation will launch a maximum
of number tasks and to provide for sufficient resources. The
default is one task per node, but note that the --cpus-per-task
option will change this default.
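A common pattern (values illustrative) is to specify tasks and CPUs per task and let SLURM choose the node count:

    #SBATCH --ntasks=16
    #SBATCH --cpus-per-task=2
    # SLURM allocates enough nodes to provide 16 x 2 = 32 CPUs.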
- --network=<type>
- Specify the communication protocol to be used. This option
is supported on AIX systems. Since POE is used to launch tasks, this
option is not normally used or is specified using the SLURM_NETWORK
environment variable. The interpretation of type is system
dependent. For systems with an IBM Federation switch, the following
comma-separated and case insensitive types are recognized: IP (the
default is user-space), SN_ALL, SN_SINGLE, BULK_XFER
and adapter names (e.g. SNI0 and SNI1). For more
information, on IBM systems see poe documentation on the
environment variables MP_EUIDEVICE and MP_USE_BULK_XFER.
Note that only four job steps may be active at once on a node with the
BULK_XFER option due to limitations in the Federation switch
driver.
- --nice[=adjustment]
- Run the job with an adjusted scheduling priority within
SLURM. With no adjustment value the scheduling priority is decreased by
100. The adjustment range is from -10000 (highest priority) to 10000
(lowest priority). Only privileged users can specify a negative
adjustment. NOTE: This option is presently ignored if
SchedulerType=sched/wiki or SchedulerType=sched/wiki2.
- --no-requeue
- Specifies that the batch job should not be requeued after
node failure. Setting this option will prevent system administrators from
being able to restart the job (for example, after a scheduled downtime).
When a job is requeued, the batch script is initiated from its beginning.
Also see the --requeue option. The JobRequeue configuration
parameter controls the default behavior on the cluster.
- --ntasks-per-core=<ntasks>
- Request the maximum ntasks be invoked on each core.
Meant to be used with the --ntasks option. Related to
--ntasks-per-node except at the core level instead of the node
level. Masks will automatically be generated to bind the tasks to specific
cores unless --cpu_bind=none is specified. NOTE: This option is not
supported unless SelectTypeParameters=CR_Core or
SelectTypeParameters=CR_Core_Memory is configured.
- --ntasks-per-socket=<ntasks>
- Request the maximum ntasks be invoked on each
socket. Meant to be used with the --ntasks option. Related to
--ntasks-per-node except at the socket level instead of the node
level. Masks will automatically be generated to bind the tasks to specific
sockets unless --cpu_bind=none is specified. NOTE: This option is
not supported unless SelectTypeParameters=CR_Socket or
SelectTypeParameters=CR_Socket_Memory is configured.
- --ntasks-per-node=<ntasks>
- Request the maximum ntasks be invoked on each node.
Meant to be used with the --nodes option. This is related to
--cpus-per-task=ncpus, but does not require knowledge of
the actual number of cpus on each node. In some cases, it is more
convenient to be able to request that no more than a specific number of
tasks be invoked on each node. Examples of this include submitting a
hybrid MPI/OpenMP app where only one MPI "task/rank" should be
assigned to each node while allowing the OpenMP portion to utilize all of
the parallelism present in the node, or submitting a single
setup/cleanup/monitoring job to each node of a pre-existing allocation as
one step in a larger job script.
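A hedged sketch of the hybrid MPI/OpenMP case described above, assuming (for illustration only) nodes with 8 CPUs each and an application named hybrid_app:

    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=8
    export OMP_NUM_THREADS=8   # match --cpus-per-task
    srun ./hybrid_app          # one MPI rank per node, 8 OpenMP threads each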
- -O, --overcommit
- Overcommit resources. Normally, sbatch will allocate
one task per processor. By specifying --overcommit you are
explicitly allowing more than one task per processor. However no more than
MAX_TASKS_PER_NODE tasks are permitted to execute per node.
- -o, --output=<filename pattern>
- Instruct SLURM to connect the batch script's standard
output directly to the file name specified in the "filename
pattern". By default both standard output and standard error are
directed to a file of the name "slurm-%j.out", where the
"%j" is replaced with the job allocation number. See the
--input option for filename specification options.
- --open-mode=append|truncate
- Open the output and error files using append or truncate
mode as specified. The default value is specified by the system
configuration parameter JobFileAppend.
- -p, --partition=<partition_names>
- Request a specific partition for the resource allocation.
If not specified, the default behavior is to allow the slurm controller to
select the default partition as designated by the system administrator. If
the job can use more than one partition, specify their names in a comma
separated list and the one offering earliest initiation will be used.
- --propagate[=rlimits]
- Allows users to specify which of the modifiable (soft) resource limits to propagate to the compute nodes and apply to their jobs. If rlimits is not specified, then all resource limits will be propagated. The following rlimit names are supported by SLURM (although some options may not be supported on some systems); a usage example follows the list:
- ALL
- All limits listed below
- AS
- The maximum address space for a process
- CORE
- The maximum size of core file
- CPU
- The maximum amount of CPU time
- DATA
- The maximum size of a process's data segment
- FSIZE
- The maximum size of files created. Note that if the user sets FSIZE to less than the current size of the slurmd.log, job launches will fail with a 'File size limit exceeded' error.
- MEMLOCK
- The maximum size that may be locked into memory
- NOFILE
- The maximum number of open files
- NPROC
- The maximum number of processes available
- RSS
- The maximum resident set size
- STACK
- The maximum stack size
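For example, to propagate only the stack size and locked-memory limits to the compute nodes:

    $ sbatch --propagate=STACK,MEMLOCK myscript.sh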
- -Q, --quiet
- Suppress informational messages from sbatch. Errors will
still be displayed.
- --qos=<qos>
- Request a quality of service for the job. QOS values can be
defined for each user/cluster/account association in the SLURM database.
Users will be limited to their association's defined set of QOS values
when the SLURM configuration parameter, AccountingStorageEnforce,
includes "qos" in its definition.
- --requeue
- Specifies that the batch job should be requeued after node
failure. When a job is requeued, the batch script is initiated from its
beginning. Also see the --no-requeue option. The JobRequeue
configuration parameter controls the default behavior on the cluster.
- --reservation=<name>
- Allocate resources for the job from the named reservation.
- -s, --share
- The job allocation can share nodes with other running jobs.
This is the opposite of --exclusive; whichever option is seen last on the
command line will be used. The default shared/exclusive behavior depends
on system configuration and the partition's Shared option takes
precedence over the job's option. This option may result in the allocation
being granted sooner than if --share were not set, and may allow
higher system utilization, but application performance will likely suffer
due to competition for resources within a node.
- --signal=<sig_num>[@<sig_time>]
- When a job is within sig_time seconds of its end
time, send it the signal sig_num. Due to the resolution of event
handling by SLURM, the signal may be sent up to 60 seconds earlier than
specified. sig_num may either be a signal number or name (e.g.
"10" or "USR1"). sig_time must have integer
value between zero and 65535. By default, no signal is sent before the
job's end time. If a sig_num is specified without any
sig_time, the default time will be 60 seconds.
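For example, to have SLURM send SIGUSR1 roughly two minutes before the job's time limit expires, so the application can checkpoint or clean up (a sketch; the lead time is arbitrary):

    #SBATCH --signal=USR1@120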
- --sockets-per-node=<sockets>
- Restrict node selection to nodes with at least the
specified number of sockets. See additional information under -B
option above when task/affinity plugin is enabled.
- --switches=<count>[@<max-time>]
- When a tree topology is used, this defines the maximum
count of switches desired for the job allocation and optionally the
maximum time to wait for that number of switches. If SLURM finds an
allocation containing more switches than the count specified, the job
will remain pending until it either finds an allocation with the desired
switch count or the time limit expires. By default there is no switch count limit
and there is no delay in starting the job. The job's maximum time delay
may be limited by the system administrator using the
SchedulerParameters configuration parameter with the
max_switch_wait parameter option.
- -t, --time=<time>
- Set a limit on the total run time of the job allocation. If
the requested time limit exceeds the partition's time limit, the job will
be left in a PENDING state (possibly indefinitely). The default time limit
is the partition's time limit. When the time limit is reached, each task
in each job step is sent SIGTERM followed by SIGKILL. The interval between
signals is specified by the SLURM configuration parameter KillWait.
A time limit of zero requests that no time limit be imposed. Acceptable
time formats include "minutes", "minutes:seconds",
"hours:minutes:seconds", "days-hours",
"days-hours:minutes" and "days-hours:minutes:seconds".
- --tasks-per-node=<n>
- Specify the number of tasks to be launched per node.
Equivalent to --ntasks-per-node.
- --threads-per-core=<threads>
- Restrict node selection to nodes with at least the
specified number of threads per core. See additional information under
-B option above when task/affinity plugin is enabled.
- --time-min=<time>
- Set a minimum time limit on the job allocation. If
specified, the job may have its --time limit lowered to a value no
lower than --time-min if doing so permits the job to begin
execution earlier than otherwise possible. The job's time limit will not
be changed after the job is allocated resources. This is performed by a
backfill scheduling algorithm to allocate resources otherwise reserved for
higher priority jobs. Acceptable time formats include "minutes",
"minutes:seconds", "hours:minutes:seconds",
"days-hours", "days-hours:minutes" and
"days-hours:minutes:seconds".
- --tmp=<MB>
- Specify a minimum amount of temporary disk space.
- -u, --usage
- Display brief help message and exit.
- --uid=<user>
- Attempt to submit and/or run a job as user instead
of the invoking user id. The invoking user's credentials will be used to
check access permissions for the target partition. User root may use this
option to run jobs as a normal user in a RootOnly partition for example.
If run as root, sbatch will drop its permissions to the uid
specified after node allocation is successful. user may be the user
name or numerical user ID.
- -V, --version
- Display version information and exit.
- -v, --verbose
- Increase the verbosity of sbatch's informational messages.
Multiple -v's will further increase sbatch's verbosity. By default
only errors will be displayed.
- -w, --nodelist=<node name list>
- Request a specific list of node names. The list may be
specified as a comma-separated list of node names, or a range of node
names (e.g. mynode[1-5,7,...]). Duplicate node names in the list will be
ignored. The order of the node names in the list is not important; the
node names will be sorted by SLURM.
- --wait-all-nodes=<value>
- Controls when the execution of the command begins. By default the job will begin execution as soon as the allocation is made.
- 0
- Begin execution as soon as allocation can be made. Do not wait for all nodes to be ready for use (i.e. booted).
- 1
- Do not begin execution until all nodes are ready for use.
- --wckey=<wckey>
- Specify wckey to be used with job. If TrackWCKey=no
(default) in the slurm.conf this value is ignored.
- --wrap=<command string>
- Sbatch will wrap the specified command string in a simple
"sh" shell script, and submit that script to the slurm
controller. When --wrap is used, a script name and arguments may not be
specified on the command line; instead the sbatch-generated wrapper script
is used.
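For example, to submit a one-line command without writing a script file:

    $ sbatch -N2 --wrap="srun hostname"

This is roughly equivalent to submitting a script containing only "#!/bin/sh" followed by "srun hostname".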
- -x, --exclude=<node name list>
- Explicitly exclude certain nodes from the resources granted
to the job.
- --blrts-image=<path>
- Path to the Blue Gene/L Run Time Supervisor (blrts) image for the
bluegene block. BGL only. Default from bluegene.conf if not set.
- --cnload-image=<path>
- Path to compute node image for bluegene block. BGP only.
Default from bluegene.conf if not set.
- --conn-type=<type>
- Require the partition connection type to be of a certain
type. On Blue Gene the acceptable types are MESH, TORUS and NAV.
If NAV, or if not set, then SLURM will try to fit a TORUS, else MESH. You
should not normally set this option. SLURM will normally allocate a TORUS
if possible for a given geometry. If running on a BGP system and wanting
to run in HTC mode (only for one midplane and below), you can use HTC_S for
SMP, HTC_D for Dual, HTC_V for virtual node mode, and HTC_L for Linux
mode. A comma-separated list of connection types may be specified, one
for each dimension.
- -g, --geometry=<XxYxZ>
- Specify the geometry requirements for the job. The three
numbers represent the required geometry giving dimensions in the X, Y and
Z directions. For example "--geometry=2x3x4", specifies a block
of nodes having 2 x 3 x 4 = 24 nodes (actually base partitions on Blue
Gene).
- --ioload-image=<path>
- Path to io image for bluegene block. BGP only. Default from
bluegene.conf if not set.
- --linux-image=<path>
- Path to linux image for bluegene block. BGL only. Default
from bluegene.conf if not set.
- --mloader-image=<path>
- Path to mloader image for bluegene block. Default from
bluegene.conf if not set.
- -R, --no-rotate
- Disables rotation of the job's requested geometry in order
to fit an appropriate block. By default the specified geometry can rotate
in three dimensions.
- --ramdisk-image=<path>
- Path to ramdisk image for bluegene block. BGL only. Default
from bluegene.conf if not set.
- --reboot
- Force the allocated nodes to reboot before starting the
job.
INPUT ENVIRONMENT VARIABLES¶
Upon startup, sbatch will read and handle the options set in the following environment variables. Note that environment variables will override any options set in a batch script, and command line options will override any environment variables.
- SBATCH_ACCOUNT
- Same as -A, --account
- SBATCH_ACCTG_FREQ
- Same as --acctg-freq
- SLURM_CHECKPOINT
- Same as --checkpoint
- SLURM_CHECKPOINT_DIR
- Same as --checkpoint-dir
- SBATCH_CLUSTERS or SLURM_CLUSTERS
- Same as --clusters
- SBATCH_CONN_TYPE
- Same as --conn-type
- SBATCH_CPU_BIND
- Same as --cpu_bind
- SBATCH_DEBUG
- Same as -v, --verbose
- SBATCH_DISTRIBUTION
- Same as -m, --distribution
- SBATCH_EXCLUSIVE
- Same as --exclusive
- SLURM_EXIT_ERROR
- Specifies the exit code generated when a SLURM error occurs (e.g. invalid options). This can be used by a script to distinguish application exit codes from various SLURM error conditions.
- SBATCH_EXPORT
- Same as --export
- SBATCH_GEOMETRY
- Same as -g, --geometry
- SBATCH_IMMEDIATE
- Same as -I, --immediate
- SBATCH_JOBID
- Same as --jobid
- SBATCH_JOB_NAME
- Same as -J, --job-name
- SBATCH_MEM_BIND
- Same as --mem_bind
- SBATCH_NETWORK
- Same as --network
- SBATCH_NO_REQUEUE
- Same as --no-requeue
- SBATCH_NO_ROTATE
- Same as -R, --no-rotate
- SBATCH_OPEN_MODE
- Same as --open-mode
- SBATCH_OVERCOMMIT
- Same as -O, --overcommit
- SBATCH_PARTITION
- Same as -p, --partition
- SBATCH_QOS
- Same as --qos
- SBATCH_SIGNAL
- Same as --signal
- SBATCH_TIMELIMIT
- Same as -t, --time
- SBATCH_WAIT_ALL_NODES
- Same as --wait-all-nodes
OUTPUT ENVIRONMENT VARIABLES¶
The SLURM controller will set the following variables in the environment of the batch script.
- BASIL_RESERVATION_ID
- The reservation ID on Cray systems running ALPS/BASIL only.
- SLURM_CPU_BIND
- Set to value of the --cpu_bind option.
- SLURM_JOB_ID (and SLURM_JOBID for backwards compatibility)
- The ID of the job allocation.
- SLURM_JOB_CPUS_PER_NODE
- Count of processors available to the job on this node. Note the select/linear plugin allocates entire nodes to jobs, so the value indicates the total count of CPUs on the node. The select/cons_res plugin allocates individual processors to jobs, so this number indicates the number of processors on this node allocated to the job.
- SLURM_JOB_DEPENDENCY
- Set to value of the --dependency option.
- SLURM_JOB_NAME
- Name of the job.
- SLURM_JOB_NODELIST (and SLURM_NODELIST for backwards compatibility)
- List of nodes allocated to the job.
- SLURM_JOB_NUM_NODES (and SLURM_NNODES for backwards compatibility)
- Total number of nodes in the job's resource allocation.
- SLURM_MEM_BIND
- Set to value of the --mem_bind option.
- SLURM_TASKS_PER_NODE
- Number of tasks to be initiated on each node. Values are comma separated and in the same order as SLURM_NODELIST. If two or more consecutive nodes are to have the same task count, that count is followed by "(x#)" where "#" is the repetition count. For example, "SLURM_TASKS_PER_NODE=2(x3),1" indicates that the first three nodes will each execute two tasks and the fourth node will execute one task.
- MPIRUN_NOALLOCATE
- Do not allocate a block (Blue Gene systems only).
- MPIRUN_NOFREE
- Do not free a block (Blue Gene systems only).
- SLURM_NTASKS_PER_CORE
- Number of tasks requested per core. Only set if the --ntasks-per-core option is specified.
- SLURM_NTASKS_PER_NODE
- Number of tasks requested per node. Only set if the --ntasks-per-node option is specified.
- SLURM_NTASKS_PER_SOCKET
- Number of tasks requested per socket. Only set if the --ntasks-per-socket option is specified.
- SLURM_RESTART_COUNT
- If the job has been restarted due to system failure or has been explicitly requeued, this will be set to the number of times the job has been restarted.
- SLURM_SUBMIT_DIR
- The directory from which sbatch was invoked.
- MPIRUN_PARTITION
- The block name on Blue Gene systems only.
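As a sketch, a batch script might use some of these variables for logging (which variables are set depends on the submission options):

    echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) on $SLURM_JOB_NUM_NODES node(s): $SLURM_JOB_NODELIST"
    echo "Submitted from $SLURM_SUBMIT_DIR"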
EXAMPLES¶
Specify a batch script by filename on the command line. The batch script specifies a 1 minute time limit for the job (the script body shown is a representative sketch):

    $ cat myscript
    #!/bin/sh
    #SBATCH --time=1
    srun hostname | sort

    $ sbatch -N4 myscript

Pass a batch script to sbatch on standard input (the script body is again illustrative):

    $ sbatch -N4 <<EOF
    #!/bin/sh
    srun hostname | sort
    EOF
COPYING¶
Copyright (C) 2006-2007 The Regents of the University of California. Copyright (C) 2008-2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE-OCEC-09-009. All rights reserved. This file is part of SLURM, a resource management program. For details, see <http://www.schedmd.com/slurmdocs/>. SLURM is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. SLURM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

SEE ALSO¶
sinfo(1), sattach(1), salloc(1), squeue(1), scancel(1), scontrol(1), slurm.conf(5), sched_setaffinity(2), numa(3)

SLURM 2.3 | August 2011