{{short description|Free and open-source job scheduler for Linux and similar computers}}
{{Multiple issues|
{{primary sources|date=July 2010}}
{{Notability|date=July 2010}}
}}
{{Infobox software
| title = Slurm
| logo = Slurm logo.svg
| logo caption =
| developer = [[SchedMD]]
| screenshot = <!-- Image name is enough -->
| caption =
| released = <!-- {{Start date and age|YYYY|MM|DD|df=yes/no}} -->
| discontinued =
| latest release version = {{URL|https://www.schedmd.com/downloads.php}}
| latest release date = <!-- {{Start date and age|YYYY|MM|DD|df=yes/no}} -->
| programming language = [[C (programming language)|C]]
| operating system = [[Linux]], [[Berkeley Software Distribution|BSD]]s
| genre = Job scheduler for clusters and supercomputers
| license = [[GNU General Public License]]
| website = {{URL|https://slurm.schedmd.com}}
}}


The '''Slurm Workload Manager''', formerly known as '''Simple Linux Utility for Resource Management''' ('''SLURM'''), or simply '''Slurm''', is a [[free and open-source]] [[job scheduler]] for [[Linux]] and [[Unix-like]] [[kernel (operating system)|kernels]], used by many of the world's [[supercomputer]]s and [[computer cluster]]s.

It provides three key functions:
* allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
* providing a framework for starting, executing, and monitoring work, typically a parallel job such as [[Message Passing Interface]] (MPI), on a set of allocated nodes, and
* arbitrating contention for resources by managing a queue of pending jobs.

Slurm is the workload manager on about 60% of the [[TOP500]] supercomputers.<ref>{{Cite web|url=https://hpcc.usc.edu/support/documentation/slurm/|title=Running a Job on HPC using Slurm {{!}} HPC {{!}} USC|website=hpcc.usc.edu|access-date=2019-03-05|archive-date=2019-03-06|archive-url=https://web.archive.org/web/20190306044130/https://hpcc.usc.edu/support/documentation/slurm/|url-status=dead}}</ref>

Slurm uses a [[Best-fit_bin_packing|best fit algorithm]] based on [[Hilbert curve scheduling]] or [[fat tree]] network topology in order to optimize locality of task assignments on parallel computers.<ref name=Eitan>{{Cite conference|doi=10.1007/978-3-642-04633-9_8|title=Effects of Topology-Aware Allocation Policies on Scheduling Performance|conference=Job Scheduling Strategies for Parallel Processing|series=Lecture Notes in Computer Science|year=2009|last1=Pascual|first1=Jose Antonio|last2=Navaridas|first2=Javier|last3=Miguel-Alonso|first3=Jose|isbn=978-3-642-04632-2|volume=5798|pages=138–144}}</ref>


==History==
Slurm began development as a collaborative effort primarily by [[Lawrence Livermore National Laboratory]], [[SchedMD]],<ref>{{cite web|url=https://www.schedmd.com/ |title=Slurm Commercial Support, Development, and Installation |publisher=SchedMD |access-date=2014-02-23}}</ref> Linux NetworX, [[Hewlett-Packard]], and [[Groupe Bull]] as a Free Software resource manager. It was inspired by the closed source [[Quadrics_(company)|Quadrics RMS]] and shares a similar syntax. The name is a reference to the [[Fry and the Slurm Factory#Slurm|soda]] in [[Futurama]].<ref>{{cite web|url=https://slurm.schedmd.com/slurm_design.pdf |title=SLURM: Simple Linux Utility for Resource Management |date=23 June 2003 |access-date=11 January 2016}}</ref> Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.

{{As of|2021|November}}, the [[TOP500]] list of the most powerful computers in the world indicates that Slurm is the workload manager on more than half of the top ten systems.

==Structure==
Slurm's design is very modular with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization.
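As an illustration, a minimal {{code|slurm.conf}} might look like the following sketch; the cluster, host and node names are hypothetical, and a real site would normally set many more parameters (database accounting, for instance, additionally involves the {{code|slurmdbd}} daemon):

<syntaxhighlight lang="ini">
# Minimal illustrative slurm.conf (hypothetical names; not a complete site configuration)
ClusterName=example
SlurmctldHost=ctl01             # host running the slurmctld control daemon

# Plugins are selected through configuration parameters, for example:
SchedulerType=sched/backfill    # backfill scheduling plugin
SelectType=select/cons_tres     # allocate individual CPUs/GPUs rather than whole nodes

# Compute nodes and a default partition (job queue):
NodeName=node[01-04] CPUs=16 RealMemory=64000 State=UNKNOWN
PartitionName=batch Nodes=node[01-04] Default=YES MaxTime=24:00:00 State=UP
</syntaxhighlight>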


==Features==
Slurm features include:{{Citation needed|date=September 2014}}

* No single point of failure, backup daemons, fault-tolerant job options
* Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
* High performance (up to 1000 job submissions per second and 600 job executions per second)
* Free and open-source software ([[GNU General Public License]])
* Highly configurable with about 100 plugins
* Fair-share scheduling with hierarchical bank accounts
* Preemptive and gang scheduling (time-slicing of parallel jobs)
* Integrated with database for accounting and configuration
* Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
* Advanced reservation
* Idle nodes can be powered down
* Different operating systems can be booted for each job
* Scheduling for generic resources (e.g. [[Graphics processing unit]])
* Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
* Resource limits by user or bank account
* Accounting for power consumption by job
* Support of IBM Parallel Environment (PE/POE)
* Support for job arrays
* Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
* Sophisticated multifactor job prioritization algorithms
* Support for MapReduce+
* Support for [[burst buffer]] that accelerates scientific data movement


The following features were announced for version 14.11 of Slurm, which was released in November 2014:<ref>{{cite web|url=https://slurm.schedmd.com/news.html |title=Slurm - What's New |publisher=SchedMD |access-date=2014-08-29}}</ref>

* Improved job array data structure and scalability
* Support for heterogeneous generic resources
* Add user options to set the CPU governor
* Automatic job requeue policy based on exit value
* Report API use by user, type, count and time consumed
* Communication gateway nodes improve scalability

==Supported platforms==
Slurm is primarily developed to work alongside Linux distributions, although there is also support for a few other POSIX-based operating systems, including BSDs ([[FreeBSD]], [[NetBSD]] and [[OpenBSD]]).<ref>Slurm Platforms</ref> Slurm also supports several unique computer architectures, including:
* IBM BlueGene/Q models, including the 20 petaflop IBM Sequoia
* [[Cray]] XT, XE and Cascade
* [[Tianhe-2]], a 33.9 petaflop system with 32,000 Intel Ivy Bridge chips and 48,000 Intel Xeon Phi chips with a total of 3.1 million cores
* IBM Parallel Environment
* [[Anton (computer)|Anton]]


==License==
Slurm is available under the [[GNU General Public License#History|GNU General Public License v2]].

==Commercial support==
In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source, provides development, level 3 commercial support and training services. Commercial support is also available from Bull, Cray, and Science + Computing.

==Usage==
A Slurm system has three main parts:

* a central {{code|slurmctld}} (Slurm control) [[Daemon (computing)|daemon]] running on a single control node (optionally with [[failover]] backups);
* many computing nodes, each with one or more {{code|slurmd}} daemons;
* clients that connect to the control node, often via [[Secure Shell|SSH]].

Clients issue commands to the control daemon, which accepts them and distributes the work to the compute daemons.

For clients, the main commands are {{code|srun}} (run an interactive job), {{code|sbatch}} (queue up a batch job), {{code|squeue}} (print the job queue), and {{code|scancel}} (remove a job from the queue).
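A session might then look like the following sketch (the script name and job id are hypothetical):

<syntaxhighlight lang="bash">
sbatch myjob.sh        # queue a batch job; prints an id such as "Submitted batch job 1234"
squeue --user=$USER    # list this user's pending and running jobs
scancel 1234           # remove the job with id 1234 from the queue
</syntaxhighlight>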

Jobs can be run in [[Batch processing|batch mode]] or [[Interactive computing|interactive mode]]. In interactive mode, a compute node starts a shell, connects the client to it, and runs the job there; the user can then observe and interact with the job while it runs. Interactive jobs are typically used for initial debugging; once debugged, the same job can be submitted with {{code|sbatch}}. For a batch-mode job, its {{code|stdout}} and {{code|stderr}} streams are typically redirected to text files for later inspection.
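A batch job is an ordinary shell script whose {{code|#SBATCH}} comment lines carry options for {{code|sbatch}}, as in the following sketch (the resource values are illustrative; partitions and limits vary between sites):

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=example        # name shown by squeue
#SBATCH --nodes=2                 # number of nodes to allocate
#SBATCH --ntasks-per-node=16      # tasks (e.g. MPI ranks) per node
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=example_%j.out   # stdout/stderr file; %j expands to the job id

# srun launches the tasks on the nodes that Slurm allocated
srun ./my_mpi_program
</syntaxhighlight>

Submitting the script with {{code|sbatch}} returns immediately with the assigned job id; the job itself starts once the requested resources become available.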


==See also==
* [[TORQUE]]
* [[Univa Grid Engine]]
* [[Platform LSF]]


==References==
{{Reflist}}

==Further reading==
{{Div col}}
* {{Cite conference|doi=10.1007/978-3-540-78699-3_3|title=Enhancing an Open Source Resource Manager with Multi-core/Multi-threaded Support|conference=Job Scheduling Strategies for Parallel Processing|series=[[Lecture Notes in Computer Science]]|year=2008|last1=Balle|first1=Susanne M.|last2=Palermo|first2=Daniel J.|isbn=978-3-540-78698-6|volume=4942|page=37}}
* {{Cite journal|last1=Jette|first1=M.|first2=M.|last2=Grondona|url=https://slurm.schedmd.com/slurm_design.pdf|title=SLURM: Simple Linux Utility for Resource Management|journal=Proceedings of ClusterWorld Conference and Expo|location=San Jose, California|date=June 2003}}
* {{cite journal|last=Layton|first=Jeffrey B.|url=http://www.linux-mag.com/id/7239/1/|archive-url=https://web.archive.org/web/20090211041650/http://www.linux-mag.com/id/7239/1/|url-status=usurped|archive-date=February 11, 2009|title=Caos NSA and Perceus: All-in-one Cluster Software Stack|journal=Linux Magazine|date=5 February 2009}}
* {{cite conference|doi=10.1007/10968987_3|title=SLURM: Simple Linux Utility for Resource Management|conference=Job Scheduling Strategies for Parallel Processing|series=Lecture Notes in Computer Science|year=2003|last1=Yoo|first1=Andy B.|last2=Jette|first2=Morris A.|last3=Grondona|first3=Mark|isbn=978-3-540-20405-3|volume=2862|page=[https://archive.org/details/jobschedulingstr0000jssp_q2o1/page/44 44]|citeseerx=10.1.1.10.6834|url-access=registration|url=https://archive.org/details/jobschedulingstr0000jssp_q2o1/page/44}}
{{Div col end}}

==External links==
* [https://www.schedmd.com SchedMD]
* [https://www.open-mpi.org/video/slurm/Slurm_EMC_Dec2012.pdf Slurm Workload Manager Architecture Configuration and Use]
* [https://s3-us-west-2.amazonaws.com/imss-hpc/index.html Caltech HPC Center: Job Script Generator]


{{Linux kernel}}

[[Category:Grid computing]]
[[Category:Cluster computing]]
[[Category:Free software programmed in C]]
[[Category:Software using the GPL license]]
