
Using Longleaf. ITS Research Computing. Karl Eklund, Sandeep Sarangi, Mark Reed. Outline: What is a (compute) cluster? What is HTC? HTC tips and tricks. What is special about LL? LL technical specifications and types of nodes.

What does a job scheduler do? SLURM fundamentals: a) submitting, b) querying. File systems. Logging in and transferring files. User environment (modules) and applications. Lab exercises cover how to set up the environment and run some commonly used apps: SAS,

R, python, matlab, ... What is a compute cluster? What exactly is Longleaf? Some compute cluster advantages: fast interconnect, tightly coupled; aggregated compute resources;

the ability to run parallel jobs to access more compute power and more memory; large (scratch) file spaces; an installed software base; scheduling and job management; high availability;

and data backup. Why might you use a cluster? Working with large data files. Needing to run jobs at scale. Wanting access to specialized hardware: large memory, GPUs, multi-core machines, etc. Your desktop is no longer sufficient. General computing concepts. Serial computing: code that uses one compute core. Multi-core computing: code that uses multiple cores on a single machine, also

referred to as threaded or shared-memory computing. Due to heat issues, clock speeds have plateaued, so you get more cores instead. Parallel computing: code that uses more than one core. Shared: the cores are all on the same host (machine). Distributed: the cores can be spread across different machines. Massively parallel: using thousands or more cores, possibly with an accelerator such as a GPU or Phi. Longleaf is geared towards HTC, with a

focus on large numbers of serial and single-node jobs, large memory, and high I/O requirements, using the SLURM job scheduler. What's in a name?

The pine tree is the official state tree, and 8 species of pine are native to NC, including the longleaf pine. Longleaf nodes. There are four types of nodes: general compute nodes; big data / high-I/O nodes; very large memory nodes; and

GPGPU nodes. File spaces: Longleaf storage. Your home directory: /nas/longleaf/home/<onyen>. Quota: 50 GB soft, 75 GB hard. Your scratch space: /pine/scr/<o>/<n>/<onyen>. Quota: 30 TB soft, 40 TB hard.
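To get a rough sense of how much of these quotas you are using, ordinary Linux commands work from a login node; this is only a generic sketch, and the paths follow the placeholder conventions above.
du -sh /nas/longleaf/home/<onyen>      # total size of your home directory
du -sh /pine/scr/<o>/<n>/<onyen>       # total size of your scratch space
df -h /pine                            # overall usage of the Pine filesystem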

Scratch has a 36-day file deletion policy. Pine is a high-performance, high-throughput parallel filesystem (GPFS, a.k.a. IBM Spectrum Scale). The Longleaf compute nodes include local SSD disks for a GPFS Local Read-Only Cache (LRoC) that serves the most frequent metadata and data/file requests from the node itself, thus eliminating traversals of the network fabric and disk subsystem. Mass storage: long-term archival storage,

accessed via ~/ms. It looks like an ordinary disk file system, but the data is actually stored on tape. "Limitless" capacity (actually 2 TB; after that, talk to us).

The data is backed up. It is for storage only, not a work directory (i.e. don't run jobs from here). If you have many small files, use tar or zip to create a single file for better performance.
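For example, a minimal sketch of bundling a results directory before copying it to mass storage; the directory and archive names here are made up for illustration.
tar -czvf my_results.tar.gz my_results/     # bundle many small files into one compressed archive
cp my_results.tar.gz ~/ms/                  # copy the single archive, not the individual files
tar -xzvf ~/ms/my_results.tar.gz            # later: restore the files from the archive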

Sign up for this service on onyen.unc.edu. "To infinity and beyond" - Buzz Lightyear. User environment - modules. The user environment is managed by modules. This provides a convenient way to customize your environment and allows you to easily run your applications. Modules modify the user environment by

modifying and adding environment variables such as PATH or LD_LIBRARY_PATH. Typically you set these once and leave them. Optionally, you can have separate named collections of modules that you load/unload. Using Longleaf: once on Longleaf you can use module commands to update your environment with the applications you plan to use, e.g. module add matlab followed by module save. There are many module commands available for

controlling your module environment: http://help.unc.edu/help/modules-approach-to-software-management/ Common module commands: module list, module add, module rm, module save, module avail, module keyword, module spider, module help.
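A short example session is sketched below; the application name is illustrative, so check module avail on Longleaf for what is actually installed and whether a specific version must be given.
module avail matlab     # list the matlab modules that are available
module add matlab       # load the default matlab module into your environment
module list             # confirm what is currently loaded
module save             # save this set so it is restored at your next login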

More on modules: see http://help.unc.edu/CCM3_006660 and http://lmod.readthedocs.org Job scheduling and management: SLURM. What does a job scheduler and batch

system do? It manages resources: allocating user tasks to resources, monitoring tasks, process control, managing input and output, reporting status and availability, and

enforcing usage policies. SLURM is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters. As a cluster workload manager, SLURM has three key functions: it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work; it provides a framework for starting, executing, and monitoring work on the set of allocated nodes;

and it arbitrates contention for resources by managing a queue of pending work (https://slurm.schedmd.com/overview.html). Simplified view of batch job submission:

[Diagram: the user, logged in to the login node, submits a job with sbatch myscript.sbatch; the job is routed to the queue and dispatched to run on an available host which satisfies the job requirements.] Running programs on Longleaf: upon ssh-ing to Longleaf, you are on the login node. Programs SHOULD NOT be run on the login node. Submit programs to one of the many, many compute nodes.

Submit jobs using SLURM via the sbatch command (more on this later). Common SLURM commands: sbatch - submit jobs; squeue - view info on jobs in the scheduling queue, e.g. squeue -u <onyen>;

scancel - kill/cancel a submitted job; sinfo -s - show all partitions; sacct - job accounting information, e.g. sacct -j <jobid> --format='JobID,user,elapsed,cputime,totalCPU,MaxRSS,MaxVMSize,ncpus,NTasks,ExitCode'. Use man pages to get much more info: man sbatch.
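A typical monitoring session might look like the sketch below, where <onyen> and <jobid> are placeholders for your own user name and a real job number.
squeue -u <onyen>                                                # your queued and running jobs
sinfo -s                                                         # summary of all partitions
sacct -j <jobid> --format=JobID,Elapsed,MaxRSS,NCPUS,ExitCode    # accounting info for a job
scancel <jobid>                                                  # cancel a job you no longer need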

Submitting jobs: sbatch. Run large jobs out of scratch space; smaller jobs can run out of your home space. Usage: sbatch [sbatch_options] script_name. Common sbatch options: -o (--output=), -p (--partition=), -N (--nodes=), --mem=,

-t (--time=), -J (--job-name=), and -n (--ntasks=), used for parallel threaded jobs. There are two methods to submit non-interactive batch jobs: the most common is a submission script passed to sbatch (a minimal template follows); you can also submit directly from the command line with --wrap (shown later).
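Putting these options together, a minimal submission script might look like the sketch below; the resource values, file names, and program are placeholders to adapt, not recommendations.
#!/bin/bash
#SBATCH -p general          # partition to run in
#SBATCH -N 1                # one node
#SBATCH -n 1                # one task
#SBATCH -t 02:00:00         # 2-hour time limit
#SBATCH --mem=4g            # memory limit
#SBATCH -o myjob.%j.out     # output file; %j expands to the job ID
#SBATCH -J myjob            # job name
./my_program input.dat      # replace with the command you actually want to run
Save it as, e.g., myscript.sbatch and submit it with sbatch myscript.sbatch.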

SLURM terminology: SLURM groups nodes into collections called partitions, and each partition has a name. On Longleaf the general partition has the general compute nodes and big data nodes, the bigmem partition has the bigmem nodes, and the gpu and volta-gpu partitions have the gpu nodes. The partitions also serve special purposes, e.g. the interact partition is for interactive jobs, the bigmem partition is for very large memory jobs, the gpu and volta-gpu partitions are for gpu jobs, etc. SLURM specialized partitions: SNP (Single Node Parallel). The SNP partition is designed to support parallel jobs that are not wide enough to warrant

multi-node processing on Dogwood, yet require a sufficient percentage of the cores/memory of a single node to make scheduling a full node worthwhile. Available upon request only, with PI approval; you need -p snp and --qos=snp_access. Interactive job submissions: To bring up the Matlab GUI: srun -n1 -p interact --x11=first matlab -singleCompThread To bring up the Stata GUI: salloc -n1 -p interact --x11=first xstata-se To bring up a bash session: srun -n1 -p interact --x11=first --pty /bin/bash Note: for the GUI to display locally you will need an

X connection to the cluster. Matlab sample job submission script #1 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --mem=10g #SBATCH -n 1 matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out Submits a single cpu Matlab job. general partition, 7-day runtime limit, 10 GB memory limit.

Matlab sample job submission script #2 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 02:00:00 #SBATCH --mem=3g #SBATCH -n 24 matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out Submits a 24-core, single node Matlab job (i.e. using Matlab's Parallel Computing Toolbox). general partition, 2-hour runtime limit, 3 GB memory limit. Matlab sample job submission script #3

#!/bin/bash #SBATCH -p gpu #SBATCH -N 1 #SBATCH -t 30 #SBATCH --qos gpu_access #SBATCH --gres=gpu:1 #SBATCH -n 1 matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out Submits a single-gpu Matlab job. gpu partition, 30 minute runtime limit. Matlab sample job submission script #4 #!/bin/bash #SBATCH -p bigmem

#SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --qos bigmem_access #SBATCH -n 1 #SBATCH --mem=500g matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out Submits a single-cpu, single node large memory Matlab job. bigmem partition, 7-day runtime limit, 500 GB memory limit. R sample job submission script #1 #!/bin/bash #SBATCH -p general #SBATCH -N 1

#SBATCH -t 07-00:00:00 #SBATCH --mem=10g #SBATCH -n 1 R CMD BATCH --no-save mycode.R mycode.Rout Submits a single cpu R job. general partition, 7-day runtime limit, 10 GB memory limit. R sample job submission script #2 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 02:00:00 #SBATCH --mem=3g

#SBATCH -n 24 R CMD BATCH --no-save mycode.R mycode.Rout Submits a 24-core, single node R job (i.e. using one of R's parallel libraries). general partition, 2-hour runtime limit, 3 GB memory limit. R sample job submission script #3 #!/bin/bash #SBATCH -p bigmem #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --qos bigmem_access #SBATCH -n 1 #SBATCH --mem=500g

R CMD BATCH --no-save mycode.R mycode.Rout Submits a single-cpu, single node large-memory R job. bigmem partition, 7-day runtime limit, 500 GB memory limit. Python sample job submission script #1 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --mem=10g #SBATCH -n 1 python mycode.py

Submits a single cpu Python job. general partition, 7-day runtime limit, 10 GB memory limit. Python sample job submission script #2 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 02:00:00 #SBATCH --mem=3g #SBATCH -n 24 python mycode.py Submits a 24-core, single node Python job (i.e. using one of Python's parallel packages).

general partition, 2-hour runtime limit, 3 GB memory limit. Python sample job submission script #3 #!/bin/bash #SBATCH -p bigmem #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --qos bigmem_access #SBATCH -n 1 #SBATCH --mem=500g python mycode.py Submits a single-cpu, single node large-memory Python job. bigmem partition, 7-day runtime limit, 500 GB memory

limit Stata sample job submission script #1 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --mem=10g #SBATCH -n 1 stata-se -b do mycode.do Submits a single cpu Stata job. general partition, 7-day runtime limit, 10 GB memory limit.

Stata sample job submission script #2 #!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 02:00:00 #SBATCH --mem=3g #SBATCH -n 8 stata-mp -b do mycode.do Submits an 8-core, single node Stata/MP job. general partition, 2-hour runtime limit, 3 GB memory limit. Stata sample job submission script #3 #!/bin/bash

#SBATCH -p bigmem #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --qos bigmem_access #SBATCH -n 1 #SBATCH --mem=500g stata-se -b do mycode.do Submits a single-cpu, single node large-memory Stata job. bigmem partition, 7-day runtime limit, 500 GB memory limit. Printing job info at the end (using Matlab script #1)

#!/bin/bash #SBATCH -p general #SBATCH -N 1 #SBATCH -t 07-00:00:00 #SBATCH --mem=10g #SBATCH -n 1 matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out sacct -j $SLURM_JOB_ID --format='JobID,user,elapsed,cputime,totalCPU,MaxRSS,MaxVMSize,ncpus,NTasks,ExitCode' The sacct command at the end prints out some useful information for this job. Note the use of the SLURM environment variable $SLURM_JOB_ID for the job ID. The format picks out some useful info. See man sacct for

a complete list of all options. Run a job from the command line: you can submit without a batch script; simply use the --wrap option and enclose your entire command in double quotes (" "). Include all the additional sbatch options that you want on the line as well: sbatch -t 10:00 -n 1 -o slurm.%j --wrap="R CMD BATCH --no-save mycode.R mycode.Rout" Email example: #!/bin/bash #SBATCH --partition=general #SBATCH --nodes=1

#SBATCH --time=04-16:00:00 #SBATCH --mem=6G #SBATCH --ntasks=1 # comma separated list #SBATCH --mail-type=BEGIN,END #SBATCH --mail-user=<your email address> # Here are your mail-type options: NONE, BEGIN, END, FAIL, # REQUEUE, ALL, TIME_LIMIT, TIME_LIMIT_90, TIME_LIMIT_80, # ARRAY_TASKS date hostname echo "Hello, world!" Dependencies. Job 1:

#!/bin/bash #SBATCH --job-name=My_First_Job #SBATCH --partition=general #SBATCH --nodes=1 #SBATCH --time=04:00:00 #SBATCH --ntasks=1 sleep 10 Job 2: -bash-4.2$ cat job2.sbatch #!/bin/bash #SBATCH --job-name=My_Second_Job #SBATCH --partition=general #SBATCH --nodes=1 #SBATCH --time=04:00:00

#SBATCH --ntasks=1 sleep 10 % sbatch job1.sbatch Submitted batch job 5405575 % sbatch --dependency=after:5405575 job2.sbatch Submitted batch job 5405576 Other options: sbatch --dependency=after:5405575 job2.sbatch sbatch --dependency=afterany:5405575 job2.sbatch sbatch --dependency=aftercorr:5405575 job2.sbatch sbatch --dependency=afternotok:5405575 job2.sbatch sbatch --dependency=afterok:5405575 job2.sbatch sbatch --dependency=expand:5405575 job2.sbatch sbatch --dependency=singleton job2.sbatch
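If you prefer not to copy job IDs by hand, one approach (a sketch, not the only way) is to capture the ID with the --parsable option, which makes sbatch print just the job ID:
jobid=$(sbatch --parsable job1.sbatch)                  # submit job 1 and capture its ID
sbatch --dependency=afterok:${jobid} job2.sbatch        # job 2 starts only if job 1 succeeds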

Scaling up a workflow: use SLURM job arrays (Matlab example): #!/bin/bash #SBATCH -N 1 #SBATCH -n 1 #SBATCH -p general #SBATCH -t 1-00:00:00 #SBATCH --array=1-30 matlab -nodesktop -nosplash -singleCompThread -r 'main('$SLURM_ARRAY_TASK_ID')'
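The same array mechanism works for any program; here is a hedged bash sketch where the program name and the input/output naming scheme (input_1.txt through input_30.txt) are assumptions for illustration.
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 02:00:00
#SBATCH --array=1-30
# each array task gets its own SLURM_ARRAY_TASK_ID (1..30 here) and uses it
# to select its own input and output files
./my_program input_${SLURM_ARRAY_TASK_ID}.txt > output_${SLURM_ARRAY_TASK_ID}.txt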

Submit an SNP job: use the Single Node Parallel partition. #!/bin/bash #SBATCH -N 1 #SBATCH -n 36 #SBATCH -p snp #SBATCH --qos=snp_access #SBATCH -t 1-00:00:00 #SBATCH --array=1-30 mpirun myParallelCode Workflow tips: First use an interactive session in the interact partition to debug your code. Then submit a few test jobs to the interact partition using a SLURM submission script to make sure the job submission is correct. When you've verified everything is working as expected, run the actual jobs in the general

partition. Only ask for the resources you need; don't over-specify resources (particularly the memory limit!). Keep your working directory well organized. Demo. Lab exercises: sample workflow: cd ~ then cp -r /pine/scr/d/e/deep/walkthrough . Supplemental material: Longleaf general compute nodes.

Intel Xeon processors, E5-2680 v3. Haswell microarchitecture (22 nm lithography). Dual socket, 12-core (24 cores per node). Hyperthreading is on, so 48 schedulable threads.

2.50 GHz for each core. DDR4 memory, 2133 MHz. 9.6 GT/s QPI. 256 GB memory. 30 MB L3 cache. 1x 400 GB SSD. 2x 10 Gbps Ethernet. 120 W TDP. Longleaf big data nodes: Intel Xeon processors, E5-2643 v3. Haswell microarchitecture (22 nm lithography). Dual socket, 6-core (12 cores per node). Hyperthreading

is on, so 24 schedulable threads. 3.40 GHz for each core. DDR4 memory, 2133 MHz. 9.6 GT/s QPI.

256 GB memory. 20 MB L3 cache. 2x 800 GB SSD. 2x 10 Gbps Ethernet. 135 W TDP. Longleaf extreme memory nodes: Intel Xeon processors, E7-8867 v3. Haswell microarchitecture (22 nm lithography). Quad socket, 16-core (64 cores per node). Hyperthreading

is on, so 128 schedulable threads. 2.50 GHz for each core. DDR4 memory, 2133 MHz. 9.6 GT/s QPI. 3.0 TB memory. 45 MB L3 cache. 1.6 TB SSD. 2x 10 Gbps Ethernet.

165 W TDP. Longleaf GPU nodes, compute host: Intel Xeon processors, E5-2623 v4. Broadwell microarchitecture (14 nm lithography). Dual socket, 4-core (8 cores per node). Hyperthreading

is on, so 16 schedulable threads. 2.60 GHz for each core. DDR4 memory, 2133 MHz. 8.0 GT/s QPI. 64 GB memory. 10 MB L3 cache. No SSD. 2x 10 Gbps Ethernet. 85 W TDP. Longleaf GPU nodes, device: Nvidia GeForce GTX 1080 graphics

card. Pascal architecture. 8 GPUs per node. 2560 CUDA cores per GPU. 1.73 GHz clock. 8 GB memory. 320 GB/s memory bandwidth. PCIe 3.0 bus. Tiered storage on Longleaf. Getting an account: to apply for your Longleaf or cluster account, simply go to http://onyen.unc.edu

and use Subscribe to Services. Login to Longleaf: use ssh to connect, either ssh longleaf.unc.edu or ssh <onyen>@longleaf.unc.edu. For SSH Secure Shell with Windows, see http://shareware.unc.edu/software.html For use with X-Windows display:

ssh -X longleaf.unc.edu or ssh -Y longleaf.unc.edu. Off-campus users (i.e. domains outside of unc.edu) must use a VPN connection. X Windows: an X windows server allows you to open a GUI from a remote machine (e.g. the cluster) onto your desktop. How you do this varies by your OS. Linux - already installed. Mac - get XQuartz, which is open source: https://www.xquartz.org/

MS Windows - need an application such as X-Win32; see http://help.unc.edu/help/research-computing-application-x-win32/ File transfer: different platforms have different commands and applications you can use to transfer files between your local machine and Longleaf. Linux - scp, rsync. scp: https://kb.iu.edu/d/agye rsync: https://en.wikipedia.org/wiki/Rsync

Mac - scp, Fetch. Fetch: http://software.sites.unc.edu/shareware/#f Windows - SSH Secure Shell Client, MobaXterm. SSH Secure Shell Client: http://software.sites.unc.edu/shareware/#s MobaXterm: https://mobaxterm.mobatek.net/ Globus is good for transferring large files or large numbers of files; a client is available for Linux, Mac, and Windows. http://help.unc.edu/?s=globus https://www.globus.org/
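As a sketch of command-line transfers from a Linux or Mac machine, with the local names and remote paths as placeholders following the conventions above:
scp mydata.csv <onyen>@longleaf.unc.edu:/nas/longleaf/home/<onyen>/                    # copy one file to your home directory
rsync -av myproject/ <onyen>@longleaf.unc.edu:/pine/scr/<o>/<n>/<onyen>/myproject/     # mirror a directory to scratch, sending only changes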

Links: Longleaf page with links: http://help.unc.edu/subject/research-computing/longleaf SLURM examples: http://help.unc.edu/help/getting-started-example-slurm-on-longleaf Linux cheat sheet: https://files.fosswire.com/2007/08/fwunixref.pdf Basic Linux tutorial:

http://faculty.washington.edu/rjl/uwhpsc-coursera/unix.html Advanced Linux tutorial (very well written!): http://linuxcommand.org/lc3_learning_the_shell.php Emacs reference: https://www.gnu.org/software/emacs/refcards/pdf/refcard.pdf
