Thursday, September 29, 2016

Job submission on HPC using SLURM

Load Spark and the other required modules:
module load zlib/1.2.8 openssl/1.0.1e java/1.8.0_31 protobuf/2.5.0 myhadoop/Sep012015 spark/1.5.1

First, set the environment variable that points Hadoop at a per-job scratch directory:
export HADOOP_OLD_DIR=/scratch/scratch3/${USER}_hadoop.$SLURM_JOB_ID

Then submit the job script:
sbatch run.sh
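The contents of run.sh are site-specific and not shown in the original post. As a rough sketch only: the job name, node counts, time limit, output file, and the final spark-submit command below are all assumptions, and the module list and HADOOP_OLD_DIR export mirror the steps above.

```shell
#!/bin/bash
#SBATCH --job-name=spark-job        # hypothetical job name
#SBATCH --nodes=2                   # node count is an assumption; adjust for your cluster
#SBATCH --time=01:00:00             # wall-clock limit is an assumption
#SBATCH --output=spark-%j.out       # %j expands to the SLURM job ID

# Load the same modules as in the interactive setup above
module load zlib/1.2.8 openssl/1.0.1e java/1.8.0_31 protobuf/2.5.0 myhadoop/Sep012015 spark/1.5.1

# Per-job Hadoop scratch directory, as set above
export HADOOP_OLD_DIR=/scratch/scratch3/${USER}_hadoop.$SLURM_JOB_ID

# Configure and start the per-job Spark/Hadoop cluster here (site-specific),
# then run the application; the script name is a placeholder:
# spark-submit my_analysis.py
```

Submitting this script with sbatch run.sh queues it; SLURM sets $SLURM_JOB_ID when the job starts, so the scratch directory name is unique to each run.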

View your pending jobs (replace xxxxxx with your username):
squeue -u xxxxxx -t pending

View your running jobs (replace xxxxxx with your username):
squeue -u xxxxxx -t running

View all running jobs:
squeue -t running

Cancel a job (replace 12345 with the job ID):
scancel 12345