
...

Atlas is a Linux-based high-performance computing (HPC) cluster supported by the College of Technology Services - Engineering. This cluster is available to all research groups in Engineering.

...

  • A login node - 16-core Xeon with 64GB RAM

  • Two storage nodes - 137 TB storage

  • 133 Compute Nodes

| Partition | Nodes | Hardware                            | Owner                 |
|-----------|-------|-------------------------------------|-----------------------|
| enrgy     | 15    | 2X E5-2660 v2 (20 cores), 24GB RAM  | Energy Institute      |
| Isen      | 10    | 1X E5-2276G (6 cores), 64GB RAM     | ISEN                  |
| dwc       | 13    | 2X E5-2660 v4 (28 cores), 256GB RAM | Turbomachinery Lab    |
| bigmo     | 11    | 2X E5-2660 v3 (20 cores), 256GB RAM | Turbomachinery Lab    |
| ada       | 72    | 2X E5-2670 v2 (20 cores), 64GB RAM  | Tamamis Group/General |
| normal    | 12    | 2X E5-2670 v2 (20 cores), 64GB RAM  | General Use/default   |
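
You can list these partitions and the current state of their nodes with Slurm's sinfo command (a standard Slurm command; the partition names are those in the table above):

Code Block
languagebash
# List all partitions with node counts and node states
sinfo

# Show only the ada partition
sinfo -p ada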

...

The cluster can run either batch or interactive jobs. Batch jobs run in the background and require no user interaction. Interactive jobs run in a terminal window. Both job types require the user to specify the resources needed for the job: the number of cores, the number of nodes, the amount of RAM, and the job run time. When you submit your job request, the Slurm scheduler allocates the requested resources to your job. Your job then owns these resources, and they are not available to other jobs until your job completes.
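
For example, a batch script is submitted with sbatch, and queued or running jobs can be monitored with squeue (standard Slurm commands; myjob.sh and the job ID 12345 are placeholders):

Code Block
languagebash
# Submit the batch script; Slurm prints the assigned job ID
sbatch myjob.sh

# List your queued and running jobs
squeue -u $USER

# Cancel a job by ID if it is no longer needed
scancel 12345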

...

Code Block
languagebash
#!/bin/sh

#SBATCH --job-name=matlab_test         # Job name
#SBATCH --mail-type=ALL                # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=NetID@tamu.edu     # Where to send mail
#SBATCH --nodes=1                      # Use one node
#SBATCH --ntasks=1                     # Run a single task
#SBATCH --cpus-per-task=1              # Number of CPU cores per task
#SBATCH --time=01:05:00                # Time limit hrs:min:sec
#SBATCH --output=parallel_%j.out       # Standard output and error log
#SBATCH --partition=normal             # Partition/Queue to run in (normal IS DEFAULT)

#Load matlab environment module
module load MATLABR2021A

# set working directory - if you do not set this your working directory will
# be the directory you submitted the script from
# cd /working/directory
# run matlab job without GUI
matlab -nodisplay < matlab_parfor.m

To use the ada queue, you will need to specify a quality of service (qos).  This affects the prioritization of resources in the ada queue.  An example script with the qos specification is below:

Code Block
languagebash
#!/bin/sh
#SBATCH --job-name=matlab_test         # Job name
#SBATCH --mail-type=ALL                # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=NetID@tamu.edu     # Where to send mail
#SBATCH --nodes=1                      # Use one node
#SBATCH --ntasks=1                     # Run a single task
#SBATCH --cpus-per-task=1              # Number of CPU cores per task
#SBATCH --time=01:05:00                # Time limit hrs:min:sec
#SBATCH --output=parallel_%j.out       # Standard output and error log
#SBATCH --partition=ada                # Partition/Queue to run in
#SBATCH --qos=atlas-ada


# Load the MATLAB environment module
module load MATLABR2021A

# set working directory - if you do not set this your working directory will
# be the directory you submitted the script from
# cd /working/directory

# run matlab job without GUI
matlab -nodisplay < matlab_parfor.m
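
After submitting, you can verify that the job was assigned the intended partition and QOS with scontrol (a standard Slurm command; replace 12345 with the job ID printed by sbatch):

Code Block
languagebash
# Show the partition and QOS assigned to job 12345
scontrol show job 12345 | grep -E 'Partition|QOS'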

It is extremely important that jobs request only the resources they require. Requesting multiple cores for a single-threaded program does not improve the job's performance; it only reduces the resources available to other users.
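
To see how much of a finished job's request was actually used, the sacct accounting command can help you size future requests (sacct is standard Slurm, assuming job accounting is enabled on this cluster; 12345 is a placeholder job ID):

Code Block
languagebash
# Compare allocated CPUs against actual CPU time and peak memory
sacct -j 12345 --format=JobID,AllocCPUS,TotalCPU,Elapsed,MaxRSS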

...

Once on the compute node, you can use the module load command and run software interactively, as in the sketch below.
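
For example, to run MATLAB interactively in the session (a sketch using the MATLABR2021A module from the batch examples above):

Code Block
languagebash
# Load the MATLAB environment module
module load MATLABR2021A

# Start MATLAB in the terminal without the GUI
matlab -nodisplay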

To run in the ada queue/partition, you will need to add the -q and -p qualifiers.

Code Block
languagebash
srun --cpus-per-task=2 -p ada -q atlas-ada --pty --x11=first bash

Example Scripts

Example scripts for running batch jobs are located in the /mnt/share/scripts directory.
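
A typical workflow (a sketch using standard shell commands; charmmi.sh is one of the example scripts listed below) is to copy a script to your home directory, adapt it, and submit it:

Code Block
languagebash
# See which example scripts are available
ls /mnt/share/scripts

# Copy one to your home directory and edit it for your job
cp /mnt/share/scripts/charmmi.sh ~/
nano ~/charmmi.sh   # or your editor of choice

# Submit the customized script
sbatch ~/charmmi.sh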

...

  • AdHocTrex - Instructions are in the following file:  /mnt/share/sw/README-AdHocTrex

  • Autodock Vina 

  • CHARMM - an example script file is located at /mnt/share/scripts/charmmi.sh

  • CPLEX_2010

  • dligand2

  • Fluent 2021R1

  • Fread - login node only

  • GAMS

  • GROMACS - an example script file is located at /mnt/share/scripts/namd_2.14-smp-mpi.sh

  • Matlab 2021a - available without module

  • MGL Tools (AutoDock Tools)

  • Modeller

  • NAMD - an example script file is located at /mnt/share/scripts/namd_2.14-smp-mpi.sh

  • Open Babel - available without module

  • R - available without module

  • Scwrl4

  • Smina - Vinardo is implemented as an optional scoring function.

  • VMD - only on login node

  • Wordom

  • Zdock

...