Olympus CPU Research User Information

This page provides basic information for researchers using the Olympus cluster for CPU-based research.

Requirements to Log In to Olympus

  1. You will need PI approval to have your account enabled in the research QOS groups.

    1. Send an email to linux-engr-helpdesk@tamu.edu.

    2. Additional instructions on getting PI approval will be provided in the ticket.

  2. A scratch working directory will be set up when your access is approved. Your directory is mounted at /mnt/shared-scratch/<your_PI>/<your-netid>. THIS DIRECTORY IS NOT BACKED UP!

  3. You will also have access to your research group's network storage directory. This is mounted at /mnt/research/<your_PI>, which contains a Shared directory and a Students/<your-netid> directory.

  4. If you are using X11 interactive programs, you will need an SSH/X11 client on your computer.

    1. On Windows systems, install MobaXterm Personal Edition.

    2. PuTTY and Xming are also options for Windows users.

    3. On a Macintosh, install the XQuartz software. Detailed instructions for accessing Olympus from off campus can be found here:

Graphical Applications on the Olympus Cluster and ECEN Interactive Machines from Off-Campus

How to log in to Olympus

  1. Open MobaXterm on Windows or the Terminal program on a Mac

  2. ssh to olympus.ece.tamu.edu, i.e. ssh -Y <netid>@olympus.ece.tamu.edu (replace <netid> with your NetID)

  3. Log in using your NetID password
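Once you are logged in, a quick check like the following can confirm that X11 forwarding and your scratch directory are working. This is a minimal sketch; the xclock test assumes the standard X11 demo programs are installed on the login node, which may not be the case.

echo $DISPLAY                                   # should print something like localhost:10.0 if X11 forwarding is active
xclock &                                        # a small clock window should appear on your local screen
cd /mnt/shared-scratch/<your_PI>/<your-netid>   # replace the placeholders with your PI and your NetID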

Software available on CPU resources

The following vendors have software installed on the CPU nodes: Cadence, MathWorks, Mentor Graphics, Synopsys, and Xilinx.

If you need additional software, please email linux-engr-helpdesk@tamu.edu with the specific software required.
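If the vendor tools are made available through environment modules (this is an assumption; the exact setup on Olympus may differ, so check with the helpdesk), the standard module commands let you see what is installed:

module avail                 # list the software modules available on the node
module load matlab           # hypothetical example: load a MATLAB module, if one is provided
which matlab                 # confirm the matlab command is now on your PATH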

How to access CPU resources

IT IS EXTREMELY IMPORTANT THAT YOU ALLOCATE RESOURCES PROPERLY. If the program you are running is single-threaded, allocate only 1 CPU to the job. If you are unsure of the processor requirements for your job, please contact the Linux helpdesk at linux-engr-helpdesk@tamu.edu.

IF YOUR JOB REQUESTS MORE RESOURCES THAN IT CAN UTILIZE, IT MAY BE TERMINATED. Please remember that Olympus is a limited resource shared by many users.
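One way to check whether a finished job actually used the cores it asked for is the sacct accounting command, assuming job accounting is enabled on the cluster. In this sketch, <jobid> is a placeholder for your own job ID:

sacct -j <jobid> --format=JobID,AllocCPUS,TotalCPU,Elapsed
# If TotalCPU is far below AllocCPUS x Elapsed, the job requested more cores than it used.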

SLURM Instructions

The slurm program controls resource allocation on the Olympus cluster.
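For example, you can see the state of the CPU research nodes and any queued jobs with the standard slurm query commands:

sinfo -p cpu-research        # show node availability in the cpu-research partition
squeue -p cpu-research       # show running and pending jobs in that partition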

Interactive Jobs

The following commands instruct slurm to open a bash shell on one of the CPU-Research nodes.

load-research - requests an interactive session on the new EPYC nodes with one dedicated core. This is recommended for single-threaded Cadence and Synopsys jobs.

load-research2 - requests an interactive session on the new EPYC nodes with two dedicated cores.
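For example, a typical single-core interactive session looks like the following sketch; the tool you run inside the session is up to you.

load-research        # wait for the shell prompt on a compute node
# ... run your single-threaded tool here ...
exit                 # log out of the compute node so the core is released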

If your interactive job is multi-threaded and can make use of additional cores, you can use the command below

srun --cpus-per-task=2 -J 2-core -p cpu-research -q research --pty --x11=first bash

The srun command tells slurm you want to run a command on a compute node.

--cpus-per-task=2 sets the number of cores for this job to 2.

-J 2-core sets the slurm job name to 2-core. Job names cannot contain spaces.

-p cpu-research tells slurm to use the nodes in the cpu-research partition.

-q research tells slurm to use the research QOS.

--pty - connects the job's stdout and stderr to your current session.

--x11=first - enables you to run X11 programs (Matlab, etc.). Not needed if you are not using X11.

bash - tells slurm to start a bash shell on a compute node.
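The same pattern can be adapted to other core counts. For example, a four-core interactive session would look like the sketch below; only request this many cores if your program can actually use them.

srun --cpus-per-task=4 -J 4-core -p cpu-research -q research --pty --x11=first bash
exit        # type exit when you are done so the cores are returned to the pool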

Batch Jobs

Batch jobs run in the background with no interactive shell. To run one, you create a job script and submit it to the queue using the sbatch command. An example script file for a Matlab job would look like the following. The lines starting with #SBATCH are slurm directives, not ordinary shell comments.

#!/bin/sh
#SBATCH --job-name=ecen_test            # Job name
#SBATCH --mail-type=NONE                # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=Your_Email@address  # Where to send mail
#SBATCH --nodes=1                       # Use one node - Multiple node jobs require MPI
#SBATCH --ntasks=1                      # Run a single task
#SBATCH --cpus-per-task=2               # Number of CPU cores per task
#SBATCH --time=01:05:00                 # Time limit hrs:min:sec
#SBATCH --output=parallel_%j.out        # Standard output and error log
#SBATCH --partition=cpu-research        # Partition/Queue to run in
#SBATCH --qos=research                  # QOS to use

# Set working directory if different than the directory you run the script from
# cd /working/directory

echo "Running program on $SLURM_CPUS_ON_NODE CPU cores"
time matlab -nodisplay < matlab_parfor.m

The #SBATCH lines tell slurm the resources you need for your job.

If you wish to receive email about the status of your job, please change the following two lines. First, set the mail type to ALL:

#SBATCH --mail-type=ALL

and enter your email address in the following line

#SBATCH --mail-user=Your_Email@address

After you have created your job script, submit it to the queue using the sbatch command, as in the following line:

sbatch script.sh
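After submitting, you can follow the job with the standard slurm commands; replace <jobid> with the job number printed by sbatch:

squeue -u $USER                 # list your running and pending jobs
scancel <jobid>                 # cancel a job you no longer need
less parallel_<jobid>.out       # view the output file named by the --output line above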