This document details how to use the ECEN Olympus cluster to remotely access the software used in the ECEN Zachry Linux labs and the cluster's GPU resources.

What is the Cluster?

The Olympus cluster consists of the login node (olympus.ece.tamu.edu), six non-GPU compute nodes, and five GPU compute nodes.  The cluster runs scheduling software that distributes users across the compute nodes based on their course requirements, ensuring each user receives the resources needed for their labs.

Table of Contents

...

Cluster Usage Limitations

...

  1. Each user is allowed two simultaneous interactive sessions on the non-GPU compute nodes.  In other words, users can log in to Olympus over ssh in two separate sessions and run the proper load-ecen-### command in each one.

  2. Each interactive session is limited to a maximum of 12 hours.
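For example, the two allowed sessions might look like the transcript below (the course number 449 is just an illustration; use your own course's command):

```shell
# Terminal 1: first interactive session
ssh -Y netid@olympus.ece.tamu.edu
load-ecen-449        # lands on a non-GPU compute node

# Terminal 2: second interactive session, opened in another window
ssh -Y netid@olympus.ece.tamu.edu
load-ecen-449        # a third simultaneous session would exceed the limit
```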

...

  1. On Windows systems, install MobaXterm Personal Edition.  On Macintosh, install the XQuartz software.  PuTTY and Xming are also an option for Windows users. Detailed instructions for accessing Olympus from off campus can be found here:

Graphical Applications on the Olympus Cluster and ECEN Interactive Machines from Off-Campus

If you are on campus, you can connect directly to Olympus from your personal computer.

  1. Open MobaXterm on Windows or the Terminal program on Mac.

  2. ssh to olympus.ece.tamu.edu, e.g. ssh -Y netid@olympus.ece.tamu.edu (replace netid with your NetID).

  3. Log in using your NetID password

  4. Next, you will need to connect to an available compute node.  Enter the proper load-ecen-### command at the prompt and hit return. The command that you will run depends on which course you are taking. The following are valid commands:

    1. load-ecen-248

    2. load-ecen-350

    3. load-ecen-403

    4. load-ecen-403-img

    5. load-ecen-425

    6. load-ecen-449

    7. load-ecen-454

    8. load-ecen-468

    9. load-ecen-474

    10. load-ecen-475

    11. load-ecen-651

    12. load-ecen-655

    13. load-ecen-676

    14. load-ecen-704

    15. load-ecen-714

    16. load-ecen-749

  5. Source the same file that you use in the Zachry Linux Labs.
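The steps above can be sketched as a single session. The course number and the setup-file path below are placeholders, not actual values; substitute your own course's command and the file you source in the labs:

```shell
# From MobaXterm (Windows) or Terminal (Mac), with X forwarding:
ssh -Y netid@olympus.ece.tamu.edu    # log in with your NetID password

# On the login node, connect to a compute node for your course:
load-ecen-449                        # placeholder: use your course's command

# On the compute node, source the same setup file you use in the labs
# (placeholder path -- use your course's actual file):
source /path/to/your/course/setup.sh
```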

How to start a non-interactive (batch) job

...

Jobs are submitted using a script file.  An example script file is located at:

  /mnt/lab_files/ECEN403-404/submit-gpu.sh

This file has comment lines detailing what each command does.  Copy this file to your home directory and update it to match your virtual environment and program. Once this has been done, submit the script to the scheduler using the command: sbatch name_of_shell_file.sh. If you did not change the name of the script file, the command would be sbatch submit-gpu.sh. You can check the status of your job using the command qstat or squeue.
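A submit script for the Slurm scheduler generally looks like the sketch below. This is not the actual contents of submit-gpu.sh; the job name, resource requests, virtual environment path, and program name are all placeholders to show the overall shape:

```shell
#!/bin/bash
#SBATCH --job-name=my-gpu-job            # name shown by squeue (placeholder)
#SBATCH --output=my-gpu-job.%j.log       # log file; %j expands to the job ID
#SBATCH --gres=gpu:1                     # request one GPU (placeholder)
#SBATCH --time=02:00:00                  # wall-clock limit (placeholder)

# Activate your virtual environment (placeholder path):
source ~/venv/bin/activate

# Run your program (placeholder name):
python my_program.py
```

You would then submit it with sbatch as described above, e.g. sbatch submit-gpu.sh.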

You can observe the progress of your job by checking the log files that are generated.  These files are updated as your program runs.
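A typical way to monitor a running job, assuming a %j-style log name like the placeholder used above:

```shell
# List your jobs in the queue:
squeue -u $USER

# Follow a job's log file as it is written
# (placeholder file name -- use the log name from your script):
tail -f my-gpu-job.12345.log
```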

Instructions for Using Singularity Containers for GPU and Specialty Programs on Olympus

See Singularity Containers on Olympus GPU Nodes for instructions.