...
The Olympus cluster consists of the login node (olympus.ece.tamu.edu), six non-GPU compute nodes, and three GPU compute nodes. Scheduling software distributes users across the compute nodes according to their course requirements, ensuring each user receives the resources needed for their labs.
Cluster Usage Limitations
...
To ensure resources are available to all students, the following limitations are enforced.
Each user is allowed two simultaneous interactive sessions on the non-GPU compute nodes. In other words, you can log in to Olympus with ssh in two different sessions and run the proper load-ecen-### command in each ssh session.
Each interactive session is limited to a maximum of 12 hours.
How to Use the Cluster
On Windows systems, install the MobaXTerm Personal Edition. On Macintosh, install the XQuartz software. PuTTY and Xming are also an option for Windows users.
Open MobaXTerm on Windows or the Terminal program on Mac.
ssh to olympus.ece.tamu.edu, i.e. ssh -Y netid@olympus.ece.tamu.edu (be sure to replace netid with your NetID).
Log in using your NetID password.
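For reference, a typical login from your local terminal looks like the following minimal sketch (netid is a placeholder for your own NetID):

ssh -Y netid@olympus.ece.tamu.edu   # -Y enables X11 forwarding for graphical tools
# Enter your NetID password when prompted; you will land on the login node.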
Next, you will need to connect to an available compute node. Enter the proper load-ecen-### command at the prompt and hit return. The command that you will run depends on which course you are taking. The following are valid commands:
load-ecen-248
load-ecen-350
load-ecen-403
load-ecen-403-img
load-ecen-425
load-ecen-449
load-ecen-454
load-ecen-468
load-ecen-474
load-ecen-475
load-ecen-651
load-ecen-655
load-ecen-676
load-ecen-704
load-ecen-714
load-ecen-749
Source the same file that you use in the Zachry Linux Labs.
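As an illustration, the sequence after logging in might look like the sketch below. The course command and setup file here are placeholders; use the load-ecen-### command for your own course and the same setup file you source in the Zachry Linux Labs:

load-ecen-449               # placeholder; connects you to a compute node for your course
source my_course_setup.sh   # placeholder name for your Zachry Linux Labs setup file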
...
How to start a non-interactive (batch) job
Once you have set up your environment and debugged your programs in the interactive session, you can submit a job to run in batch mode. These jobs run in the background on the cluster and do not require an active terminal session once submitted.
The GPU queue has the following limitations:
Maximum of 8 CPU cores per job
Maximum of 1 GPU per job
Maximum of 1 job running per user (additional jobs can be queued in the system)
Maximum runtime of 36 hours per job
Jobs are submitted using a script file. An example script file is located at:
/mnt/lab_files/ECEN403-404/submit-gpu.sh
This file has comment lines detailing what each command does. Copy this file to your home directory and update it to match your virtual environment and program. Once this has been done, submit the script to the scheduler using the command sbatch name_of_shell_file.sh. If you did not change the name of the script file, the command would be sbatch submit-gpu.sh. You can check the status of your job using the command qstat.
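For orientation, a submission script for this queue might look like the following sketch. This is not the contents of submit-gpu.sh; it assumes the scheduler is Slurm (implied by the use of sbatch), uses standard #SBATCH directives matched to the queue limits above, and the environment and program paths are placeholders:

#!/bin/bash
#SBATCH --job-name=gpu-job          # name shown in the queue
#SBATCH --output=gpu-job-%j.log     # log file; %j expands to the job ID
#SBATCH --ntasks=1                  # run a single task
#SBATCH --cpus-per-task=8           # queue limit: at most 8 CPU cores per job
#SBATCH --gres=gpu:1                # queue limit: at most 1 GPU per job
#SBATCH --time=36:00:00             # queue limit: at most 36 hours

# Placeholder paths: activate your virtual environment, then run your program.
source ~/myenv/bin/activate
python ~/myproject/train.py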
You can observe the progress of your job by checking the log files that are generated. These files are updated as your program runs.
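For example, assuming the log-file naming from the sketch above (the actual name depends on your script's output setting), you can follow a log live with:

tail -f gpu-job-12345.log   # 12345 is a placeholder job ID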
Instructions for Using Singularity Containers on Olympus
...