...
What is the Cluster
The Olympus cluster consists of the login node (olympus.ece.tamu.edu), six non-GPU compute nodes and five GPU compute nodes. The cluster has software that ensures users receive the resources needed for their labs by distributing users across the compute nodes based on their course requirements. There is limited software installed on the Olympus head node.
Each of the six compute nodes has dual Xeon E5-2650 v3 CPUs and 256 GB of RAM.
Three of the GPU nodes have dual Xeon Gold 6130 CPUs with 392 GB of RAM and four Nvidia V100 GPUs.
Two of the GPU nodes have dual Xeon Gold 6326 CPUs with 256 GB of RAM and four Nvidia A100 GPUs.
Cluster Usage Limitations
To ensure resources are available to all students, the following limitations are enforced.
Non-GPU limitations:
Undergraduate users (academic) are allowed two simultaneous interactive sessions on the non-GPU compute nodes. Users can log in to Olympus using ssh with two different sessions and run the proper load-ecen-### command in each ssh session. Each interactive session is limited to a maximum of 12 hours.
Graduate users (academic) are allowed to use up to eight cores on the non-GPU compute nodes. Users can log in to Olympus using ssh with four different sessions and run the proper load-ecen-### command in each ssh session. Each interactive session is limited to a maximum of 12 hours.
Research users are allowed to use up to 10 cores on the non-GPU compute nodes. Each job is limited to a maximum of 48 hours.
GPU Limitations
GPU nodes are available for faculty and students for approved instructional and research use. If you need GPU access, please have your professor contact the Linux support team.
Undergraduate users are limited to using 8 CPU cores and 1 GPU.
Graduate/research users are limited to using a total of 32 CPU cores and 4 GPUs.
How to Use the Cluster
Requirements to Log In to Olympus
You will need an SSH/X11 client on your computer.
On Windows systems, install MobaXterm Personal Edition.
PuTTY and Xming are also an option for Windows users.
On Macintosh, install the XQuartz software. Detailed instructions for accessing Olympus from off campus can be found here:
Graphical Applications on the Olympus Cluster and ECEN Interactive Machines from Off-Campus
How to log in to Olympus
Open MobaXterm on Windows or the Terminal program on Mac.
ssh to olympus.ece.tamu.edu, i.e. ssh -Y netid@olympus.ece.tamu.edu (replace netid with your NetID).
Log in using your NetID password.
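Before loading a course environment, you can optionally confirm that X forwarding is working. This check assumes a small X client such as xclock is available on the machine you are logged in to; if it is not installed, skip the check.
xclock
If a clock window appears on your local screen, graphical applications will display correctly over your ssh connection.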
For non-GPU academic users, you will need to connect to an available compute node. Enter the proper load-ecen-### command at the prompt and hit return. The command that you run depends on which course you are taking. The following are valid commands:
load-ecen-248
load-ecen-350
load-ecen-403
load-ecen-425
load-ecen-449
load-ecen-454
load-ecen-468
load-ecen-474
load-ecen-475
load-ecen-620
load-ecen-625
load-ecen-651
load-ecen-655
load-ecen-676
load-ecen-680
load-ecen-704
load-ecen-714
load-ecen-720
load-ecen-749
Source the same file that you use in the Zachry Linux Labs.
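As an example, a student enrolled in ECEN 449 (used here purely for illustration; run the command that matches your own course) would start an interactive session like this:
ssh -Y netid@olympus.ece.tamu.edu
load-ecen-449
The load command should place you in a shell on one of the non-GPU compute nodes, where you can then source your course setup file and run the course tools, subject to the session limits listed above.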
For CPU research users, the following interactive load commands are available.
load-2core - creates a 2-core job on a CPU node
load-4core - creates a 4-core job on a CPU node
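For example, a research user who wants a four-core interactive session might run the following (a sketch only; the node you land on and the prompt you see will vary):
load-4core
After the job starts you are given a shell on a CPU compute node with four cores reserved, and any multi-threaded work (for example, make -j4) should be run inside that session, within the 48-hour job limit described above.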
For GPU users, see the instructions below on setting up containers using Singularity. Singularity is similar to Docker and allows you to create custom environments for your GPU jobs, including running a different version of Linux inside the container.
Instructions for Using Singularity Containers for GPU and specialty programs on Olympus
Singularity Containers on Olympus GPU Nodes
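For illustration only, a typical Singularity workflow on a GPU node looks roughly like the following; the TensorFlow image and script name are placeholders, and the Olympus-specific steps are in the linked instructions above:
singularity pull docker://tensorflow/tensorflow:latest-gpu
singularity exec --nv tensorflow_latest-gpu.sif python3 my_training_script.py
The --nv flag makes the node's Nvidia GPUs visible inside the container, and singularity shell --nv can be used instead of exec for an interactive shell in the same environment.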
Once you have set up and debugged your environment and programs in an interactive GPU session, you can submit a job to run in batch mode.
How to start a non-interactive (batch) job
These jobs run in the background on the cluster and do not require an active terminal session once submitted.
...