
...

The Olympus cluster consists of the login node (olympus.ece.tamu.edu), eight non-GPU compute nodes, and five GPU compute nodes. The cluster's scheduling software ensures users receive the resources needed for their labs by distributing users across the compute nodes based on their course requirements. Only limited software is installed on the Olympus head node.
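For reference, connecting and setting up a course environment looks like the sketch below. The hostname comes from above; "netid" is a placeholder for your own TAMU NetID, and ### stands for the course number used by your section's load command.

    # Open an SSH session to the Olympus login node
    ssh netid@olympus.ece.tamu.edu

    # After logging in, run the load command for your course
    # (replace ### with your course number); per this page it starts
    # your interactive session on a compute node
    load-ecen-###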

...

Five of the non-GPU compute nodes have dual Xeon E5-2650 v3 CPUs and 256GB of RAM each.

...

Three of the GPU nodes have dual Xeon Gold 6130 CPUs with 392GB of RAM and four Nvidia V100 GPUs each.

...

Nodes 1-5: PowerEdge R730xd - dual Xeon E5-2650 v3, 20 cores (40 with HT) per node, 256GB RAM (100 cores total)

Nodes 6-8: PowerEdge R6525 - dual AMD EPYC 7443, 48 cores (96 with HT) per node, 256GB RAM (144 cores total)

Nodes 9-11: PowerEdge C4140 - dual Xeon Gold 6130, 32 cores (64 with HT) per node, 196GB RAM, 4 Tesla V100s per node (96 cores and 12 V100s total)

Nodes 12-13: PowerEdge R750xa - dual Xeon Gold 6326, 32 cores (64 with HT) per node, 256GB RAM, 4 Nvidia A100s per node (64 cores and 8 A100s total)
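The per-node hardware that Slurm itself reports can be checked from the login node with a standard sinfo query; the sketch below uses only generic Slurm options, nothing Olympus-specific.

    # List every node with its partition, CPU count, memory and GPUs (GRES)
    sinfo --Node --Format=nodelist,partition,cpus,memory,gres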

Cluster Usage Limitations

To ensure resources are available to all students, the following limitations are enforced.

Nodes are grouped into partitions. The following partitions are configured:

CPU: nodes 1-8. Nodes 1-5 have academic priority (jobs will run on these nodes first).

CPU-RESEARCH: nodes 6-8. Research jobs run on these nodes; requires PI approval.

GPU: nodes 9-13, for coursework and research; requires PI approval.
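The partition layout can be confirmed from the login node with sinfo; this is plain Slurm usage, and the partition names are the ones listed above.

    # Summary of all partitions and the nodes assigned to each
    sinfo --summarize

    # Details for a single partition, e.g. the GPU partition
    sinfo --partition=GPU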

Resource allocation is set using Quality of Service (QOS) in Slurm.

QOS name              | Hardware limits        | Default time limit | Hard time limit | Partition
Ugrad (academic)      | 4 CPU cores            | 12 hours           | 12 hours        | CPU
Grad (academic)       | 6 CPU cores            | 12 hours           | 12 hours        | CPU
Research              | 12 CPU cores           | 48 hours           | 48 hours        | CPU-RESEARCH
Ecen-ugrad-gpu        | 8 CPU cores, 1 GPU     | 36 hours           | 36 hours        | GPU
Olympus-research-gpu  | 32 CPU cores, 4 GPUs   | 4 days             | 4 days          | GPU
Olympus-research-gpu2 | 160 CPU cores, 20 GPUs | 7 days             | 21 days         | GPU

Link for Academic (ECEN lab users). Link for Research users
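To illustrate how these limits map onto a job submission, the batch script below requests a single GPU under the undergraduate GPU QOS. It is a sketch only: the job name and program are placeholders, and the exact QOS and partition names (including case) configured in Slurm should be verified first, for example with sacctmgr show qos format=name,maxwall,maxtres.

    #!/bin/bash
    #SBATCH --job-name=gpu-lab       # placeholder job name
    #SBATCH --partition=GPU          # partition from the table above
    #SBATCH --qos=ecen-ugrad-gpu     # QOS from the table above (verify exact spelling)
    #SBATCH --cpus-per-task=8        # within the 8-CPU limit for this QOS
    #SBATCH --gres=gpu:1             # within the 1-GPU limit for this QOS
    #SBATCH --time=36:00:00          # within the 36-hour limit for this QOS

    # Replace with your own lab or research program
    ./my_program

Submit the script with sbatch and check its state with squeue -u $USER.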

Non-GPU limitations:

  1. Undergraduate users (academic)

    1. are allowed two simultaneous interactive sessions on the non-GPU compute nodes. Users can log in to Olympus over ssh in two separate sessions and run the proper load-ecen-### command in each session (see the interactive-session sketch after this list).

    2. Each interactive session is limited to a maximum of 12 hours.

  2. Graduate users (academic)

    1. are allowed to use up to eight cores on the non-GPU compute nodes. Users can log in to Olympus over ssh in four separate sessions and run the proper load-ecen-### command in each session.

    2. Each interactive session is limited to a maximum of 12 hours.

  3. Research Users

    1. are allowed to use up to 10 cores on the non-GPU compute nodes. 

    2. Each job is limited to a maximum of 48 hours.
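For comparison, an interactive CPU session can also be requested directly through Slurm. The sketch below mirrors the undergraduate limits above (two cores per session, within the 4-core Ugrad QOS cap, and the 12-hour limit); the partition and QOS names are taken from the table earlier on this page and should be verified before use, and this does not replace the load-ecen-### commands described above.

    # Request an interactive shell on a non-GPU compute node for up to 12 hours
    srun --partition=CPU --qos=ugrad --cpus-per-task=2 --time=12:00:00 --pty bash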

...