...
The Olympus cluster consists of the login node (olympus.ece.tamu.edu), eight non-GPU compute nodes, and five GPU compute nodes. The cluster runs scheduling software that distributes users' jobs across the compute nodes based on each user's course and resource requirements, ensuring users receive the resources needed for their labs and research. Only a limited set of software is installed on the Olympus head node.
...
CPU-RESEARCH: Three nodes - reserved for research jobs - requires PI approval for access
GPU: Five nodes for projects and research - requires PI/Faculty approval for access
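To see which partitions are available and which nodes they contain, you can query Slurm directly from the login node. A minimal sketch (the partition names are assumed to match the list above; the exact names and case may differ on the system):

```bash
# Summary view: partitions, node counts, and time limits
sinfo -s

# Per-node detail for the GPU partition (assumed partition name "gpu")
sinfo -p gpu -N -l
```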
Resource allocation is set using Quality of Service (QOS) groups in Slurm.
| QOS name | Hardware Limits | Default Time Limit | Hard Time Limit | Partition |
| --- | --- | --- | --- | --- |
| ugrad (academic) | 4 CPU cores | 12 hours | 12 hours | CPU |
| grad (academic) | 6 CPU cores | 12 hours | 12 hours | CPU |
| research | 12 CPU cores | 48 hours | 48 hours | CPU-RESEARCH |
| ecen-ugrad-gpu | 8 CPU cores, 1 GPU | 36 hours | 36 hours | GPU |
| olympus-research-gpu | 32 CPU cores, 4 GPUs | 4 days | 4 days | GPU |
| olympus-research-gpu2 | 160 CPU cores, 20 GPUs | 7 days | 21 days | GPU |
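A job requests a QOS and partition through the standard Slurm sbatch directives. Below is a minimal sketch of a batch script assuming the ecen-ugrad-gpu QOS from the table above; the job name, partition name, and resource amounts are illustrative placeholders, not cluster defaults:

```bash
#!/bin/bash
#SBATCH --job-name=example-job    # placeholder job name
#SBATCH --partition=gpu           # assumed partition name; see the partition list above
#SBATCH --qos=ecen-ugrad-gpu      # QOS from the table above
#SBATCH --cpus-per-task=8         # stay within the 8-CPU limit for this QOS
#SBATCH --gres=gpu:1              # request the 1 GPU allowed by this QOS
#SBATCH --time=36:00:00           # stay within the 36-hour hard time limit
#SBATCH --output=%x-%j.out        # stdout/stderr file: jobname-jobid.out

# Replace with the actual work for your lab or research job
srun hostname
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`. If the requested QOS or partition is not permitted for your account, Slurm will reject the job at submission time.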
...