CRC NOTS Description
Introduction
NOTS (Night Owls Time-Sharing Service) is a batch-scheduled cluster for high-performance computing (HPC) and high-throughput computing (HTC). The system consists of 159 nodes with dual-socketed, hyperthreaded CPUs in HPE and Dell PowerEdge chassis. All nodes are interconnected with a 25 Gigabit Ethernet network, and most are also connected with either high-speed Omni-Path or InfiniBand HDR 100 for message-passing applications. A 300 TB VAST filesystem is attached to the compute nodes via Ethernet. The system supports a variety of workloads, including single-node, parallel, large-memory multi-threaded, and GPU jobs.
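Because NOTS is batch scheduled, work is normally submitted to SLURM rather than run interactively on a login node. The sketch below is a minimal single-node batch script; the job name and the `commons` partition are assumptions chosen only for illustration, so check `sinfo` on the cluster for the partitions actually available to your account.

```bash
#!/bin/bash
#SBATCH --job-name=hello-nots   # illustrative job name
#SBATCH --partition=commons     # partition name is an assumption; run "sinfo" to list real partitions
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=00:10:00

# Trivial payload to confirm the job landed on a compute node
echo "Running on $(hostname)"
```

Submit the script with `sbatch hello-nots.slurm` and monitor it with `squeue -u $USER`.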
Citation
If you use NOTS to support your research activities, please acknowledge (in publications, on your project web pages, …) our efforts to maintain this infrastructure for your use. An example acknowledgment follows; feel free to modify the wording for your specific needs, but please keep the essential information:
This work was supported in part by the NOTS cluster operated by Rice University's Center for Research Computing (CRC).
Node Configurations
| Node Type | Count | Hardware | Processor Type | Core Count | CPU Name | Base Freq. | System RAM | Allocatable RAM | Disk Size | High Speed Network | GPU Type | GPU Count | GPU Name | GPU Precision |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Compute | 52 | HPE XL170r | cascadelake | 40 | Intel Xeon Gold 6230 | 2.10 GHz | 192 GB | 182 GB | 960 GB | opath | - | - | - | - |
| Compute | 8 | Dell PowerEdge C6420 | cascadelake | 40 | Intel Xeon Gold 6230 | 2.10 GHz | 768 GB | 730 GB | 960 GB | opath | - | - | - | - |
| Compute | 4 | HPE XL170r | cascadelake | 40 | Intel Xeon Gold 6230 | 2.10 GHz | 1.5 TB | 1,461 GB | 960 GB | opath | - | - | - | - |
| Compute | 3 | Dell PowerEdge C6520 | icelake | 48 | Intel Xeon Gold 6336Y | 2.40 GHz | 256 GB | 250 GB | 960 GB | - | - | - | - | - |
| Compute | 60 | HPE ProLiant DL360 Gen11 | sapphirerapids | 96 | Intel Xeon Platinum 8468 | 2.10 GHz | 256 GB | 251 GB | 960 GB | ib | - | - | - | - |
| GPU | 1 | Dell PowerEdge R750xa | icelake | 48 | Intel Xeon Gold 6342 | 2.80 GHz | 256 GB | 250 GB | 480 GB | - | ampere | 4 | Tesla A40 | single |
| GPU | 3 | HPE XL675d | milan | 32 | AMD EPYC 7343 | 3.20 GHz | 512 GB | 503 GB | 240 GB | - | ampere | 8 | Tesla A40 | single |
| GPU | 16 | HPE XL190 | cascadelake | 40 | Intel Xeon Gold 6230 | 2.10 GHz | 192 GB | 182 GB | 960 GB | opath | volta | 2 | Tesla V100 | double |
| GPU | 12 | HPE ProLiant DL380a Gen11 | sapphirerapids | 96 | Intel Xeon Platinum 8468 | 2.10 GHz | 512 GB | 503 GB | 960 GB | ib | lovelace | 4 | Tesla L40S | single |
Notes for the table above:
- The Processor Type column shows the SLURM feature name for the CPU.
- All nodes contain hyperthreaded processors, so each core can execute two threads.
- The storage network for all nodes is 25 GbE.
- The High Speed Network column shows the SLURM feature name for the network: `opath` = Omni-Path, `ib` = InfiniBand HDR 100.
- The GPU Type column shows the SLURM "gres" name, specified as `gres=gpu:type` (see the example after this list).
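As a sketch of how the feature and gres names in the table are used, the script below requests two V100 GPUs on a cascadelake node. The `--constraint` and `--gres` values come directly from the table above; the job name and partition name are assumptions for illustration only.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example     # illustrative job name
#SBATCH --partition=commons        # partition name is an assumption; run "sinfo" to confirm
#SBATCH --constraint=cascadelake   # SLURM feature name from the Processor Type column
#SBATCH --gres=gpu:volta:2         # gres name from the GPU Type column: 2 x Tesla V100
#SBATCH --ntasks=1
#SBATCH --time=00:30:00

# Show the GPUs actually allocated to the job
nvidia-smi
```

Running `sinfo -o "%n %f %G"` prints each node's feature list and gres, so you can verify which combinations exist before submitting.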
Software Environment on the Nodes
- Operating system: Red Hat Enterprise Linux 9
- SLURM scheduler: version 24
- CUDA libraries: run `module spider CUDA` on the cluster to see the available versions, as shown below
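A typical workflow is to list the CUDA modules and then load one. The version string below is only a placeholder; substitute a version that `module spider CUDA` actually reports on NOTS.

```bash
# List the CUDA versions provided as modules
module spider CUDA

# Load one of the reported versions (the version here is a placeholder)
module load CUDA/12.4.0

# Confirm the toolkit is on the PATH
nvcc --version
```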