
CRC NOTS Description

Description of CRC's NOTS cluster.

Back to main "Getting Started" document

Introduction

NOTS (Night Owls Time-Sharing Service; history of the name) is a batch-scheduled cluster for high-performance computing (HPC) and high-throughput computing (HTC). The system consists of 159 nodes with dual-socket, hyperthreaded CPUs in HPE and Dell PowerEdge chassis. All nodes are interconnected with a 25 Gigabit Ethernet network. In addition, the nodes are connected with either high-speed Omni-Path or InfiniBand HDR 100 for message-passing applications. A 300 TB VAST filesystem is attached to the compute nodes via Ethernet. The system supports a variety of workloads, including single-node, parallel, large-memory multi-threaded, and GPU jobs.
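
For orientation, the sketch below shows a minimal SLURM batch script for a single-node job. The job name, module name, and program name are placeholders rather than NOTS-specific values; see the Getting Started document for the partitions and defaults that apply on the cluster.

  #!/bin/bash
  #SBATCH --job-name=example_job       # name shown in squeue output
  #SBATCH --nodes=1                    # single-node job
  #SBATCH --ntasks=1                   # one task (process)
  #SBATCH --cpus-per-task=4            # logical CPUs for that task
  #SBATCH --mem=8G                     # memory for the job
  #SBATCH --time=01:00:00              # walltime limit (HH:MM:SS)

  module load foss                     # placeholder toolchain module
  srun ./my_program                    # placeholder application

Submit the script with sbatch and check its status with squeue -u $USER.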

Citation

If you use NOTS to support your research activities, please acknowledge (in publications, on your project web pages, and so on) our efforts to maintain this infrastructure for your use. An example acknowledgment follows; feel free to adapt the wording to your needs, but please keep the essential information:

This work was supported in part by the NOTS cluster operated by Rice University's Center for Research Computing (CRC).

Node Configurations

All nodes contain hyperthreaded CPUs. The storage network for all nodes is 25 GbE.
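
Because the cores are hyperthreaded, SLURM sees two logical CPUs per physical core. A brief sketch of the standard SLURM flags for controlling this (general SLURM behavior, not a NOTS-specific policy):

  #SBATCH --cpus-per-task=8        # number of logical CPUs allocated per task
  #SBATCH --hint=nomultithread     # place one thread per physical core, ignoring hyperthreads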

NOTS Node Configurations
| Node Type | Node Count | Hardware | CPU | Core Count | System RAM | Available RAM | Disk | GPUs (SLURM gres=...) | High Speed Network | SLURM Features |
|-----------|------------|----------|-----|------------|------------|---------------|------|-----------------------|--------------------|----------------|
| Compute | 52 | HPE XL170r | Intel Xeon Gold 6230 @ 2.10 GHz | 40 | 192 GB | 182 GB | 960 GB | - | Omni-Path | cascadelake, opath |
| Compute | 8 | HPE XL170r | Intel Xeon Gold 6230 @ 2.10 GHz | 40 | 768 GB | 730 GB | 960 GB | - | Omni-Path | cascadelake, opath |
| Compute | 4 | HPE XL170r | Intel Xeon Gold 6230 @ 2.10 GHz | 40 | 1.5 TB | 1,461 GB | 960 GB | - | Omni-Path | cascadelake, opath |
| Compute | 3 | Dell PowerEdge C6520 | Intel Xeon Gold 6336Y @ 2.40 GHz | 48 | 256 GB | 250 GB | 960 GB | - | Omni-Path | icelake |
| Compute | 60 | HPE ProLiant DL360 Gen11 | Intel Xeon Platinum 8468 | 96 | 256 GB | 251 GB | 960 GB | - | InfiniBand HDR 100 | sapphirerapids, ib |
| GPU | 1 | Dell PowerEdge R750xa | Intel Xeon Gold 6342 @ 2.80 GHz | 24 | 256 GB | 250 GB | 480 GB | 4 x Tesla A40 (gpu:ampere) | InfiniBand HDR 100 | icelake |
| GPU | 3 | HPE XL675d | AMD EPYC 7343 @ 3.2 GHz | 32 | 512 GB | 503 GB | 240 GB | 8 x Tesla A40 (gpu:ampere) | InfiniBand HDR 100 | milan, ib |
| GPU | 16 | HPE XL190 | Intel Xeon Gold 6230 @ 2.10 GHz | 40 | 192 GB | 182 GB | 960 GB | 2 x Tesla V100, double precision (gpu:volta) | Omni-Path | cascadelake, opath |
| GPU | 12 | HPE ProLiant DL380a Gen11 | Intel Xeon Platinum 8468 | 96 | 512 GB | 503 GB | 960 GB | 4 x Tesla L40S (gpu:lovelace) | InfiniBand HDR 100 | sapphirerapids, ib |
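
The SLURM Features and gres strings in the table can be used to steer a job to a particular node type. A brief sketch (GPU counts are illustrative; combine these lines with the usual resource and walltime flags):

  # Two V100 GPUs on a Cascade Lake / Omni-Path node
  #SBATCH --constraint="cascadelake&opath"
  #SBATCH --gres=gpu:volta:2

  # One L40S GPU on a Sapphire Rapids / InfiniBand node
  #SBATCH --constraint="sapphirerapids&ib"
  #SBATCH --gres=gpu:lovelace:1

  # CPU-only Ice Lake node
  #SBATCH --constraint=icelake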

Software Environment on the Nodes

  • Operating system: Red Hat Enterprise Linux 9
  • SLURM scheduler: version 24
  • CUDA libraries: Run module spider CUDA on the cluster to see the available versions (see the example below)
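
For example, a typical Lmod workflow for finding and loading a CUDA module looks like the following; the version number is illustrative, and module spider will report any prerequisite modules that must be loaded first:

  module spider CUDA            # list all CUDA versions installed on the cluster
  module spider CUDA/12.4       # show how to load a specific version (illustrative version number)
  module load CUDA/12.4         # load it once any prerequisites are loaded
  nvcc --version                # confirm the CUDA compiler is on the PATH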


