Pleiades Supercomputer

Pleiades supercomputer at the NASA Advanced Supercomputing facility

Pleiades is a distributed-memory SGI/HPE ICE cluster whose nodes are connected with InfiniBand in a dual-plane hypercube topology. Originally deployed in 2008, Pleiades has been expanded many times and is one of the world's most powerful supercomputers.

The system contains four types of Intel Xeon processors: E5-2680v4 (Broadwell), E5-2680v3 (Haswell), E5-2680v2 (Ivy Bridge), and E5-2670 (Sandy Bridge). Pleiades is named after the Pleiades open star cluster.
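
Because Pleiades is a distributed-memory machine, applications typically run as one or more processes per node and exchange data across the InfiniBand fabric with MPI. The sketch below is not taken from NAS documentation; it is a minimal illustration, assuming an MPI library and a compiler wrapper such as mpicc are available:

    /* Minimal MPI sketch: each rank reports which node it runs on.
     * Assumes an MPI implementation (compile with, e.g., "mpicc hello_nodes.c"). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(node_name, &name_len);

        /* Each rank owns its own memory; data moves between nodes only via
         * explicit messages over the InfiniBand interconnect. */
        printf("Rank %d of %d running on node %s\n", rank, size, node_name);

        MPI_Finalize();
        return 0;
    }

Launched across several nodes (for example with mpiexec), each rank prints a different hostname, reflecting the distributed-memory layout described above.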

System Architecture

Pleiades Node Detail

Broadwell Nodes
  • Number of Nodes: 2,016
  • Processors per Node: 2 fourteen-core processors
  • Processor Type: Intel Xeon E5-2680v4
  • Processor Speed: 2.4 GHz
  • Cache: 35 MB for 14 cores
  • Memory Type: DDR4 FB-DIMMs
  • Memory Size: 4.6 GB per core, 128 GB per node
  • Host Channel Adapter: InfiniBand FDR host channel adapter and switches

Haswell Nodes
  • Number of Nodes: 2,052
  • Processors per Node: 2 twelve-core processors
  • Processor Type: Intel Xeon E5-2680v3
  • Processor Speed: 2.5 GHz
  • Cache: 30 MB for 12 cores
  • Memory Type: DDR4 FB-DIMMs
  • Memory Size: 5.3 GB per core, 128 GB per node
  • Host Channel Adapter: InfiniBand FDR host channel adapter and switches

Ivy Bridge Nodes
  • Number of Nodes: 5,256
  • Processors per Node: 2 ten-core processors
  • Processor Type: Intel Xeon E5-2680v2
  • Processor Speed: 2.8 GHz
  • Cache: 25 MB for 10 cores
  • Memory Type: DDR3 FB-DIMMs
  • Memory Size: 3.2 GB per core, 64 GB per node (plus 3 bigmem nodes with 128 GB per node)
  • Host Channel Adapter: InfiniBand FDR host channel adapter and switches

Sandy Bridge Nodes
  • Number of Nodes: 944
  • Processors per Node: 2 eight-core processors
  • Processor Type: Intel Xeon E5-2670
  • Processor Speed: 2.6 GHz
  • Cache: 20 MB for 8 cores
  • Memory Type: DDR3 FB-DIMMs
  • Memory Size: 2 GB per core, 32 GB per node
  • Host Channel Adapter: InfiniBand FDR host channel adapter and switches
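
The per-core and per-node figures above fit together by simple arithmetic: each node has two sockets, so cores per node is twice the per-socket core count, and memory per core is memory per node divided by cores per node. A small sketch that reproduces those numbers (values are transcribed from the table, not queried from the system):

    /* Quick arithmetic check of the node table above.
     * All values are transcribed from the table; nothing is queried from Pleiades. */
    #include <stdio.h>

    struct node_type {
        const char *name;
        int nodes;            /* number of nodes of this type */
        int cores_per_socket; /* each node has two sockets */
        int mem_gb_per_node;  /* typical memory per node in GB */
    };

    int main(void)
    {
        const struct node_type types[] = {
            { "Broadwell",    2016, 14, 128 },
            { "Haswell",      2052, 12, 128 },
            { "Ivy Bridge",   5256, 10,  64 },
            { "Sandy Bridge",  944,  8,  32 },
        };

        for (size_t i = 0; i < sizeof types / sizeof types[0]; i++) {
            int cores_per_node = 2 * types[i].cores_per_socket;
            double gb_per_core = (double)types[i].mem_gb_per_node / cores_per_node;
            printf("%-12s %d cores/node, %.1f GB/core, %d total cores\n",
                   types[i].name, cores_per_node, gb_per_core,
                   types[i].nodes * cores_per_node);
        }
        return 0;
    }

For example, a Broadwell node has 2 × 14 = 28 cores, and 128 GB / 28 cores ≈ 4.6 GB per core, matching the table.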
GPU-Enhanced Nodes

Sandy Bridge + GPU Nodes
  • Number of Nodes: 64
  • Processors per Node: 2 eight-core host processors and 1 GPU coprocessor (2,880 CUDA cores)
  • Processor Types: Intel Xeon E5-2670 (host); NVIDIA Tesla K40 (GPU)
  • Processor Speed: 2.6 GHz (host); 745 MHz (GPU)
  • Cache: 20 MB for 8 cores (host)
  • Memory Type: DDR3 FB-DIMMs (host); GDDR5 (GPU)
  • Memory Size: 64 GB per node (host); 12 GB per GPU card
  • Host Channel Adapter: InfiniBand FDR host channel adapter and switches (host)

Skylake + GPU Nodes
  • Number of Nodes: 19
  • Processors per Node: 2 eighteen-core host processors; 4 GPU coprocessors on 17 nodes and 8 GPU coprocessors on 2 nodes
  • Processor Types: Intel Xeon Gold 6154 (host); NVIDIA Tesla V100-SXM2-32GB (GPU)
  • Processor Speed: 3.0 GHz (host); 1,290 MHz (GPU)
  • Cache: 24.75 MB shared non-inclusive by 18 cores (host)
  • Memory Type: DDR4 FB-DIMMs (host); HBM2 (GPU)
  • Memory Size: 384 GB per node (host); 32 GB per GPU card
  • Host Channel Adapter: InfiniBand EDR host channel adapter and switches (host)

Cascade Lake + GPU Nodes
  • Number of Nodes: 38
  • Processors per Node: 2 twenty-four-core host processors and 4 GPU coprocessors (5,120 CUDA cores each)
  • Processor Types: Intel Xeon Platinum 8268 (host); NVIDIA Tesla V100-SXM2-32GB (GPU)
  • Processor Speed: 2.9 GHz (host); 1,290 MHz (GPU)
  • Cache: 35.75 MB shared non-inclusive by 24 cores (host)
  • Memory Type: DDR4 FB-DIMMs (host); HBM2 (GPU)
  • Memory Size: 384 GB per node (host); 32 GB per GPU card
  • Host Channel Adapter: InfiniBand EDR host channel adapter and switches (host)
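
On any of these GPU-enhanced nodes, the figures above can be checked directly with the CUDA runtime API. The following is a minimal sketch, assuming the CUDA toolkit is available on the node; a four-GPU V100 node, for instance, should report four devices with roughly 32 GB of memory each and a clock near 1,290 MHz:

    /* Sketch: query the GPUs visible on a GPU-enhanced node via the CUDA
     * runtime API, for comparison with the table above. Assumes the CUDA
     * toolkit is available (compile and link against libcudart). */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("GPUs on this node: %d\n", count);

        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("  GPU %d: %s, %.1f GB memory, %d SMs, %.0f MHz\n",
                   i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.multiProcessorCount,
                   prop.clockRate / 1000.0);   /* clockRate is reported in kHz */
        }
        return 0;
    }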
Subsystems

Front-End Nodes (8 nodes)
  • Processors per Node: 2 eight-core processors
  • Processor Type: Intel Xeon E5-2670 (Sandy Bridge)
  • Processor Speed: 2.6 GHz
  • Memory: 64 GB per node
  • Connection: 10 Gigabit and 1 Gigabit Ethernet

PBS server pbspl1
  • Processors per Node: 2 sixteen-core processors
  • Processor Type: AMD EPYC 7320P
  • Processor Speed: 4 GHz
  • Memory: 72 GB per node
  • Connection: N/A

PBS server pbspl3
  • Processors per Node: 2 sixteen-core processors
  • Processor Type: AMD EPYC 7320P
  • Processor Speed: 4 GHz
  • Memory: 16 GB per node
  • Connection: N/A

Interconnects

Operating Environment
