
NSC

Below is a short description of the NSC systems that are available through SNIC allocations.

TRIOLITH

The part of Triolith available to SNAC projects shrank from 1536 nodes to 960 nodes on April 1st, 2017. This was a direct consequence of the delay in funding from SNIC for a replacement system. By reducing the number of nodes, NSC can keep the remaining 960 nodes running until a replacement system is in place. NSC estimates that the replacement system will be installed in Q3 2018. At that point project allocations will be scaled and transferred to the new resource. Installation of the new system is likely to be done in stages, with an overlap in operation between parts of the new system and parts of Triolith.

Triolith (triolith.nsc.liu.se) is a capability cluster that originally comprised 24320 cores with a peak performance of 428 Tflops/s. However, Triolith was shrunk by 576 nodes on April 3rd, 2017 as a result of a delay in funding a replacement system, and now has a peak performance of 260 Tflops/s and 16,368 compute cores. It is equipped with a fast interconnect for high performance in parallel applications. The operating system is CentOS 6.x x86_64. Each of the 1520 (now 944) HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz Sandy Bridge) processors with 8 cores each (i.e. 16 cores per compute server). 56 of the compute servers have 128 GiB memory each and the remaining 888 have 32 GiB each. The fast interconnect is InfiniBand from Mellanox (FDR IB, 56 Gb/s) in a 2:1 blocking configuration.

Hardware summary

Triolith
    Processor:                Intel® E5-2660 (Sandy Bridge), 2.2 GHz; two 8-core processors per node (16 cores per node)
    Interconnect:             Mellanox InfiniBand (FDR IB, 56 Gb/s) in a 2:1 blocking configuration
    Node memory:              32 GiB on 1464 of the compute servers, 128 GiB on 56 of the compute servers
    Node local scratch disk:  500 GiB or 2 x 500 GiB per node

Software summary

Triolith
    Operating system:  CentOS 6.x 64-bit Linux
    Resource manager:  SLURM 2
    Scheduler:         SLURM 2
    Compilers:         Intel compiler collection (icc and ifort)
    Math library:      Intel Math Kernel Library, Cluster Edition
    MPI:               Intel MPI (message-passing interface library) and Open MPI (a high performance message passing library)
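
As an illustration of the compiler and MPI stack listed above, a minimal MPI "hello world" in C is sketched below. The exact compiler wrappers and module names to use on Triolith are assumptions here (mpiicc and mpicc are common defaults for Intel MPI and Open MPI, but check the NSC user guides); the program itself is standard MPI.

    /* Minimal MPI example for the software stack listed above.
     * Compile with the site's MPI compiler wrapper, e.g. (assumption)
     * "mpiicc hello.c -o hello" for Intel MPI with the Intel compilers,
     * and launch it through the SLURM batch system. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI ranks */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }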

 

Centre storage at NSC

The total disk space available for storing files at NSC is approximately 2800 TiB. By default, each user has a ~20 GiB home directory with backup.

By default, each project that has been allocated computing time on Triolith has a directory under /proj (e.g. /proj/snic2014-1-123, /proj/somename) where the project members can store the data associated with that project. The name of the directory is decided by the project's Principal Investigator ("PI"). The default disk space per project for storage during a SNAC project's allocation period, referred to as Permanent in the SNAC application form, is 500 GiB. Disk space above the default limit can be granted following a well-motivated request by e-mail to support@nsc.liu.se.

Node local scratch disk in the hardware summary table above refers to disk space available to running batch jobs, referred to as Temporary in the application form; it is erased between batch jobs. For more information regarding Centre Storage, visit the extensive description of Centre Storage at NSC.
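
To make the distinction between Temporary and Permanent storage concrete, the short sketch below builds the two kinds of paths a batch job would typically use. The SNIC_TMP environment variable and the project directory name are illustrative assumptions, not documented values; consult the NSC storage pages for the exact conventions.

    /* Hypothetical sketch: where a batch job puts its data on Triolith.
     * Intermediate files go to node-local scratch ("Temporary", erased
     * between batch jobs); results that must survive the job go to the
     * project directory under /proj ("Permanent"). */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Assumed scratch variable; fall back to /tmp for illustration only. */
        const char *scratch = getenv("SNIC_TMP");
        if (scratch == NULL)
            scratch = "/tmp";

        /* Example project directory name taken from the text above. */
        const char *proj = "/proj/snic2014-1-123";

        char workdir[512], resultdir[512];
        snprintf(workdir, sizeof workdir, "%s/myjob", scratch);
        snprintf(resultdir, sizeof resultdir, "%s/myjob/results", proj);

        printf("Temporary (erased between jobs): %s\n", workdir);
        printf("Permanent (counts against the project quota): %s\n", resultdir);
        return 0;
    }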

 

For more information regarding the systems, storage, and available software, visit the NSC web pages and especially the user guides.
