

Below follows a short description of the NSC systems that are available through SNIC allocations.


The part of Triolith available to SNAC projects shrank from 1536 nodes to 960 nodes on April 1st, 2017. This is a direct consequence of the delay in funding from SNIC for a replacement system. By reducing the number of nodes we can keep the remaining 960 nodes running until a replacement system is in place, or at least until July 30, 2018. A funding decision for a replacement system has not yet been taken (as of 2017-03-28).

Triolith is a capability cluster with a total of 24320 cores and a peak performance of 428 Tflop/s. It is equipped with a high performance interconnect for parallel applications. The operating system is CentOS 6.x x86_64. Each of the 1520 HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz Sandy Bridge) processors with 8 cores each (i.e. 16 cores per compute server). 56 of the compute servers have 128 GiB memory each and the remaining 1464 have 32 GiB each. The fast interconnect is Infiniband from Mellanox (FDR IB, 56 Gb/s) in a 2:1 blocking configuration.
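The quoted totals can be checked from the per-node figures above. One assumption not stated in the text: a Sandy Bridge core with AVX retires 8 double-precision floating-point operations per cycle, which is the usual figure used for peak-performance estimates on this processor generation. A minimal sketch:

```python
# Sanity-check Triolith's quoted totals from the per-node figures.

nodes_small_mem = 1464   # compute servers with 32 GiB memory
nodes_large_mem = 56     # compute servers with 128 GiB memory
cores_per_node = 16      # 2 sockets x 8 cores (Intel E5-2660)
clock_hz = 2.2e9         # 2.2 GHz
flops_per_cycle = 8      # assumed: DP flops/cycle for Sandy Bridge with AVX

nodes = nodes_small_mem + nodes_large_mem
cores = nodes * cores_per_node
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12

print(nodes)                  # 1520 compute servers
print(cores)                  # 24320 cores
print(round(peak_tflops, 1))  # 428.0 Tflop/s
```

The three printed values match the node count, core count, and peak performance quoted in the description above.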

Hardware summary

Processor: 2 × Intel® E5-2660 (Sandy Bridge, 2.2 GHz, 8 cores), i.e. 16 cores per compute server
Interconnect: Infiniband from Mellanox (FDR IB, 56 Gb/s) in a 2:1 blocking configuration
Node memory: 32 GiB on 1464 of the compute servers, 128 GiB on 56 of the compute servers
Node local scratch disk: 500 GiB or 2 × 500 GiB per node

Software summary

Operating System: CentOS 6.x 64-bit Linux
Resource Manager:
Compilers: Intel compiler collection (icc and ifort)
Math library: Intel Math Kernel Library, Cluster Edition
MPI libraries: Intel MPI (message-passing interface library) and OpenMPI (a high performance message passing library)


Centre storage at NSC

The total disk space available for storing files at NSC is approximately 2800 TiB. By default, each user has a ~20 GiB home directory with backup. By default, each project that has been allocated computing time on Triolith will have a directory under /proj (e.g. /proj/snic2014-1-123, /proj/somename) where the project members can store the data associated with that project. The name of the directory is decided by the project's Principal Investigator ("PI").

The default disk space per project for storage during a SNAC project's allocation period, referred to as "Permanent" in the SNAC application form, is 500 GiB. Disk space above the default limit can be granted following a well-motivated request by e-mail. "Node local scratch disk" in the hardware summary table above refers to disk space available to running batch jobs, referred to as "Temporary" in the application form; it is erased between batch jobs. For more information, visit the extensive description of Centre Storage at NSC.
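The division into Permanent (/proj) and Temporary (node-local scratch) storage maps naturally onto a batch job that stages data to the fast local disk and copies results back before the job ends. Below is a minimal sketch, under assumptions not stated above: that the resource manager is Slurm, that the scratch directory is exposed via the SNIC_TMP environment variable, and that /proj/snic2014-1-123, input.dat, and my_solver are hypothetical names to be replaced with your own.

```shell
#!/bin/bash
# Sketch only: assumes Slurm and a SNIC_TMP scratch variable.
#SBATCH -J scratch-demo
#SBATCH -N 1                # one compute server (16 cores)
#SBATCH -t 00:30:00

PROJDIR=/proj/snic2014-1-123   # hypothetical project directory (Permanent)
SCRATCH="$SNIC_TMP"            # node-local scratch (Temporary), erased after the job

cp "$PROJDIR/input.dat" "$SCRATCH/"   # stage input to fast local disk
cd "$SCRATCH"
./my_solver input.dat > output.dat    # hypothetical application
cp output.dat "$PROJDIR/"             # copy results back before the job ends
```

Because the scratch area is erased between batch jobs, anything not copied back to the project directory before the job finishes is lost.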


For more information regarding the systems, storage, and available software, visit the NSC web pages and especially the user guides.
