High Performance Computing at NJIT


Table last modified: 21-Nov-2018 15:14
HPC Machine Specifications
Kong.njit.edu — expansions [1]

| Specification | Kong-5 | Kong-6 | Kong-7 | Kong-8 | Kong-9 | Kong-10 | Kong-11 | Kong-12 | Kong-13 | Kong-14 | Cluster total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Tartan designation [2] | Tartan-9 | Tartan-11 | Tartan-12 | Tartan-15 | Tartan-16 | Tartan-17 | Tartan-18 | Tartan-19 | Tartan-20 | Tartan-21 | |
| Manufacturer | IBM | IBM | Supermicro | Sun | Dell | Microway | Microway | Microway | Microway | Microway | |
| Model | iDataPlex dx360 M4 | iDataPlex dx360 M4 | SS2016 | X4600 | PowerEdge R630 | NumberSmasher | NumberSmasher-4X | NumberSmasher DualXeon Twin Server | NumberSmasher Dual | | |
| Nodes | 2 | 2 | 270 | 1 | 3 | 5 | 12 | 2 | 4 | 1 | 302 |
| • PROCESSORS • | | | | | | | | | | | |
| CPUs per node | 2 | 2 | 2 | 8 | 2 | 2 | 2 | 2 | 2 | 2 | |
| Cores per CPU | 6 | 10 | 4 | 4 | 10 | 10 | 10 | 10 | 10 | 10 | |
| Cores per node | 12 | 20 | 8 | 32 | 20 | 20 | 20 | 20 | 20 | 20 | |
| Total CPU cores | 24 | 40 | 2160 | 32 | 60 | 100 | 240 | 40 | 80 | 20 | 2796 |
| Processor model [4] | Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | Intel Xeon L5520 | AMD Opteron 8384 | Intel Xeon E5-2660 v3 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | |
| Processor µarch | Sandy Bridge | Ivy Bridge | Nehalem | K10 Shanghai | Haswell | Broadwell | Broadwell | Broadwell | Broadwell | Broadwell | |
| Processor launch | 2012 Q1 | 2013 Q3 | 2009 Q1 | 2008 Q4 | 2014 Q3 | 2016 Q1 | 2016 Q1 | 2016 Q1 | 2016 Q1 | 2016 Q1 | |
| Processor speed, GHz | 2.3 | 2.2 | 2.27 | 2.69 | 2.6 | 2.2 | 2.2 | 2.2 | 2.2 | 2.2 | |
| • MEMORY • | | | | | | | | | | | |
| RAM per node, GB | 128 | 128 | 46 | 128 | 128 | 256 | 256 | 256 | 256 | 256 | |
| RAM per CPU, GB | 64 | 64 | 23 | 16 | 64 | 128 | 128 | 128 | 128 | 128 | |
| RAM per core, GB | 10.67 | 6.4 | 5.75 | 4 | 6.4 | 12.8 | 12.8 | 12.8 | 12.8 | 12.8 | |
| Total RAM, GB | 256 | 256 | 12420 | 128 | 384 | 1280 | 3072 | 512 | 1024 | 256 | 19588 |
| • CO-PROCESSORS • | | | | | | | | | | | |
| GPU model | | Nvidia K20X | | | | Nvidia Tesla P100 16GB "Pascal" | | Nvidia Tesla P100 16GB "Pascal" | | Nvidia Tesla P100 16GB "Pascal" | |
| GPUs | | 4 | | | | 10 | | 4 | | 4 | 22 |
| Cores per GPU | | 2688 | | | | 3584 | | 3584 | | 3584 | |
| Total GPU cores | | 10752 | | | | 35840 | | 14336 | | 14336 | 75264 |
| RAM per GPU, GB | | 6 | | | | 16 | | 16 | | 16 | |
| Total GPU RAM, GB | | 24 | | | | 160 | | 64 | | 64 | 312 |
| • STORAGE • | | | | | | | | | | | |
| Local disk per node, GB [6] | 500 | 500 | 1000 | 146 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | |
| Total local disk, GB | 1000 | 1000 | 270000 | 146 | 3072 | 5120 | 12288 | 2048 | 4096 | 1024 | 299794 |
| Node interconnect | 10GbE | 10GbE | GigE | GigE | 10GbE | 10GbE | InfiniBand FDR | 10GbE | 10GbE | 10GbE | |
| • RATINGS • | | | | | | | | | | | |
| Max GFLOPS [9] | 207 | 330 | 18387 | 322.8 | 585 | 825 | 1980 | 330 | 660 | 165 | 23791.8 |
| CPU Mark, per CPU [11] | 19106 | 13659 | 4357 | 7051 | | | | | | | |
| CPU Mark, per node | 38212 | 27318 | 8714 | 56408 | 22733 | 18807 | 18807 | 18807 | 18807 | 18807 | |
| CPU Mark, per node totaled | 76424 | 54636 | 2352780 | 56408 | 68199 | 94035 | 225684 | 37614 | 75228 | 18807 | 3059815 |
| Max GPU GFLOPS [10] | | 3950 | | | | | | | | | |
| Total GPU GFLOPS | | 15800 | | | | | | | | | |
| • POWER • | | | | | | | | | | | |
| Watts per node | 1040 | 1040 | 300 | 1975 | 1500 | 1600 | 1620 | 1600 | 1000 | 1600 | |
| Total Watts | 2080 | 2080 | 81000 | 1975 | 4500 | 8000 | 19440 | 3200 | 4000 | 1600 | 127875 |
| • ETC • | | | | | | | | | | | |
| Access model | Reserved [12] | Public | Public | Public | Reserved [12] | Partly [13] | Reserved [14] | Reserved [13] | Reserved [16] | Reserved [17] | |
| In-service date | Aug 2013 | Oct 2013 | Mar 2015 | Aug 2015 | Nov 2016 | Aug 2017 | Sep 2017 | Sep 2017 | May 2018 | Aug 2018 | |
| Node numbers | 147-150 | 151, 152 | 100-111, 200-401, 500-599 [18] | 153 | 402-404 | 412-416 | 417-428 | 429-430 | 431-434 | 435 | |

Phi.njit.edu, Gorgon.njit.edu, and Stheno.njit.edu

| Specification | Phi | Gorgon | Stheno-1 | Stheno-2 | Stheno-3 | Stheno-4 | Stheno-5 | Stheno cluster total | NJIT HPC grand total |
|---|---|---|---|---|---|---|---|---|---|
| Tartan designation [2] | Tartan-4 | Tartan-3 | Tartan-5 | Tartan-6 | Tartan-10 | Tartan-13 | Tartan-14 | | |
| Manufacturer | VMware [5] | Microway | Microway | IBM | IBM | IBM | | | |
| Model | VMware [5] | NumberSmasher-4X | | iDataPlex dx360 M4 | iDataPlex dx360 M4 | iDataPlex dx360 M4 | | | |
| Nodes | 1 | 1 | 8 | 8 | 13 | 2 | 1 | 32 | 336 |
| • PROCESSORS • | | | | | | | | | |
| CPUs per node | 2 | 4 | 2 | 2 | 2 | 2 | 2 | | |
| Cores per CPU | 8 | 8 | 6 | 6 | 6 | 6 | 10 | | |
| Cores per node | 16 | 32 | 12 | 12 | 12 | 12 | 20 | | |
| Total CPU cores | 16 | 32 | 96 | 96 | 156 | 24 | 20 | 392 | 3236 |
| Processor model [4] | Intel Xeon E5-2680 | AMD Opteron 6134 | Intel Xeon E5649 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | | |
| Processor µarch | Sandy Bridge | K10 Maranello | Westmere | Sandy Bridge | Sandy Bridge | Sandy Bridge | Ivy Bridge | | |
| Processor launch | 2012 Q1 | 2010 Q1 | 2011 Q1 | 2012 Q1 | 2012 Q1 | 2012 Q1 | 2013 Q3 | | |
| Processor speed, GHz | 2.7 | 2.3 | 2.53 | 2.3 | 2.3 | 2.3 | 2.2 | | |
| • MEMORY • | | | | | | | | | |
| RAM per node, GB | 64 | 64 | 96 | 128 | 128 | 128 | 128 | | |
| RAM per CPU, GB | 32 | 16 | 48 | 64 | 64 | 64 | 64 | | |
| RAM per core, GB | 4 | 2 | 8 | 10.67 | 10.67 | 10.67 | 6.4 | | |
| Total RAM, GB | 64 | 64 | 768 | 1024 | 1664 | 256 | 128 | 3840 | 23556 |
| • CO-PROCESSORS • | | | | | | | | | |
| GPU model | | | | | | Nvidia K20 | Nvidia K20m | | |
| GPUs | | | | | | 4 | 2 | 6 | 28 |
| Cores per GPU | | | | | | 2496 | 2668 | | |
| Total GPU cores | | | | | | 9984 | 5336 | 15320 | 90584 |
| RAM per GPU, GB | | | | | | 5 | 6 | | |
| Total GPU RAM, GB | | | | | | 20 | 12 | 32 | 344 |
| • STORAGE • | | | | | | | | | |
| Local disk per node, GB [6] | | | 117 | 117 | 500 | 500 | 500 | | |
| Total local disk, GB | | | 936 | 936 | 6500 | 1000 | 500 | 9872 | 309666 |
| Node interconnect | | | InfiniBand QDR | InfiniBand FDR | InfiniBand FDR | InfiniBand FDR | | | |
| • RATINGS • | | | | | | | | | |
| Max GFLOPS [9] | 162 | 276 | 910.8 | 828 | 1345.5 | 207 | 165 | 3456.3 | 27686.1 |
| CPU Mark, per CPU [11] | 6814 | 6814 | 8118 | 19106 | 19106 | 19106 | 13659 | | |
| CPU Mark, per node | 13628 | 27256 | 16236 | 38212 | 38212 | 38212 | 27318 | | |
| CPU Mark, per node totaled | 13628 | 27256 | 129888 | 305696 | 496756 | 76424 | 27318 | 1036082 | 4136781 |
| Max GPU GFLOPS [10] | | | | | | 3520 | 3950 | | |
| Total GPU GFLOPS | | | | | | 14080 | 7900 | | |
| • POWER • | | | | | | | | | |
| Watts per node | | | 1000 | 1000 | 1040 | 1040 | 1040 | | |
| Total Watts | | | 8000 | 8000 | 13520 | 2080 | 8320 | 39920 | 167795 |
| • ETC • | | | | | | | | | |
| Access model | Public | Reserved [1] | Reserved [1] | Reserved [1] | Reserved [1] | Reserved [1] | Reserved [1] | | |
| In-service date | Oct 2010; Sep 2017 [15] | Aug 2010 | Nov 2011 | Sep 2012 | Aug 2013 | May 2015 | Jun 2015 | | |
| Node numbers | "Phi" | "Gorgon" | 0-7 | 8-15 | 16-27 | 30-31 | 32 | | |

• SOFTWARE • and shared storage (cluster-wide):
- Scheduler: SunGridEngine 6.2
- Cluster management: Warewulf
- Operating system: SL 5.5
- Shared scratch [7]: Kong 3374 GB; Stheno /nscratch, 151 GB; Phi /scratch, 938 GB; Gorgon /gscratch, 361 GB
- NFS /home: Kong 8261 GB; Stheno 2728 GB
- Head node AFS client: Yes (Kong, Phi, Gorgon)
- Compute nodes AFS client: Yes (Kong, Phi, Gorgon)

Notes (last modified: 21-Nov-2018 15:14)
[1]  Access to Stheno and Gorgon is restricted to Department of Mathematics use.
[2]  See https://ist.njit.edu/tartan-high-performance-computing-initiative
[3]  A small number of Kong nodes are reserved by specific faculty.
[4]  All active systems are 64-bit.
[5]  Phi is a virtual machine running on VMware, provisioned as shown here; the actual underlying hardware is therefore irrelevant.
[6]  A small portion of each compute node's local disk is used for AFS cache and swap; the remainder is available to users as /scratch.
[7]  Shared scratch is writable by all nodes via NFS (/nscratch), or is locally mounted on one-node systems (Phi, Gorgon).
[8]  Core counts do not include hyperthreading.
[9]  Most GFLOPS values are estimated as cores × clock × (FLOPs/cycle), with 3.75 FLOPs/cycle conservatively assumed instead of the typical 4.0.
[10]  Peak single-precision floating-point performance, per the manufacturer's specifications.
[11]  PassMark CPU Mark from http://cpubenchmark.net/ or https://www.cpubenchmark.net/multi_cpu.html
[12]  Access to Kong-5 and Kong-9 is reserved for Dr. C.Dias and designees.
[13]  Access to Kong-10 is reserved for Data Sciences faculty and students; contact ARCS@NJIT.EDU for additional information.
[14]  Access to Kong-11 is reserved for Dr. G.Gor and designees.
[15]  Phi was upgraded; it originally had 1 CPU of 4 cores and 32 GB RAM.
[16]  Access to Kong-13 is reserved for Dr. E. Nowadnick and designees.
[17]  Access to Kong-14 is reserved for Dr. D. Datta and designees.
[18]  At the in-service date there were 314 nodes averaging 64 GB RAM, but failed nodes are generally not repaired. The counts shown in "Nodes" and "RAM per node" above are as of the "Table last modified" date.
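
The GFLOPS estimate described in note [9] is easy to reproduce from the "Total CPU cores" and "Processor speed" rows; a minimal sketch (the function name is illustrative, not part of any NJIT tooling):

```python
# Peak-GFLOPS estimate from note [9]: cores * clock (GHz) * FLOPs/cycle,
# with 3.75 FLOPs/cycle conservatively assumed instead of the typical 4.0.

def max_gflops(total_cores, clock_ghz, flops_per_cycle=3.75):
    # Round to one decimal place, matching the precision used in the table.
    return round(total_cores * clock_ghz * flops_per_cycle, 1)

if __name__ == "__main__":
    print(max_gflops(24, 2.3))    # Kong-5: 24 cores at 2.3 GHz -> 207.0
    print(max_gflops(96, 2.53))   # Stheno-1: 96 cores at 2.53 GHz -> 910.8
    print(max_gflops(32, 2.69))   # Kong-8: 32 cores at 2.69 GHz -> 322.8
```

Applying the same formula to each column reproduces the "Max GFLOPS" row, and the cluster and grand totals are simply the sums of those per-machine values.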