The College of Engineering deployed a new High Performance Computing Cluster (HPC2) in Fall 2020. HPC2 technical details can be found at the HPC Core Facility website. If your research group is not part of the HPC2 Cluster and you would like to join, please send an email so that we can discuss access.

The College of Engineering High Performance Computing Cluster (HPC1) contains 60 compute nodes and central storage, all connected by Infiniband networking. Each node contains 64 GB of RAM shared by two CPU sockets, each with an 8-core CPU running at 2.4 GHz. Central storage is managed by redundant storage servers, with 200 TB of usable storage evenly allocated to researchers. The storage is intended for temporary computation and is not backed up or duplicated in any way, except that it is configured as RAID6 and can therefore withstand up to two simultaneous hard drive failures. Jobs are managed by the SLURM queue manager. Access to the cluster can be granted only to participating professors and their research groups.
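For reference, a job on a SLURM-managed cluster is typically submitted with a batch script. The sketch below assumes a single HPC1-style node as described above (two 8-core sockets, 64 GB of RAM); the partition name, output file, and executable are placeholders, not actual cluster settings.

```bash
#!/bin/bash
# Minimal example SLURM batch script (illustrative only).
#SBATCH --job-name=example_job
#SBATCH --partition=compute        # placeholder; use the partition assigned to your group
#SBATCH --nodes=1                  # one compute node
#SBATCH --ntasks=16                # all 16 cores on the node (2 sockets x 8 cores)
#SBATCH --mem=60G                  # stay below the 64 GB per-node total
#SBATCH --time=04:00:00            # wall-clock limit (hh:mm:ss)
#SBATCH --output=example_job_%j.out

srun ./my_simulation               # placeholder executable
```

A script like this would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`; consult the HPC Core Facility website for the actual partition names and limits on the cluster.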