Mass Storage
In the Research Computing Environment, an enterprise Isilon storage array provides 2 petabytes of capacity for research data. The Research storage system consists of an Isilon cluster accessible from the Turing and Wahab clusters, and from campus workstations over SMB. This platform currently hosts both user home directories and research mass storage.
Lustre
Lustre is an open-source parallel file system designed for scalability, high performance, and high availability. Each cluster has its own Lustre storage system. Lustre storage should be used for the input and output files of HPC jobs that require high I/O performance. By default, each user has a 1 TB allocation on the Lustre storage system.
Lustre storage is mounted in the following locations:
- Turing: /scratch-lustre
- Wahab: /scratch
This storage is only accessible from the clusters.
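As a sketch of the intended workflow, a batch job can stage its input and output on the Lustre scratch space. This is an illustrative fragment, not a site template: the job name and working-directory layout below are assumptions, and the mount point shown is Wahab's `/scratch` (on Turing, substitute `/scratch-lustre`).

```bash
#!/bin/bash
#SBATCH --job-name=io-demo        # illustrative job name (assumption)
#SBATCH --output=io-demo.%j.out

# Stage job I/O on Lustre for high I/O performance.
# Directory layout under /scratch is an example, not a site convention.
WORKDIR=/scratch/$USER/io-demo.$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# ... run the solver with input/output files in $WORKDIR ...

# Check current usage against the default 1 TB allocation:
lfs quota -h -u "$USER" /scratch
```

The `lfs quota` command reports block and inode usage for your account on the Lustre mount, which is the usual way to see how close you are to the allocation.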
Backups
Research Computing uses an enterprise backup system called Cohesity. Data is backed up to a storage cluster on campus and then replicated to a cloud storage target. Both the user home directories and the research mass storage locations are backed up by this system. The scratch storage space is not backed up, so important data there should be archived manually.
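Because scratch space is not backed up, results worth keeping should be copied into a backed-up location such as your home directory or mass storage. A minimal sketch using `tar`; the directory names below are stand-ins for illustration, not real cluster paths (in practice the source would be something under the scratch mount and the destination a backed-up directory):

```shell
SRC="demo_results"    # stand-in for a results directory on scratch (assumption)
DEST="archives"       # stand-in for a backed-up destination directory (assumption)
mkdir -p "$SRC" "$DEST"
echo "sample output" > "$SRC/run.log"

# Create a dated, compressed archive of the results directory.
STAMP=$(date +%Y%m%d)
tar -czf "$DEST/${SRC}-${STAMP}.tar.gz" "$SRC"

# List the archive contents to confirm the files were captured.
tar -tzf "$DEST/${SRC}-${STAMP}.tar.gz"
```

Dating the archive name makes it easy to keep several snapshots side by side before scratch data is cleaned up.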