Home

A new storage solution for HYDRA was deployed on June 1st

The migration to the new Hydra storage is complete, and Torque & Moab have been upgraded. We were delayed by the deployment of the new node image, which represented 1 TB of data to push to the nodes and forced a node-by-node rollout to avoid network saturation and boot failures. The cluster is now ready to run jobs again.

We will put a new storage solution into production on Hydra. The operation requires taking the cluster offline for one to two days, and no jobs can run during that time.

The migration started May 30, 2016 at 10:00.

Over the coming two weeks, we will transfer part of the data from the current storage to the new one.

The work directory (your $WORKDIR space), currently used to store job outputs, *will not* be transferred: the new storage will start with an empty work directory. You will be able to copy data to this new work directory yourself until the 31st of July. After that, any files remaining in the current work directory will be deleted.
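If you want to script the copy of your own files, a minimal sketch in Python is given below. The old and new mount points used here are hypothetical placeholders, not the actual paths on Hydra; replace them with the paths communicated for your account.

    #!/usr/bin/env python3
    """Copy files from the old work directory to the new one.

    Minimal sketch only: the mount points below are hypothetical
    placeholders, not the real paths on Hydra.
    """
    import os
    import shutil

    OLD_WORKDIR = "/old_work/username"   # hypothetical old $WORKDIR location
    NEW_WORKDIR = "/work/username"       # hypothetical new $WORKDIR location

    for root, dirs, files in os.walk(OLD_WORKDIR):
        # Mirror the directory layout under the new mount point.
        rel = os.path.relpath(root, OLD_WORKDIR)
        target_dir = os.path.join(NEW_WORKDIR, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            # copy2 preserves timestamps; skip files already copied.
            if not os.path.exists(dst):
                shutil.copy2(src, dst)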

The new storage also brings some changes to the access rules:

1) The total disk space for the work directory will be increased to 50 TB (currently only 12 TB).
2) A "soft" quota of 400 GB per user will apply to the work directory.
3) A "hard" quota of 1 TB will apply, with a grace period of 15 days.
4) The "soft" and "hard" quotas will be adjusted dynamically according to overall work storage usage.

Users currently occupying more than 400 GB in the work directory will receive an email and should start cleaning up their space.
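To get a rough idea of where you stand with respect to these quotas, a small script along the following lines can sum everything under your $WORKDIR. This is an illustration only, not an official tool, and the quota values are simply the figures announced above (which may be adapted over time).

    #!/usr/bin/env python3
    """Rough check of work directory usage against the announced quotas.

    Illustration only: the quota values are taken from this announcement
    and may be adapted dynamically by the HPC team.
    """
    import os

    SOFT_QUOTA = 400 * 1024**3   # 400 GB soft quota per user
    HARD_QUOTA = 1024**4         # 1 TB hard quota (15-day grace period)

    workdir = os.environ.get("WORKDIR", ".")
    total = 0
    for root, dirs, files in os.walk(workdir):
        for name in files:
            path = os.path.join(root, name)
            try:
                total += os.path.getsize(path)
            except OSError:
                pass  # file may disappear while scanning

    print(f"Usage in {workdir}: {total / 1024**3:.1f} GB")
    if total > HARD_QUOTA:
        print("Above the 1 TB hard quota: clean up within the grace period.")
    elif total > SOFT_QUOTA:
        print("Above the 400 GB soft quota: please start cleaning up.")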

As a reminder, the work space is intended only for temporary storage of job output files. Other solutions are available on the cluster for storing large volumes permanently.

Some information about the new storage:

After its initial installation, the new storage solution suffered from low performance due to technical and integration problems. The company recently performed the hardware upgrade and delivered the expected performance figures this week: with 40 parallel processes, a write throughput of 4.6 GB/s and a read throughput of 4 GB/s were reached. For comparison, this is roughly 20x faster than our current storage.
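For those curious how such aggregate numbers are obtained, a toy parallel-write test in the spirit of the sketch below illustrates the method. The process count, file size and block size here are arbitrary choices for illustration, not the vendor's benchmark parameters.

    #!/usr/bin/env python3
    """Toy parallel write-throughput test.

    Illustration only: file size, block size and process count are
    arbitrary and do not reproduce the vendor's benchmark.
    """
    import os
    import time
    from multiprocessing import Pool

    NPROCS = 40                 # number of parallel writers
    FILE_SIZE = 1 * 1024**3     # 1 GB written per process
    BLOCK = 4 * 1024**2         # 4 MB write blocks
    TARGET_DIR = os.environ.get("WORKDIR", ".")

    def write_file(idx):
        """Write FILE_SIZE bytes of zeros in BLOCK-sized chunks."""
        path = os.path.join(TARGET_DIR, f"bench_{idx}.dat")
        buf = b"\0" * BLOCK
        with open(path, "wb") as f:
            for _ in range(FILE_SIZE // BLOCK):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)
        return FILE_SIZE

    if __name__ == "__main__":
        start = time.time()
        with Pool(NPROCS) as pool:
            written = sum(pool.map(write_file, range(NPROCS)))
        elapsed = time.time() - start
        print(f"Aggregate write throughput: "
              f"{written / elapsed / 1024**3:.2f} GB/s")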

The HPC team

http://webnotes.vub.ac.be/&noteid=508
