ALWAYS submit your production jobs to the cluster using LSF. DO NOT run production jobs on the head nodes. To learn how to use LSF, visit our running jobs web page.
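As a minimal sketch of an LSF submission (the job name, queue name, core count, wall-clock limit, and program name below are all assumptions, not site defaults; check the running jobs web page for the correct values), a production job is typically wrapped in a batch script with #BSUB directives:

```shell
#!/bin/sh
#BSUB -J myjob            # job name (hypothetical)
#BSUB -q normal           # queue name -- an assumption; use your site's queue
#BSUB -n 16               # number of cores requested
#BSUB -W 12:00            # wall-clock limit (HH:MM)
#BSUB -o myjob.%J.out     # stdout file (%J expands to the LSF job ID)
#BSUB -e myjob.%J.err     # stderr file

cd /scratch/$USER/myjob   # run from /scratch, not HOME, per the storage policy
./my_program input.dat    # hypothetical executable and input file
```

Submit the script with "bsub < myjob.lsf" and monitor it with "bjobs"; the job then runs on compute nodes rather than the head nodes.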
You can compile and test your code on the head nodes. However, make sure your test case finishes within one hour and does not consume excessive system resources. Any job exceeding these limits without prior approval from the Director of HPC will be killed, and the user's account will be disabled.
All active users are allocated a logical storage area, the HOME directory. Hard storage quotas are not currently enforced at the user level, but HOME usage is generally limited to 250G. Home directories are intended primarily for basic account information, source code, and binaries. A second area, “/scratch”, should be used for compiles and for run-time input and output files. Both HOME and “/scratch” are available on all nodes of the cluster. Data in “/scratch” are NOT backed up, and untouched data older than 3 weeks are subject to purging. Data in the HOME directory are also NOT backed up; however, they are not subject to purging unless the account is deleted.
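A quick self-check sketch, assuming a per-user scratch path of /scratch/$USER (the path layout is an assumption; adjust for your site): report current HOME usage against the 250G limit and list scratch files not accessed in the last 3 weeks, i.e. purge candidates.

```shell
#!/bin/sh
# Assumed per-user scratch location -- adjust to your site's layout.
SCRATCH_DIR="${SCRATCH_DIR:-/scratch/$USER}"

# Current HOME usage; keep this under the documented 250G limit.
du -sh "$HOME"

# Files untouched (not accessed) for more than 21 days are subject to purging.
if [ -d "$SCRATCH_DIR" ]; then
    find "$SCRATCH_DIR" -type f -atime +21 -print
fi
```

Running a check like this before the purge window passes lets you copy anything worth keeping back to HOME or to off-cluster storage.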
Accounts exceeding the 250G HOME directory limit will be suspended, and the user will not be able to submit jobs. Once usage is under 250G, the account will be re-enabled within one business day.
After 90 days, suspended accounts will be disabled and the user will no longer be able to log in. At that point the user must submit a request to have the account temporarily enabled so they can clean up their data. Once usage is under 250G, the account will be re-enabled within one business day.
Disabled accounts will be deleted after 30 days. Their data will be archived to backup, and the backup will be kept for 30 days.