
2016-07-19 Pegasus bigmem queue max runtime extension

The bigmem queue max runtime on Pegasus has been extended to 5 days.
See the Pegasus Queues page for details.

2016-07-14 Pegasus performance issues – reminder to use /scratch for job data

We are experiencing periodic, severe performance issues on Pegasus. I/O performance on the /scratch and /nethome filesystems appears to be the most affected.
Users should stage data for jobs exclusively on the /scratch filesystem.
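The staging workflow above can be sketched as a minimal LSF job script. The job name, queue, input file, and application are illustrative placeholders; the key pattern is copying input to /scratch, running there, and copying results back to /nethome.

```shell
#!/bin/bash
#BSUB -J stage_demo            # job name (illustrative)
#BSUB -o stage_demo.%J.out     # LSF writes stdout here
#BSUB -q general               # queue name is an assumption

# Stage input from /nethome to /scratch before computing
# (the $USER-based path is illustrative; use your project's scratch area)
WORKDIR=/scratch/"$USER"/stage_demo."$LSB_JOBID"
mkdir -p "$WORKDIR"
cp ~/inputs/data.in "$WORKDIR"/       # hypothetical input file

# Run the job entirely on /scratch
cd "$WORKDIR"
./my_solver data.in > data.out        # hypothetical application

# Copy results back to /nethome and clean up scratch space
cp data.out ~/results/
rm -rf "$WORKDIR"
```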

2016-07-05 Expired CCS account removal

All /nethome data for disabled or expired CCS accounts older than 30 days will be removed from the filesystem on Tuesday, July 05.
Per CCS policy, disabled account data will be deleted after 30 days.

2016-07-01 Pegasus maintenance complete

Pegasus maintenance for June 29 through July 01, 2016 has been completed. CCS systems, including Pegasus and storage, are available for use.


The High-Performance Computing (HPC) core is focused on providing the latest supercomputing technology and tools to the University of Miami (UM) research community. While this core includes traditional operations staff such as systems and network administrators, it also encompasses other areas of expertise, including scientific programming, parallel code profiling, and code optimization. The HPC core is responsible for the operations of all infrastructure maintained at CCS.


The HPC core’s services include batch and interactive compute, visualization, and secure data-processing clusters; systems administration and consulting; storage implementation; archive storage; and systems hosting and maintenance. The HPC core has in-depth experience parallelizing codes written in Fortran, C, Java, Perl, Python, and R, using both MPI and OpenMP.

Central to HPC is the Pegasus Supercomputer, a 350-node Lenovo cluster offering over 300 applications and libraries, including standard scientific libraries and numerous optimized libraries and algorithms tuned for the computing environment. The LSF (Load Sharing Facility) resource manager, which supports over 1,500 users and over 200,000 simultaneous job submissions, maximizes the efficiency of computational resources. By utilizing the full suite of LSF tools, we are able to provide for both batch and interactive jobs while retaining workload-management features.
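Both batch and interactive use go through LSF's bsub front end. A brief sketch of the two modes (queue names other than bigmem, and all script and program names, are illustrative assumptions):

```shell
# Batch: submit a job script whose #BSUB directives request resources
bsub < myjob.job

# Interactive: request a shell on a compute node with a pseudo-terminal
# (the "interactive" queue name is an assumption for this site)
bsub -Is -q interactive bash

# The bigmem queue allows up to 5 days of runtime (-W takes hh:mm,
# so 5 days = 120 hours); ./long_job is a hypothetical program
bsub -q bigmem -W 120:00 ./long_job

# Monitor submitted jobs
bjobs
```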
