From 7 am to 12 am on Tuesday, July 3rd, storage systems for the Pegasus compute cluster, apollo, and visx will undergo hardware maintenance.
During this maintenance period, Pegasus batch execution and login nodes will be unavailable, as will the gateway (gw.ccs.miami.edu), apollo, and visx systems.
Due to financial decisions made at the Miller School of Medicine (MSOM), open access on both Pegasus and Bigfoot will no longer be available to MSOM faculty, staff, and students effective October 31, 2017.
Read the full announcement for details about how this change will affect your CCS account.
CCS Advanced Computing invites you to connect with the Advanced Computing community on Slack: http://umadvancedcomputing.slack.com
The Advanced Computing community Slack channels provide a place for user discussions, information sharing, and informal announcements about CCS resources and developments. All users with an @miami.edu or @umiami.edu email address can create an account in the UM Advanced Computing Slack workspace.
The High-Performance Computing (HPC) core is focused on providing the latest in supercomputing technology and tools to the University of Miami (UM) research community. While this core consists of traditional operations staff such as systems and network administrators, it also encompasses other areas of expertise, including scientific programming, parallel code profiling, and optimization. The HPC core is responsible for the operations of all infrastructure maintained at CCS.
The HPC core’s services include batch and interactive compute, visualization, and secure data processing clusters; systems administration and consulting; storage implementation; archive storage; and systems hosting and maintenance. The HPC core has in-depth experience in parallelizing codes written in Fortran, C, Java, Perl, Python, and R, and has expertise in parallelizing code using both MPI and OpenMP.
Central to HPC is the Pegasus Supercomputer, a 350-node Lenovo cluster with over 300 applications and optimized libraries, including standard scientific libraries and numerous optimized libraries and algorithms tuned for the computing environment. The LSF (Load Sharing Facility) resource manager, which supports over 1,500 users and over 200,000 simultaneous job submissions, maximizes the efficiency of computational resources. By utilizing the full suite of LSF tools, we support both batch and interactive jobs while retaining workload management features.
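A batch job on an LSF-managed cluster is typically submitted as a shell script with `#BSUB` directives. The sketch below is a generic example, not an actual Pegasus configuration: the job name, queue, core count, and program name are placeholders, and the correct values for a given cluster come from its own documentation.

```shell
#!/bin/bash
#BSUB -J my_job              # job name (placeholder)
#BSUB -n 16                  # number of cores requested
#BSUB -q general             # queue name (placeholder; check cluster docs)
#BSUB -W 02:00               # wall-clock limit, hh:mm
#BSUB -o my_job.%J.out       # stdout file (%J expands to the LSF job ID)
#BSUB -e my_job.%J.err       # stderr file

# Launch an MPI program across the allocated cores
mpirun ./my_mpi_program      # program name is a placeholder
```

The script is submitted with `bsub < script.sh`, after which LSF queues the job and dispatches it when the requested resources become available.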