The University of Miami (UM) maintains one of the largest centralized academic cyberinfrastructures in the country.
The CCS HPC group has been in continuous operation for the past five years. Over that time the core has grown from no HPC cyberinfrastructure into a regional high-performance computing environment that currently supports more than 1,200 users, 220 TFlops of computational power, and more than 3 petabytes of disk storage. The center's latest system acquisition, an IBM iDataPlex system, was ranked number 389 on the November 2012 Top500 Supercomputer Sites list.
At present, CCS maintains several clusters and application servers:
Pegasus – Top500 system (ranked 389)
- 10,000 cores (IBM iDataPlex/Blade system)
- Diverse operating environments (Intel Xeon, Intel Xeon Phi, and AMD processors)
- 19 TB of RAM
- Dedicated graphical nodes (Pegasus-gui)
- Dedicated data transfer nodes with a direct connection to Internet2 (supporting Aspera, GridFTP, and SFTP)
- 250+ programs, compilers, and libraries
Jabberwocky – CentOS 6.2-based interactive visualization cluster
- 184 cores
- 1 TB RAM
- Graphical access from all nodes
- Firewalled access to all resources
- 1 PB+ of storage
Elysium – CentOS 6.2-based secure data-processing cluster (HIPAA/IRB compliant)
- 32 cores
- 128 GB RAM
- Separate VLAN
- Restricted access (MAC authentication and user ACLs enforced)
- Full auditing and attestation
- 500 TB of storage
DAVID (Distributed Access for Visualization and Interaction with Data) Cloud
- 32 cores
- 128 GB RAM
- CIFS/NFS/FTP/HTTP access
- 500 TB of storage
CCS offers an integrated storage environment for both structured (relational) and unstructured (flat-file) data. These systems are specifically tuned for CCS' data types and application requirements, whether serial access or highly parallelized. Each investigator or group has access to its own area and can present its data through a service-oriented architecture (SOA) model. Researchers can share their data via access control lists (ACLs), which ensure data integrity and security while allowing flexibility for collaboration.
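As a concrete illustration, the sketch below shows how a collaborator might be granted read-only access to a project area using standard POSIX ACL tooling; the path and username are hypothetical placeholders, not actual CCS accounts:

```python
# Sketch: granting one collaborator read-only access to a project area
# via POSIX ACLs (setfacl). Path and username are hypothetical examples.
import subprocess

def share_read_only(path: str, collaborator: str) -> None:
    """Grant read (and directory-traverse) access to a single user."""
    # -R applies recursively; 'rX' grants read, plus execute on directories only.
    subprocess.run(
        ["setfacl", "-R", "-m", f"u:{collaborator}:rX", path],
        check=True,
    )
    # Default ACL on the top-level directory so newly created entries
    # inherit the same permission.
    subprocess.run(
        ["setfacl", "-d", "-m", f"u:{collaborator}:rX", path],
        check=True,
    )

share_read_only("/nethome/projects/coral_genomics", "collaborator1")
```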
CCS offers structured data services through the most common relational database systems, including Oracle, MySQL, and PostgreSQL. Investigators and project teams can access their space through the SOA model and utilize their resources with the support of an integrated backend infrastructure.
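For example, a researcher's analysis code might reach a CCS-hosted PostgreSQL database along the following lines; the hostname, database name, credentials, and table are illustrative placeholders only:

```python
# Sketch: querying a hosted PostgreSQL database from analysis code.
# Connection parameters and schema below are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="pgsql.example.miami.edu",  # assumed service endpoint
    dbname="projectdb",
    user="researcher",
    password="********",
)
# The connection context manager commits on success, rolls back on error.
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT sample_id, reading FROM measurements WHERE reading > %s",
        (0.5,),
    )
    for sample_id, reading in cur.fetchall():
        print(sample_id, reading)
conn.close()
```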
The CCS flat-file storage environment is a multi-tier solution that combines high-speed storage with dense, high-capacity storage, all managed by IBM's General Parallel File System (GPFS). Our HPC/global tier (700 TB) is available on all compute nodes. This storage is designed for massively parallel workloads and has been benchmarked at 157,000 IOPS and over 20 GB/s of bandwidth.
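The sketch below illustrates the kind of massively parallel access this tier is built for: a collective MPI-IO write (via mpi4py) in which every rank writes its own region of a shared file concurrently. The file path and sizes are hypothetical, and the script would be launched under the cluster's MPI launcher:

```python
# Sketch: parallel write to a shared file on a parallel filesystem with
# MPI-IO (mpi4py). Run with e.g.:  mpirun -np 16 python parallel_write.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank produces a contiguous chunk and writes it at its own byte
# offset, so all ranks write concurrently without funneling through rank 0.
chunk = np.full(1_000_000, rank, dtype=np.float64)
fh = MPI.File.Open(comm, "/scratch/projects/demo/output.bin",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(rank * chunk.nbytes, chunk)  # collective write
fh.Close()
```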
Our standard tier of storage (2.8 PB) is designed for general-purpose data storage, analysis, and presentation of data to collaborators both inside and outside the University of Miami. All tier 2 storage is available from all systems, including our visualization cluster. Several data management tools are available for tier 2 storage, including public presentation, long-term archiving, deduplication, encryption, and hierarchical storage management (HSM).
Our archival tier of storage (2.5 PB) leverages several platforms to keep critical data safe. By combining tape and disk technologies, we reduce restore times significantly while still ensuring data integrity.
HPC CORE EXPERTISE
The group has in-depth expertise across a range of scientific research areas and extensive experience parallelizing and distributing codes written in Fortran, C, Java, Perl, Python, and R. The HPC team actively contributes to open-source software efforts, including R, Python, the Linux kernel, Torque, Maui, XFS, and GFS. The team also specializes in scheduling software (LSF) to optimize the efficiency of the HPC systems and to adapt codes to the CCS environment. The HPC core has considerable expertise in parallelizing code with both MPI and OpenMP, depending on the programming paradigm, and has contributed several parallelization efforts back to the community through projects such as R, WRF, and HYCOM.
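A minimal sketch of the MPI work-distribution pattern used when parallelizing a serial code is shown below, assuming mpi4py; the kernel and problem size are illustrative stand-ins, not code from any CCS project:

```python
# Sketch: static work decomposition across MPI ranks with a final
# reduction. The kernel below is a stand-in for a real computation.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def expensive_kernel(x):
    # Placeholder for the real per-element computation.
    return x * x

# Cyclic decomposition: rank i handles every size-th element.
inputs = range(1_000_000)
local_sum = sum(expensive_kernel(x) for x in inputs if x % size == rank)

# Combine the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```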
The core specializes in implementing and porting open-source codes to the CCS environment and often contributes changes back to the community. CCS currently supports more than 300 applications and optimized libraries in its computing environment. The core personnel are experts in designing and implementing solutions across multiple Unix variants. CCS also maintains industry research partnerships with IBM, Schrödinger, OpenEye, and DDN.
HPC users have a complete software suite at their fingertips, including standard scientific libraries and numerous optimized libraries and algorithms tuned for the computing environment. All programs and libraries are built in 64-bit mode to address large-memory problems, with compatible 32-bit versions also available. In addition, the LSF grid scheduling process maximizes the efficiency of the computational resources; increased efficiency translates into faster program execution, which gives researchers faster access to more resources. By utilizing the full suite of LSF tools, we support both batch and interactive workloads while retaining workload management features.
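For illustration, a batch job might be submitted to LSF as sketched below; the queue name, resource requests, and executable are hypothetical placeholders:

```python
# Sketch: submitting a batch job to LSF from Python. bsub reads the job
# script on standard input. Directives: -J job name, -q queue, -n cores,
# -W wall-clock limit (hh:mm), -o stdout file (%J expands to the job ID).
import subprocess

job_script = """#!/bin/bash
#BSUB -J demo_job
#BSUB -q general
#BSUB -n 16
#BSUB -W 2:00
#BSUB -o demo_%J.out
mpirun ./my_solver input.dat
"""

subprocess.run(["bsub"], input=job_script, text=True, check=True)
# Interactive work goes through the same scheduler, e.g.:
#   bsub -Is -q interactive bash
```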
The proposed system will be collocated at the Terremark NAP of the Americas (NOTA or NAP). The NOTA datacenter in Miami (Figure 2) is a 750,000-square-foot, purpose-built, Tier IV facility with N+2 redundant 14-megawatt power and cooling infrastructure. The equipment floors begin 32 feet above sea level; the roof slope, assisted by 18 rooftop drains, is designed to drain floodwater in excess of 100-year storm intensity; the structure is engineered to withstand a Category 5 hurricane, with approximately 19 million pounds of concrete roof ballast and 7-inch-thick steel-reinforced concrete exterior panels; and the building lies outside the FEMA 500-year designated flood zone. The NAP uses a dry-pipe fire suppression system to minimize the risk of damage from leaks.
The NAP of the Americas has a centrally located Command Center staffed by security personnel 24×7 and monitored by security sensors. To connect UM with the NOTA datacenter, the University of Miami has invested in a dense wavelength-division multiplexing (DWDM) optical ring serving all of its campuses. The CCS HPC resources occupy a discrete, secure wavelength on the ring, which provides a distinct 10-gigabit HPC network to all UM campuses and facilities. The CGC system will reside in the University of Miami DMZ, which will have a direct 100 Gb/s FLR/I2 connection by Fall 2014.
Given the University of Miami's past experience with several hurricanes and other natural disasters, we anticipate no service interruptions due to facilities issues. The NAP was designed and constructed for resilient operations; UM has weathered several hurricanes, power outages, and other severe weather crises without any loss of power or connectivity to the NAP. The NAP maintains its own generators with a flywheel power crossover system, which ensures that power is not interrupted when the switch is made to auxiliary power. The NAP maintains a two-week fuel supply (at 100% utilization) and is on the priority list for fuel replenishment due to its importance as a data-serving facility.
In addition to hosting the University of Miami's computing infrastructure, the NAP of the Americas houses assets of US SOUTHCOM, Amazon, eBay, and several telecommunications companies. The NAP in Miami handles 97% of the network traffic between the US and Central/South America. The NAP is also the local access point for FLR, which connects to I2 to provide full support for the I2 Innovation Platform. The NAP also provides TLD information to the DNS infrastructure and is the local peering point for all networks in the area.
The University of Miami has made NOTA its primary datacenter, occupying a significant footprint on the third floor. Currently, all UM-CCS resources (clusters, storage, and backup systems) run from this facility, which serves all four major UM campuses. The system described in this proposal will be housed and operated in the existing UM/CCS space at the NAP.
For more details about our HPC infrastructure, please visit http://www.ccs.miami.edu/hpc
For assistance with any of our HPC systems and resources, please email CCS HPC Support.