Resources

CCS has a wide array of resources available to the University of Miami research community. Additional information regarding these resources is available within the pages of our Focus Areas, located in the right navigation pane.

High Performance Computing

Systems and Resources: Our computational resources encompass a wide range of processors and system architectures and support several paradigms, including massively parallel and embarrassingly parallel applications.

HPC Core Expertise: The HPC group has in-depth experience in a variety of scientific research areas and extensive experience parallelizing or distributing codes written in Fortran, C, Java, Perl, Python, and R. The HPC team contributes actively to open source software efforts, including R, Python, the Linux kernel, Torque, Maui, XFS, and GFS. The team also specializes in scheduling software (LSF) to optimize the efficiency of the HPC systems and to adapt codes to the CCS environment. The HPC core has deep expertise in parallelizing code with both MPI and OpenMP, depending on the programming paradigm, and CCS has contributed several parallelization efforts back to the community in projects such as R, WRF, and HYCOM.
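
As a minimal illustration of the MPI approach mentioned above, the sketch below distributes a simple sum across processes. This is a hedged example only: it assumes the mpi4py Python package, whereas the codes the HPC core actually parallelizes are typically written in Fortran or C.

    # Minimal MPI sketch (assumes mpi4py; run with, e.g., `mpirun -np 4 python sum_mpi.py`).
    # Each rank computes a partial sum over its slice of the work; rank 0 collects the total.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # index of this process
    size = comm.Get_size()   # total number of MPI processes

    N = 1_000_000
    chunk = N // size                                # contiguous slice per rank
    start = rank * chunk
    stop = N if rank == size - 1 else start + chunk

    partial = sum(range(start, stop))                 # local work on this rank
    total = comm.reduce(partial, op=MPI.SUM, root=0)  # combine partial sums on rank 0

    if rank == 0:
        print("total =", total)

An equivalent shared-memory version would replace the explicit rank bookkeeping with an OpenMP parallel loop in C or Fortran.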

The core specializes in implementing and porting open source codes to the CCS environment and often contributes changes back to the community. CCS currently supports more than 300 applications and optimized libraries in its computing environment. The core personnel are experts in implementing and designing solutions across three different variants of Unix. CCS also maintains industry research partnerships with IBM, Schrodinger, Open Eye, and DDN.

Software Engineering

The Software Engineering Resource provides expertise in the areas of systems design, development, implementation and integration. The group provides these services through two core teams: software engineering and project management. The software engineering team provides technical expertise and the project management team provides leadership and coordination for systems projects.

Bioinformatics 

The Computational Biology and Bioinformatics Program (CBBP) conducts research and offers services and training in the management and analysis of biological and medical/health record data. The team provides data analysis training and expertise at three levels (consulting, preliminary data generation, and fully collaborative), based on the time and complexity of the service requested. The analyses are undertaken by skilled analysts and overseen by experienced faculty. The group works with microarray and next-generation sequencing data. Analytical services include, but are not limited to:

• gene expression analysis for transcriptome profiling and/or gene regulatory network building (a minimal sketch follows this list),
• prognostic and/or diagnostic biomarker discovery,
• microRNA target analysis,
• copy number variant analysis; in this context the group is testing the few existing algorithms and developing new ones for accurate and unambiguous discovery of copy number variation in the human genome,
• genome or transcriptome assembly from next-generation sequencing data, and its visualization,
• SNP functionality analysis,
• other projects, including merging or correlating data from various data types for a holistic view of a particular pathway or disease process.
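
As a hedged sketch of the first item in this list, the fragment below performs a basic differential gene expression analysis: per-gene two-sample t-tests between case and control samples, followed by a Benjamini-Hochberg false discovery rate correction. The expression matrix is simulated and the NumPy/SciPy packages are assumptions; it does not reproduce any specific CBBP pipeline.

    # Sketch of differential expression analysis on a (genes x samples) matrix.
    # Data are simulated; a real analysis would start from normalized counts or intensities.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_genes, n_case, n_ctrl = 500, 6, 6
    expr = rng.normal(size=(n_genes, n_case + n_ctrl))  # placeholder expression values
    expr[:20, :n_case] += 2.0                            # spike in 20 "up-regulated" genes

    case, ctrl = expr[:, :n_case], expr[:, n_case:]
    t_stat, p_val = stats.ttest_ind(case, ctrl, axis=1)  # one test per gene

    # Benjamini-Hochberg adjusted p-values
    order = np.argsort(p_val)
    bh = p_val[order] * n_genes / np.arange(1, n_genes + 1)
    bh = np.minimum.accumulate(bh[::-1])[::-1]            # enforce monotonicity
    p_adj = np.empty(n_genes)
    p_adj[order] = np.clip(bh, 0.0, 1.0)

    print("genes significant at FDR < 0.05:", int((p_adj < 0.05).sum()))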

Big Data Analytics and Advanced Data Mining 

The group provides advanced data mining expertise and capabilities to further explore high-dimensional data. The following are examples of the expertise areas covered by our faculty.

Visualization 

The Visualization program conducts both theoretical and applied research in the general areas of Machine Vision and Learning, and specifically in (i) computer vision and image processing, (ii) machine learning, (iii) biomedical image analysis, and (iv) computational biology and neuroscience. The goal is to provide expertise in this area and to develop novel, fully automated methods that offer robustness, accuracy, and computational efficiency. The program works toward better solutions to existing open problems as well as exploring scientific fields where our research can provide useful interpretation, quantification, and modeling.
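
As a small, hedged illustration of the kind of automated image analysis described above (not a method developed by the program; the scikit-image package and its bundled test image are assumptions), the fragment below extracts an edge mask from a grayscale image:

    # Minimal image-analysis sketch: edge detection plus Otsu thresholding.
    # Uses scikit-image's bundled test image; a biomedical pipeline would use real data.
    from skimage import data, filters

    image = data.camera()                      # 8-bit grayscale sample image
    edges = filters.sobel(image)               # gradient-magnitude edge map
    threshold = filters.threshold_otsu(edges)  # automatic global threshold
    mask = edges > threshold                   # boolean mask of strong edges

    print(f"edge pixels: {mask.sum()} of {mask.size}")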

Cheminformatics and Computational Chemistry Tools

• SciTegic Pipeline Pilot – visual work-flow-based programming environment (data pipelining); broad cheminformatics, reporting/visualization, and modeling capabilities; integration of applications, databases, algorithms, and data

• Leadscope Enterprise – integrated cheminformatics data mining and visualization environment; unique chemical perception (~27K custom keys; user extensions); various algorithms, HTS analysis, SAR/R-group analysis, data modeling

• ChemAxon tools and applications – cheminformatics software applications and tools; wide variety of cheminformatics functionality

• Spotfire – highly interactive visualization and data analysis environment; various statistical algorithms with chemical structure visualization, HTS and SAR analysis

• Open Eye ROCS, FRED, OMEGA, EON, etc., implemented on Linux cluster – suite of powerful applications and toolkits for high-throughput 3D manipulation of chemical structures, modeling of shape, electrostatics, protein-ligand interactions, and various other aspects of structure- and ligand-based design; also includes powerful cheminformatics 2D structure tools

• Schrodinger Glide, Prime, Macromodel, and various other tools implemented on Linux cluster – powerful state-of-the-art docking, protein modeling, and structure prediction tools and visualization

• Desmond implemented on Linux cluster – powerful state-of-the-art explicit-solvent molecular dynamics

• TIP workgroup – powerful environment for global analysis of protein structures, binding sites, and binding interactions; implemented automated homology modeling, binding site prediction, and structure and site comparison for amplification of known protein structure space
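
The packages listed above are licensed tools whose APIs are not reproduced here; as a hedged illustration of the kind of 2D and 3D structure handling they provide, the sketch below uses the open-source RDKit toolkit (an assumption, not one of the tools above) to parse a structure, compute simple descriptors, and generate a 3D conformer.

    # Cheminformatics sketch using RDKit (not one of the packages listed above).
    from rdkit import Chem
    from rdkit.Chem import AllChem, Descriptors

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an example structure
    print("molecular weight:", round(Descriptors.MolWt(mol), 2))
    print("rings:", mol.GetRingInfo().NumRings())

    mol3d = Chem.AddHs(mol)                      # add explicit hydrogens before embedding
    AllChem.EmbedMolecule(mol3d, randomSeed=42)  # generate a 3D conformer
    AllChem.MMFFOptimizeMolecule(mol3d)          # quick force-field geometry cleanup
    print("3D atoms:", mol3d.GetNumAtoms())

Workflow tools such as Pipeline Pilot chain these kinds of operations into larger data pipelines.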