Job Queues

The CCS HPC resources are distributed among three logical partitions, designated the General, Project, and Dedicated partitions, to maximize throughput. Approximately 50% of the total CCS resources are available in the general partition, which provides a basic HPC capability to all university users on a First Come, First Served basis. The remaining 50% of the CCS resources are allocated to the project partition, which is primarily used for large parallel jobs. Approval by an allocations committee is required to use this partition; the committee meets a few times a year to review proposals requesting project allocations. Approved users are allocated CPU-hours and an account number used for accounting purposes.

A dedicated allocation, also known as the “Condo” model, is available for users requiring on-demand access to resources. In this model, users contribute funds toward compute and/or storage. These resources are operated by CCS as part of the general pool but made available to their owners on demand.

  • On-demand, no-wait access is available only via the “condo” model.
  • In both the general and project partitions, jobs are submitted to and executed by a batch system, which entails some wait time.
  • The General Partition is open to all users on a First Come, First Served basis. Several job queues are available (see below) for job submissions. On the general partition there is a limit of 2 simultaneous jobs per user for Ares users, and of 20 simultaneous jobs or 32 slots (cores) per user for Pegasus users.
  • IBM p-series machines are entirely allocated to the project partition, and all jobs require approval by the allocation committee. Approved users can submit jobs to the project queues, using their account number, until they exhaust their allocated time. Jobs can be submitted to any of the job queues in this partition and will run as resources become available.
  • The scheduler will dispatch jobs based on the requested wall-clock time and the job priority values in the tables below.
  • These queues/classes will be available as part of the October 2008 release and are prefixed according to the partition they belong to. Additional queues will be set up as necessary.
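The dispatch rule above can be sketched in a few lines: among submitted jobs whose requested wall-clock time fits their queue's limit, higher job-priority values run first. This is an illustrative toy model only, not CCS's actual scheduler code; the job names, numbers, and `dispatch_order` helper are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    queue_priority: int      # job priority value from the queue tables below
    hours_requested: float   # requested wall-clock time
    queue_time_limit: float  # wall-clock limit of the queue the job was sent to

def dispatch_order(jobs):
    """Jobs whose request fits the queue's wall-clock limit, highest priority first."""
    runnable = [j for j in jobs if j.hours_requested <= j.queue_time_limit]
    return sorted(runnable, key=lambda j: j.queue_priority, reverse=True)

jobs = [
    Job("sim-a", 100, 10.0, 168.0),  # e.g. a small-queue job
    Job("dbg-1", 200, 0.25, 0.5),    # e.g. a debug-queue job
    Job("big-x", 80, 30.0, 24.0),    # exceeds its queue's 24 hr limit; not runnable
]
print([j.name for j in dispatch_order(jobs)])  # ['dbg-1', 'sim-a']
```

The debug job runs first because of its higher priority value, and the over-limit request is held back entirely.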

Table 15: Summary of CCS Pegasus Cluster Job Queues

Partition   Queue/Class   Max CPUs   Job Time Limit   Job Priority
General     debug           8        0.5 hrs          200
General     small          64        168 hrs          100
General     medium        128        48 hrs            90
General     large         256        24 hrs            80
Dedicated   amd           400        168 hrs          100
Dedicated   xlarge        512        8 hrs             70
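Because the general-partition queues in Table 15 are ordered by increasing size, choosing a queue amounts to taking the first one whose CPU and wall-clock limits cover the request. The helper below is a hypothetical illustration of that lookup, not a CCS-provided tool:

```python
# General-partition rows of Table 15: (queue, max CPUs, time limit in hours)
PEGASUS_GENERAL = [
    ("debug",  8,   0.5),
    ("small",  64,  168),
    ("medium", 128, 48),
    ("large",  256, 24),
]

def pick_queue(cpus, hours):
    """Return the smallest general-partition queue that fits the request, or None."""
    for name, max_cpus, limit_hrs in PEGASUS_GENERAL:
        if cpus <= max_cpus and hours <= limit_hrs:
            return name
    return None  # no general queue fits; a project or dedicated allocation is needed

print(pick_queue(32, 2))     # small
print(pick_queue(200, 12))   # large
print(pick_queue(64, 200))   # None: 200 hrs exceeds every queue's limit
```

Note the non-monotonic time limits: a 64-CPU job may run for up to 168 hrs in `small`, but a 200-CPU job is capped at 24 hrs in `large`.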

Table 16: Summary of CCS Ares Cluster Job Queues

Partition   Queue/Class   Max CPUs         Job Time Limit   Job Priority
Project     psmall          96             24 hrs           100
Project     pmedium        256             12 hrs            90
Project     plarge         576             4 hrs             75
Project     pdebug          16             0.5 hrs          interactive
Project     plongrun        48             72 hrs            90
Project     popenmp         16             48 hrs           100
Dedicated   downer        CPUs purchased   unlimited        immediate availability