Overview

The University of Arizona's High Performance Computing (HPC) environment is a mix of HPC resources including shared-memory and distributed-memory systems and a high-capacity storage system. These systems are designed to grow with campus needs. The base systems are purchased and supported with central funding.

Benefits to Buy-In

Dedicated Research Compute.  Research groups can 'Buy-In' (add resources such as processors, memory, storage, etc.) to the base HPC systems as funding becomes available. Buy-In research groups have the highest priority on the resources they add to the system.

Quality Environment. The Buy-In option allows research groups to take advantage of central machine room space and the operation and administration of the computing systems while maintaining highest-priority access to the resources they purchase.

Flexible Capacity. An additional benefit to Buy-In research groups is that expansion resources are integrated into the base systems and can be used in conjunction with base resources to address computational projects that would be beyond the capacity of the group's resources if they were configured as an independent system.

Shared Resource. The University research computing community benefits from expansions to the base systems by having additional computing resources available. If the expansion resources are not fully utilized by the Buy-In group, they are made available to all users. The joint operation of base and Buy-In resources is designed to maintain economical and full use of all HPC resources.

Cost Competitiveness. Lower costs in grant proposals (i.e., hardware only, no operations costs) and evidence of campus cost-sharing give a positive advantage during funding agency review.

Standard Limitations.  All groups are provided with 24,000 cpu-wallclock-hours per month at no cost, plus unlimited hours of windfall.  The caveat to windfall priority is that windfall jobs can be preempted (killed and rescheduled for later restart) by standard and high-priority jobs when the resources (CPUs and memory) those jobs need are not otherwise available.  Preemption is rarely required, and many groups run a very large number of calculations in windfall after their standard allocation is consumed each month.
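
As a rough illustration of how the no-cost allocation is consumed, the sketch below charges each job the usual cores × wallclock-hours product. The 28-core node size is taken from the estimates below; everything else is plain arithmetic, not the scheduler's actual accounting.

  # Rough sketch of standard-allocation consumption (illustrative only).
  STANDARD_ALLOCATION = 24_000  # cpu-wallclock-hours per group per month

  def cpu_hours(cores, wallclock_hours):
      # cpu-wallclock-hours charged for one job
      return cores * wallclock_hours

  # Example: one job holding a full 28-core node for 24 hours.
  job = cpu_hours(cores=28, wallclock_hours=24)  # 672 cpu-hours
  print(STANDARD_ALLOCATION / job)               # ~35.7 such jobs per month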

Buy-In Details

Estimates.  Buy-in high-priority hours are assigned based on the group's purchase of nodes, which are added to the central system.  Current estimates for buy-in nodes are about $8,500 for a standard node (28-core, 196GB memory, 1TB internal hard drive) and $13,800 for a node that contains an NVIDIA P100 GPU accelerator.  Each buy-in node provides approximately 20,000 cpu-wallclock-hours per month (28 cores, 24 hours/day, 30 days/month).
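
The per-node figure is simple arithmetic on the node size quoted above:

  # Monthly cpu-wallclock-hours from one standard 28-core buy-in node.
  cores = 28
  hours_per_day = 24
  days_per_month = 30
  print(cores * hours_per_day * days_per_month)  # 20160, quoted as ~20,000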

Policies. Standard and high-priority jobs have essentially the same scheduling priority, and both will preempt windfall jobs when necessary.  High-priority jobs have an added advantage: buy-in nodes run only high-priority and windfall jobs, so high-priority work has access to nodes that standard jobs cannot use.  High-priority jobs run on both the buy-in nodes and the centrally funded nodes, so it is possible to consume the monthly high-priority allocation in less than a month.  This can be an advantage when a group faces a short-term deadline (submitting a publication, presenting at a conference, etc.).
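
To make the burn-rate point concrete, here is a hypothetical example; the assumption that the group keeps three nodes' worth of high-priority jobs running is invented for illustration, and the 20,000-hour figure is the per-node estimate above.

  # Hypothetical high-priority burn rate (usage level invented for illustration).
  monthly_allocation = 20_000   # cpu-hours from one buy-in node per month
  cores_per_node = 28

  # Suppose high-priority jobs occupy three nodes' worth of cores around the
  # clock, spilling onto centrally funded nodes beyond the single buy-in node.
  burn_per_day = 3 * cores_per_node * 24        # 2,016 cpu-hours per day
  print(monthly_allocation / burn_per_day)      # ~9.9 days to exhaustion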

The HPC 'Buy-In' program is not designed to replace or compete with the very large-scale resources at national NSF and DOE facilities. National resources are available at no financial cost to many researchers through competitive proposal processes.  The HPC 'Buy-In' program is designed to meet the needs of the many researchers with low- or medium-scale HPC requirements who want guaranteed access to compute resources and control of the scheduling priorities for their resources.

Detailed information and estimates for Buy-In costs can be obtained by contacting HPC Consulting:

  hpc-consult@list.arizona.edu



