

University of Arizona High Performance Computing

UA High Performance Computing (HPC) is an interdisciplinary research center whose mission is to enable research and discoveries that advance science and technology. UA HPC deploys and operates advanced computing and data resources to support the computational and data-enabled research of students, faculty, and staff at the University of Arizona. It also provides consulting, technical documentation, and training for users of these resources.

This site is divided into sections describing the High Performance Computing (HPC) resources that are available, how to use them, and the rules for their use.


Maintenance downtime is scheduled from 7 AM to 6 PM on April 24.

Maintenance downtime is scheduled from 6 AM to 7 PM on February 20.

Maintenance downtime is scheduled from 8 AM to 6 PM on October 31.

Maintenance downtime is scheduled from 8 AM to 6 PM on July 25. No impact on jobs running on El Gato is expected.

Multiple Dates

Multiple network maintenance events for campus are summarized here.

Maintenance downtime is scheduled from 6 AM to 6 PM on April 25.

A debug queue has been added to support testing code and trying script options. It has higher priority but short time limits.
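A minimal sketch of a batch script targeting the debug queue, assuming the PBS-style scheduler in use on Ocelote at the time; the queue name comes from the announcement above, while the group name, resource request, and walltime below are illustrative assumptions, not documented limits.

```shell
#!/bin/bash
### Hypothetical PBS script for a short test run in the debug queue.
### Replace YOUR_GROUP and my_test_program with your own values.
#PBS -N debug_test
#PBS -q debug                  # higher-priority queue intended for short test jobs
#PBS -W group_list=YOUR_GROUP  # your allocation group (assumption: required here as elsewhere)
#PBS -l select=1:ncpus=1:mem=4gb
#PBS -l walltime=00:15:00      # keep requests short; the debug queue enforces tight limits

cd "$PBS_O_WORKDIR"
./my_test_program              # hypothetical executable being tested
```

Submit with `qsub script.pbs` and check its state with `qstat -u $USER`; because the queue is higher priority, short test jobs should start sooner than they would in the standard queue.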

46 nodes with Nvidia GPUs are available for standard and windfall use.

Maintenance downtime is scheduled for January 24 and 25.

The 2012 systems (cluster, smp, and htc) have been powered off as scheduled.


We are taking delivery of 46 new Ocelote nodes with Nvidia P100 GPUs; they will be available to campus researchers, probably in early February.

We offer a new web portal called Open OnDemand, which includes Jupyter notebooks and a nifty file browser.


Scheduled maintenance. Ocelote compute node, login node, fileserver, and bastion host patching and required reboots; El Gato will be inaccessible at times during this window due to the bastion updates. Legacy systems (ICE, SMP, HTC) will be rebooted after the fileserver updates as well.


Scheduled maintenance. Compute node upgrades. Complete.


Maintenance on the storage array affecting the Ocelote system, the Globus DTNs, the El Gato GPU cluster, and the legacy smp/cluster/htc systems. The maintenance is expected to run from 6 AM to 6 PM.
HPC/El Gato/Ocelote down from 6 AM on 22 February until at least 6 PM on 23 February for storage system expansion and filesystem maintenance.
Maintenance outage (Ocelote; sftp.hpc only intermittent outages), 6 AM to 6 PM.
Maintenance outage, 8 AM to 6 PM.
Grand Opening of Ocelote, 3 PM.


Storage maintenance; no interruptions expected.


Ocelote and the bastion host will be down for software maintenance.


Next scheduled maintenance window, for storage migration (to be confirmed).


Pilot users starting on the new cluster.

The Intel 2016 compiler is available on Ocelote via
module load intel/compiler
and on frost, sleet, and hail (login.hpc) via
module load intel/xe.2016.u2

Configuration preparation begins.
Pilot users are planned for mid-April


Testing ends; it is officially ours. Performance testing begins.

The new cluster is installed. Acceptance testing begins.

Our new cluster is delivered. Installation will take most of the week.