Available Resources

Resource Type: Compute
Resource Description: The Hive Cluster at Georgia Tech's Partnership for an Advanced Computing Environment (PACE) is an NSF-funded cluster through MRI award 1828187: "MRI: Acquisition of an HPC System for Data-Driven Discovery in Computational Astrophysics, Biology, Chemistry, and Materials Science." The cluster comprises 484 nodes featuring Intel Cascade Lake processors and NVIDIA Volta graphics processing cards, and its 11,616 cores deliver over 0.7 PF of performance based on the LINPACK Benchmark. A portion of the resource is available for ACCESS community use through the Hive Gateway (https://gateway.hive.pace.gatech.edu), powered by Apache Airavata. More detailed documentation is available at https://docs.pace.gatech.edu/hiveGateway/gettingStarted/
Recommended Use: As part of the ACCESS ecosystem, a portion of the system is available for community use through the Hive Gateway (https://gateway.hive.pace.gatech.edu). The Hive Gateway permits direct use of a variety of software for users with ACCESS logins; no allocation is necessary. Through its ACCESS participation, the Hive cluster is also available to support existing or new gateways using the Airavata platform. Interested gateways should contact pace-support@oit.gatech.edu.
Organization: Georgia Institute of Technology
Units: Core-hours
Description: As part of the ACCESS ecosystem, a portion of the system is available for community use through the Hive Gateway (https://gateway.hive.pace.gatech.edu). The Hive Gateway permits direct use of a variety of software for users with ACCESS logins; no allocation is necessary. Through its ACCESS participation, the Hive cluster is also available to support existing or new gateways using the Airavata platform. Interested gateways should contact pace-support@oit.gatech.edu.
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Ookami is a computer technology testbed supported by the National Science Foundation under grant OAC 1927880. It provides researchers with access to the A64FX processor developed by Riken and Fujitsu for the Japanese path to exascale computing and deployed in Fugaku, the fastest computer in the world until June 2022; Ookami is the first such system outside of Japan. By focusing on crucial architectural details, the ARM-based, multi-core, 512-bit SIMD-vector processor with ultrahigh-bandwidth memory promises to retain familiar and successful programming models while achieving very high performance for a wide range of applications. While being very power-efficient, it supports a wide range of data types and enables both HPC and big data applications. The Ookami HPE (formerly Cray) Apollo 80 system has 176 A64FX compute nodes, each with 32GB of high-bandwidth memory and a 512 GB SSD. This amounts to about 1.5M node hours per year. A high-performance Lustre filesystem provides about 0.8 PB of storage. To facilitate users exploring current computer technologies and contrasting performance and programmability with the A64FX, Ookami also includes: 1 node with dual-socket AMD Milan (64 cores) and 512 GB memory; 2 nodes with dual-socket ThunderX2 (64 cores each) with 256 GB memory; 1 node with dual-socket Intel Skylake (32 cores) with 192 GB memory and 2 NVIDIA V100 GPUs.
Recommended Use: Applications that fit within the available memory (27GB per node) and are well vectorized, or well auto-vectorized by the compiler. Note that a node is allocated exclusively to one user; node-sharing is not available.
Organization: Institute for Advanced Computational Science at Stony Brook University
Units: SUs
Description: 1 SU = 1 node hour. A node is allocated exclusively to a user; node-sharing is not available.
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Jetstream2 is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand as well as for gateway and other infrastructure projects. Jetstream2 is a hybrid-cloud platform that provides flexible, on-demand, programmable cyberinfrastructure tools ranging from interactive virtual machine services to a variety of infrastructure and orchestration services for research and education. The primary resource is a standard CPU resource consisting of AMD Milan 7713 CPUs with 128 cores per node and 512 GB RAM per node, connected by 100 Gbps Ethernet to the spine.
Recommended Use: For the researcher needing virtual machine services on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Additional use cases are for research-supporting infrastructure services that need to be "always on" as well as science gateway services and for education support, providing virtual machines for students.
Organization: Indiana University
Units: SUs
Description: 1 SU = 1 Jetstream2 vCPU-hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Jetstream2 is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand as well as for gateway and other infrastructure projects. This is for the GPU-specific Jetstream2 resources only. Jetstream2 GPU is a hybrid-cloud platform that provides flexible, on-demand, programmable cyberinfrastructure tools ranging from interactive virtual machine services to a variety of infrastructure and orchestration services for research and education. This particular portion of the resource is allocated separately from the primary resource and contains 360 NVIDIA A100 GPUs: 4 GPUs per node, 128 AMD Milan cores, and 512 GB RAM per node, connected by 100 Gbps Ethernet to the spine.
Recommended Use: For the researcher needing GPUs for virtual machine services on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Additional use cases are for research-supporting infrastructure services that need to be "always on" as well as science gateway services and for education support, providing virtual machines for students. The A100 GPUs on Jetstream2-GPU are well-suited for machine learning/deep learning projects and other codes optimized for GPU usage. They also may be utilized for some graphical/desktop applications with some effort.
Organization: Indiana University
Units: SUs
Description: Jetstream2 GPU VMs are charged 4x the number of vCPUs per hour because of the associated GPU capability. For example, the g3.large VM costs 64 SUs per hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/ (a worked SU example appears after the Jetstream2 Large Memory entry below).
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Jetstream2 is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand as well as for gateway and other infrastructure projects. This is for the Large Memory-specific Jetstream2 resources only. Jetstream2 LM is a hybrid-cloud platform that provides flexible, on-demand, programmable cyberinfrastructure tools ranging from interactive virtual machine services to a variety of infrastructure and orchestration services for research and education. This particular portion of the resource is allocated separately from the primary resource and contains 32 GPU-ready large-memory compute nodes, each with 1TB RAM and AMD Milan 7713 CPUs with 128 cores per node, connected by 100 Gbps Ethernet to the spine.
Recommended Use: For the researcher needing virtual machine services on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Additional use cases are for research-supporting infrastructure services that need to be "always on" as well as science gateway services and for education support, providing virtual machines for students. This particular resource is for workloads that require 512 GB or 1 TB of memory; requesters must demonstrate that need.
Organization: Indiana University
Units: SUs
Description: Jetstream2 Large Memory VMs are charged 2x the number of vCPUs per hour because of the additional memory capacity. For instance, the r3.xl flavor, which uses 128 vCPUs, costs 256 SUs per hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/ (a worked SU example follows this entry).
User Guide Link: User Guide
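The SU arithmetic in the Jetstream2 entries above is easy to check with a few lines of code. The sketch below is a plain Python illustration (no Jetstream2 API involved) of the stated multipliers: 1x vCPU-hours for standard CPU flavors, 2x for large-memory flavors, and 4x for GPU flavors. The g3.large (16 vCPUs) and r3.xl (128 vCPUs) sizes are implied by the worked figures above; any other flavor sizes would need to be taken from the Jetstream2 VM-size documentation.

def jetstream2_su_per_hour(vcpus, flavor_class):
    # Multipliers stated in the Jetstream2 entries above:
    #   standard CPU flavors -> 1 SU per vCPU-hour
    #   large-memory flavors -> 2 SUs per vCPU-hour
    #   GPU flavors          -> 4 SUs per vCPU-hour
    multiplier = {"cpu": 1, "large_memory": 2, "gpu": 4}[flavor_class]
    return vcpus * multiplier

# Worked examples taken from the entries above:
assert jetstream2_su_per_hour(16, "gpu") == 64             # g3.large
assert jetstream2_su_per_hour(128, "large_memory") == 256  # r3.xl

# Hypothetical burn-rate estimate: a 16-vCPU GPU VM left running for 30 days.
print(jetstream2_su_per_hour(16, "gpu") * 24 * 30, "SUs per month")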
Resource Type: Storage
Resource Description: Storage for use with Jetstream Computing
Recommended Use:
Organization: Indiana University
Units: GB
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Johns Hopkins University participates in the ACCESS Federation with its new NSF-funded flagship cluster "rockfish.jhu.edu" funded by NSF MRI award #1920103 that integrates high-performance and data-intensive computing while developing tools for generating, analyzing and disseminating data sets of ever-increasing size. The cluster will contain compute nodes optimized for different research projects and complex, optimized workflows. Rockfish (10) GPU nodes are intended for applications that need GPU processing, machine learning, and data analytics. Each GPU node consists of two Intel Xeon Gold Cascade Lake (6248R) processors with 192GB of memory, 3.0GHz base frequency, 48 cores per node, (4) NVIDIA A100 GPUs and 1TB NVMe for local storage. All compute nodes have HDR100 connectivity. In addition, the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources will be allocated via ACCESS.
Recommended Use:
Organization: Johns Hopkins University MARCC
Units: GPU Hours
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Johns Hopkins University participates in the ACCESS Federation with its new NSF-funded flagship cluster "rockfish.jhu.edu" funded by NSF MRI award #1920103 that integrates high-performance and data-intensive computing while developing tools for generating, analyzing and disseminating data sets of ever-increasing size. The cluster will contain compute nodes optimized for different research projects and complex, optimized workflows. Rockfish (10) Large Memory nodes are intended for applications that need more than 192GB of memory (up to 1.5TB), machine learning and data analytics. Each large memory node consists of two Intel Xeon Gold Cascade Lake (6248R) processors with 1,524GB of memory, 3.0GHz base frequency, 48 cores per node and 1TB NVMe for local storage. All compute nodes have HDR100 connectivity. In addition, the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources will be allocated via ACCESS.
Recommended Use: Jobs that need > 192GB of memory up to 1524GB
Organization: Johns Hopkins University MARCC
Units: Core-hours
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Johns Hopkins University will participate in the ACCESS Federation with its new NSF-funded flagship cluster "rockfish.jhu.edu" funded by NSF MRI award #1920103 that integrates high-performance and data-intensive computing while developing tools for generating, analyzing and disseminating data sets of ever-increasing size. The cluster will contain compute nodes optimized for different research projects and complex, optimized workflows. Rockfish (368) Regular Memory nodes are intended for general-purpose computing, machine learning and data analytics. Each regular compute node consists of two Intel Xeon Gold Cascade Lake (6248R) processors with 192GB of memory, 3.0GHz base frequency, 48 cores per node and 1TB NVMe for local storage. All compute nodes have HDR100 connectivity. In addition, the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources will be allocated via ACCESS.
Recommended Use:
Organization: Johns Hopkins University MARCC
Units: Core-hours
Description: 1 SU = 1 core-hour
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Five large memory compute nodes dedicated for XSEDE allocation. Each of these nodes has 40 cores (Broadwell-class Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz, with 4 sockets and 10 cores/socket), 3TB RAM, and 6TB SSD storage drives. The 5 dedicated XSEDE nodes have exclusive access to approximately 300 TB of network-attached disk storage. All these compute nodes are interconnected through a 100 Gigabit Ethernet (100GbE) backbone, and the cluster login and data transfer nodes are connected through a 100Gb uplink to Internet2 for external connections.
Recommended Use: Large memory nodes are increasingly needed by a wide range of XSEDE researchers, particularly researchers working with big data such as massive NLP data sets used in many research domains or the massive genomes required by computational biology and bioinformatics.
Organization: University of Kentucky
Units: Core-hours
Description: Each core-hour is 1 CPU core with a default memory of 75GB RAM/core (a sizing sketch follows this entry).
User Guide Link: User Guide
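Because these large-memory nodes are charged per core with a default of 75GB of RAM attached to each core, memory-bound jobs are typically sized by dividing the memory they need by 75GB. The sketch below is a hypothetical back-of-the-envelope estimate under that assumption, not an official charging formula; confirm the actual policy in the User Guide.

import math

# Hypothetical sizing rule: request enough cores to cover the memory at 75 GB per core.
def cores_for_memory(memory_gb, gb_per_core=75):
    return math.ceil(memory_gb / gb_per_core)

cores = cores_for_memory(1500)   # a 1.5 TB job -> 20 cores
hours = 12                       # hypothetical wall-clock time
print(cores, "cores requested;", cores * hours, "core-hours charged")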
Resource Type: Program
Resource Description: The pilot program for the ACCESS MATCHPlus short-term support partnership provides consulting support from an experienced mentor paired with a student-facilitator to help you address an immediate research need. These needs may include improvements like expanding your code functionality, transitioning from lab computers to HPC, or introducing new technologies into your workflow. MATCHPlus will be selecting ten projects to carry out between September 2022 and April 2023. Requests will be evaluated in the order received until January 31, 2023.
Recommended Use: MATCHPlus is an opportunity for researchers to get help with improvements like expanding your code functionality, transitioning from lab computers to HPC, or introducing new technologies into your workflow.
Organization: ACCESS Support
Units: [Yes = 1, No = 0]
Description: The pilot program for the ACCESS MATCHPlus short-term support partnership provides consulting support from an experienced mentor paired with a student-facilitator to help you address an immediate research need. These needs may include improvements like expanding your code functionality, transitioning from lab computers to HPC, or introducing new technologies into your workflow. MATCHPlus will be selecting ten projects to carry out between September 2022 and April 2023. Requests will be evaluated in the order received until January 31, 2023. For more information, visit the MATCHPlus page on the ACCESS Support portal: https://support.access-ci.org/matchplus
User Guide Link: User Guide
Resource Type: Program
Resource Description: The pilot program for the ACCESS MATCHPremier embedded support service provides consulting support from an experienced research support professional, identified by ACCESS Support to provide the skills needed for your project. Research projects are matched with consultants who have the expertise to help with managing massive allocations or using ACCESS resources in novel ways. We may be able to provide proposal development support for researchers that have significant or unusual research needs, including detailed input that will help create a well designed plan that maximizes scientific output. MATCHPremier consultants are selected from the Computational Science and Support Network (CSSN) — they may be facilitators, research software engineers, or other types of appropriate support personnel. MATCHPremier should be requested at least six months in advance of the anticipated need. MATCHPremier will be selecting ten projects between September 2022 and March 2023. Requests will be evaluated in the order received until January 31, 2023. 
Recommended Use: MATCHPremier identifies consultants who are available to be embedded on your research team for 12-18 months. Consultants have the expertise to help manage massive allocation plans or to use ACCESS resources in novel ways. Engagements should be requested at least six months in advance and are funded through your research award.
Organization: ACCESS Support
Units: [Yes = 1, No = 0]
Description: The pilot program for the ACCESS MATCHPremier embedded support service matches your project with an experienced research professional, identified by ACCESS Support to provide the skills needed to help your project, for example, with managing massive allocations or using ACCESS resources in novel ways. MATCHPremier consultants are selected from the Computational Science and Support Network (CSSN); they may be facilitators, research software engineers, or other types of appropriate support personnel. MATCHPremier should be requested at least six months in advance of the anticipated need. MATCHPremier will be selecting ten projects between September 2022 and March 2023. Requests will be evaluated in the order received until January 31, 2023. For more information, visit the MATCHPremier page on the ACCESS Support portal: https://support.access-ci.org/matchpremier
User Guide Link: User Guide
Resource Type: Compute
Resource Description: The NCSA Delta CPU allocation type provides access to the Delta CPU-only nodes. The Delta CPU resource comprises 124 dual-socket compute nodes for general purpose computation across a broad range of domains able to benefit from the scalar and multi-core performance provided by the CPUs, such as appropriately scaled weather and climate, hydrodynamics, astrophysics, and engineering modeling and simulation, and other domains using algorithms not yet adapted for the GPU. Each Delta CPU node is configured with 2 AMD EPYC 7763 (“Milan”) processors with 64-cores/socket (128-cores/node) at 2.55GHz and 256GB of DDR4-3200 RAM. An 800GB, NVMe solid-state disk is available for use as local scratch space during job execution. All Delta CPU compute nodes are interconnected to each other and to the Delta storage resource by a 100 Gb/sec HPE Slingshot network fabric.
Recommended Use: The Delta CPU resource is designed for general purpose computation across a broad range of domains able to benefit from the scalar and multi-core performance provided by the CPUs, such as appropriately scaled weather and climate, hydrodynamics, astrophysics, and engineering modeling and simulation, and other domains that have algorithms that have not yet moved to the GPU. Delta also supports domains that employ data analysis, data analytics, or other data-centric methods. Delta features a rich base of preinstalled applications, based on user demand. The system is optimized for capacity computing, with rapid turnaround for small to modest scale jobs, and features support for shared-node usage. Local SSD storage on each compute node benefits applications with random access data patterns or that require fast access to significant amounts of compute-node local scratch space. This allocation type is specific to and required for the CPU-only nodes on Delta. Request this allocation type if you have CPU-only parts to your workflow.
Organization: National Center for Supercomputing Applications
Units: Core-hours
Description: 1 SU = 1 Core-hour
User Guide Link: User Guide
Resource Type: Compute
Resource Description: The Delta GPU resource comprises 4 different node configurations intended to support accelerated computation across a broad range of domains such as soft-matter physics, molecular dynamics, replica-exchange molecular dynamics, machine learning, deep learning, natural language processing, textual analysis, visualization, ray tracing, and accelerated analysis of very large in-memory datasets. Delta is designed to support the transition of applications from CPU-only to using the GPU or hybrid CPU-GPU models. Delta GPU resource capacity is predominantly provided by 200 single-socket nodes, each configured with 1 AMD EPYC 7763 (“Milan”) processor with 64-cores/socket (64-cores/node) at 2.45GHz and 256GB of DDR4-3200 RAM. Half of these single-socket GPU nodes (100 nodes) are configured with 4 NVIDIA A100 GPUs with 40GB HBM2 RAM and NVLink (400 total A100 GPUs); the remaining half (100 nodes) are configured with 4 NVIDIA A40 GPUs with 48GB GDDR6 RAM and PCIe 4.0 (400 total A40 GPUs). Rounding out the GPU resource are 6 additional large-memory, “dense” GPU nodes, containing 8 GPUs each, in a dual-socket CPU configuration (128-cores per node) and 2TB of DDR4-3200 RAM but otherwise configured similarly to the single-socket GPU nodes. Within the “dense” GPU nodes, 5 nodes employ NVIDIA A100 GPUs (40 total A100 GPUs in “dense” configuration) and 1 node employs AMD MI100 GPUs (8 total MI100 GPUs) with 32GB HBM2 RAM. A 1.6TB, NVMe solid-state disk is available for use as local scratch space during job execution on each GPU node type. All Delta GPU compute nodes are interconnected to each other and to the Delta storage resource by a 100 Gb/sec HPE Slingshot network fabric. A Delta GPU allocation grants access to all types of Delta GPU nodes (both GPUs and CPUs on those nodes). The Delta CPU allocation is needed to utilize the CPU-only nodes.
Recommended Use: The Delta GPU resource is designed to support accelerated computation across a broad range of domains such as soft-matter physics, molecular dynamics, replica-exchange molecular dynamics, machine learning, deep learning, natural language processing, textual analysis, visualization, ray tracing, and accelerated analysis of very large in-memory datasets. Delta is designed to support the transition of applications from CPU-only to using the GPU or hybrid CPU-GPU models. Delta features a rich base of preinstalled applications, based on user demand. The system is optimized for capacity computing, with rapid turnaround for small to modest scale jobs, and features support for shared-node usage. Local SSD storage on each compute node benefits applications with random access data patterns or that require fast access to significant amounts of compute-node local scratch space. This allocation type covers the use of all Delta GPU node types and all resources (CPU and GPU) within those nodes. If you have CPU-only portions of your workflow, you should also request an allocation of time under NCSA Delta CPU, which provides access to CPU-only compute nodes.
Organization: National Center for Supercomputing Applications
Units: GPU Hours
Description: 1 SU = 1 GPU-Hour
User Guide Link: User Guide
Resource Type: Storage
Resource Description: The Delta Storage resource provides storage allocations on the scratch file system for projects using the Delta CPU and Delta GPU resources. It delivers 4PB of capacity to projects on Delta and will be augmented by a later expansion of 3PB of flash capacity for high-speed, data-intensive workloads. Projects are provided with a 1TB allocation on scratch by default. Please do not request storage allocations smaller than 1TB.
Recommended Use: The Delta Storage resource provides storage allocations on the scratch file system for allocated projects using the Delta CPU and Delta GPU resources. Allocations are available for the duration of the compute allocation period. Scratch is currently unpurged storage though that could change in the future with a minimum of 30 days notice.
Organization: National Center for Supercomputing Applications
Units: GB
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: A virtual HTCondor pool made up of resources from the Open Science Grid
Recommended Use: High-throughput jobs using a single core, or a small number of threads that fit on a single compute node (a minimal submit sketch follows this entry).
Organization: Open Science Grid
Units: SUs
Description:
User Guide Link: User Guide
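For orientation, here is what a minimal high-throughput submission to an HTCondor pool can look like using the HTCondor Python bindings. This is a generic sketch rather than OSG-specific guidance: the executable, resource requests, and job count are hypothetical placeholders, and access-point details such as project tags and file-transfer settings are omitted; see the User Guide for the actual submission workflow.

import htcondor  # HTCondor Python bindings (assumed available on the access point)

# Describe one single-core, modest-memory job, the kind of work this pool targets.
job = htcondor.Submit({
    "executable": "analyze.sh",        # hypothetical user script
    "arguments": "$(ProcId)",          # pass the job index as the argument
    "request_cpus": "1",
    "request_memory": "2GB",
    "request_disk": "4GB",
    "output": "out/job.$(ProcId).out",
    "error": "out/job.$(ProcId).err",
    "log": "out/jobs.log",
})

# Queue 100 independent copies of the job from the local scheduler.
schedd = htcondor.Schedd()
result = schedd.submit(job, count=100)
print("submitted cluster", result.cluster())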
Resource Type: Storage
Resource Description: The Open Storage Network (OSN) is an NSF-funded cloud storage resource, geographically distributed among several pods. OSN pods are currently hosted at SDSC, NCSA, MGHPCC, RENCI, and Johns Hopkins University. Each OSN pod currently hosts 1PB of storage, and is connected to R&E networks at 50 Gbps. OSN storage is allocated in buckets, and is accessed using S3 interfaces with tools like rclone, cyberduck, or the AWS cli.
Recommended Use: Cloud-style storage of project datasets for access using AWS S3-compatible tools. The minimum allocation is 10TB. Storage allocations up to 300TB may be requested via the XSEDE resource allocation process.
Organization: Open Storage Network
Units: TB
Description: Amount of storage capacity being requested, expressed in base 10 units. The minimum allocation is 10TB. Larger allocations of up to 300TB may be accommodated with further justification and approval by OSN. Storage access is via AWS S3-compatible tools such as rclone, Cyberduck, and the AWS command-line interface for S3 (an access sketch follows this entry). A web interface is provided at https://portal.osn.xsede.org for allocated project users to manage and browse their storage.
User Guide Link: User Guide
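Because OSN buckets are reached through standard S3 interfaces, any S3 client can be pointed at an OSN pod endpoint. The sketch below uses boto3 as one such client; the endpoint URL, bucket name, and credentials are placeholders (the real values are issued with the allocation), so treat this as a generic S3-access illustration rather than OSN-specific instructions.

import boto3

# Placeholders: the real pod endpoint, bucket name, and keys come with the allocation.
OSN_ENDPOINT = "https://example-pod.example.org"
BUCKET = "my-project-bucket"

s3 = boto3.client(
    "s3",
    endpoint_url=OSN_ENDPOINT,
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a dataset file into the allocated bucket.
s3.upload_file("results.tar.gz", BUCKET, "datasets/results.tar.gz")

# List what is stored under the datasets/ prefix.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="datasets/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])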
Resource Type: Compute
Resource Description: Anton is a special purpose supercomputer for biomolecular simulation designed and constructed by D. E. Shaw Research (DESRES). PSC's current system is known as Anton 2 and is a successor to the original Anton 1 machine hosted here. Anton 2, the next-generation Anton supercomputer, is a 128 node system, made available without cost by DESRES for non-commercial research use by US universities and other not-for-profit institutions, and is hosted by PSC with support from the NIH National Institute of General Medical Sciences. It replaced the original Anton 1 system in the fall of 2016. Anton was designed to dramatically increase the speed of molecular dynamics (MD) simulations compared with the previous state of the art, allowing biomedical researchers to understand the motions and interactions of proteins and other biologically important molecules over much longer time periods than was previously accessible to computational study. The MD research community is using the Anton 2 machine at PSC to investigate important biological phenomena that due to their intrinsically long time scales have been outside the reach of even the most powerful general-purpose scientific computers. Application areas include biomolecular energy transformation, ion channel selectivity and gating, drug interactions with proteins and nucleic acids, protein folding and protein-membrane signaling.
Recommended Use: As part of the ACCESS ecosystem, Anton 2 invites interested molecular dynamics (MD) investigators to request allocations on the special purpose supercomputer built specifically for MD. Requests for proposals will be accepted through June 29, 2023. For detailed application instructions please see (https://www.psc.edu/resources/anton-2/anton-rfp/). Questions regarding Anton 2 can be directed to: grants@psc.edu.
Organization: Pittsburgh Supercomputing Center
Units: MD Simulation Units
Description: Anton 2 is allocated annually via a Request for Proposal with proposals reviewed by a committee convened by the National Research Council at the National Academies. To qualify for an allocation on Anton 2, the principal investigator must be a faculty or staff member at a U.S. academic or non-profit research institution. Anton is allocated in MD simulation units.
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Bridges-2 Extreme Memory (EM) nodes provide 4TB of shared memory for genome sequence assembly, graph analytics, statistics, and other applications that need a large amount of memory and for which distributed-memory implementations are not available. Each Bridges-2 EM node consists of 4 Intel Xeon Platinum 8260M “Cascade Lake” CPUs, 4TB of DDR4-2933 RAM, 7.68TB NVMe SSD. They are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by HDR-200 InfiniBand, providing 200Gbps of bandwidth to read or write data from each EM node.
Recommended Use: Bridges-2 Extreme Memory (EM) nodes enable memory-intensive genome sequence assembly, graph analytics, statistics, in-memory databases, and other applications that need a large amount of memory and for which distributed-memory implementations are not available. This includes memory-intensive applications implemented in languages such as Java, R, and Python. Their x86 CPUs support an extremely broad range of applications, and approximately 42GB of RAM per core provides valuable support for applications where memory capacity is the primary requirement.
Organization: Pittsburgh Supercomputing Center
Units: Core-hours
Description: 1 SU = 1 core hour
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Bridges-2 Accelerated GPU (GPU) nodes are optimized for scalable artificial intelligence (AI; deep learning). Bridges-2 GPU nodes each contain 8 NVIDIA Tesla V100-32GB SXM2 GPUs, providing 40,960 CUDA cores and 5,120 tensor cores. In addition, each node holds 2 Intel Xeon Gold 6248 CPUs; 512GB of DDR4-2933 RAM; and 7.68TB NVMe SSD. They are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400Gbps of bandwidth to enhance scalability of deep learning training.
Recommended Use: Bridges-2 GPU nodes are optimized for scalable artificial intelligence (AI) including deep learning training, deep reinforcement learning, and generative techniques - as well as for accelerated simulation and modeling.
Organization: Pittsburgh Supercomputing Center
Units: GPU Hours
Description: 1 SU = 1 GPU hour
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Bridges-2 Regular Memory (RM) nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Each Bridges-2 RM node consists of two AMD EPYC “Rome” 7742 64-core CPUs, 256-512GB of RAM, and 3.84TB NVMe SSD. 488 Bridges-2 RM nodes have 256GB RAM, and 16 have 512GB RAM for more memory-intensive applications (see also Bridges-2 Extreme Memory nodes, each of which has 4TB of RAM). Bridges-2 RM nodes are connected to other Bridges-2 compute nodes and its Ocean parallel filesystem and archive by HDR-200 InfiniBand.
Recommended Use: Bridges-2 Regular Memory (RM) nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Their x86 CPUs support an extremely broad range of applications, and jobs can request anywhere from 1 core to all 64,512 cores of the Bridges-2 RM resource.
Organization: Pittsburgh Supercomputing Center
Units: Core-hours
Description: 1 SU = 1 core hour
User Guide Link: User Guide
Resource Type: Storage
Resource Description: The Bridges-2 Ocean data management system provides a unified, high-performance filesystem for active project data, archive, and resilience. Ocean consists of two tiers, disk and tape, transparently managed by HPE DMF as a single, highly usable namespace. Ocean's disk subsystem, for active project data, is a high-performance, internally resilient Lustre parallel filesystem with 15PB of usable capacity, configured to deliver up to 129GB/s and 142GB/s of read and write bandwidth, respectively. Ocean's tape subsystem, for archive and additional resilience, is a high-performance tape library with 7.2PB of uncompressed capacity (estimated 8.6PB compressed, with compression done transparently in hardware with no performance overhead), configured to deliver 50TB/hour.
Recommended Use: The Bridges-2 Ocean data management system provides high-performance and highly usable access to project and archive data. It is equally accessible to all Bridges-2 compute nodes, allowing seamless execution of data-intensive workflows involving components running on different compute resource types.
Organization: Pittsburgh Supercomputing Center
Units: GB
Description: Storage
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Purdue's Anvil cluster comprises 1000 nodes (each with 128 cores and 256 GB of memory for a peak performance of 5.3 PF), 32 large memory nodes (each with 128 cores and 1 TB of memory), and 16 GPU nodes (each with 128 cores, 256 GB of memory, and four NVIDIA A100 Tensor Core GPUs) providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications. All CPU cores are AMD's "Milan" architecture running at 2.0 GHz, and all nodes are interconnected using a 100 Gbps HDR InfiniBand fabric. Scratch storage consists of a 10+ PB parallel filesystem with over 3 PB of flash drives. Storage for active projects is provided by Purdue's Research Data Depot, and data archival is available via Purdue's Fortress tape archive. The operating system is CentOS 8, and the batch scheduling system is Slurm (a sample batch-script sketch follows this entry). Anvil's advanced computing capabilities are well suited to support a wide range of computational and data-intensive research spanning from traditional high-performance computing to modern artificial intelligence applications.
Recommended Use: Anvil's general purpose CPUs and 128 cores per node make it suitable for many types of CPU-based workloads.
Organization: Purdue University
Units: SUs
Description: 1 Service Unit = 1 Core Hour
User Guide Link: User Guide
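Since Anvil uses Slurm for batch scheduling, CPU jobs are submitted as batch scripts. The sketch below composes and submits a minimal script from Python; it is a generic Slurm illustration, not an Anvil-specific template: the account and partition names and the application command are placeholders, and real queue names and limits should come from the User Guide.

import subprocess
import textwrap

# Compose a minimal Slurm batch script. Account, partition, and the application
# command are hypothetical placeholders; consult the User Guide for real values.
batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=example-job
    #SBATCH --account=YOUR_ALLOCATION    # hypothetical allocation account
    #SBATCH --partition=YOUR_PARTITION   # hypothetical partition/queue name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=128        # one task per core on a 128-core Anvil node
    #SBATCH --time=02:00:00              # 1 node x 128 cores x 2 hours = 256 SUs

    srun ./my_application                # hypothetical executable
    """)

with open("job.sbatch", "w") as f:
    f.write(batch_script)

# Submit to the scheduler (this call only makes sense on an Anvil login node).
subprocess.run(["sbatch", "job.sbatch"], check=True)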
Resource Type: Compute
Resource Description: Purdue's Anvil GPU cluster comprises 16 GPU nodes (each with 128 cores, 256 GB of memory, and four NVIDIA A100 Tensor Core GPUs) providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications. All CPU cores are AMD's "Milan" architecture running at 2.0 GHz, and all nodes are interconnected using a 100 Gbps HDR InfiniBand fabric. Scratch storage consists of a 10+ PB parallel filesystem with over 3 PB of flash drives. Storage for active projects is provided by Purdue's Research Data Depot, and data archival is available via Purdue's Fortress tape archive. The operating system is CentOS 8, and the batch scheduling system is Slurm.
Recommended Use: Machine learning (ML), artificial intelligence (AI), other GPU-enabled workloads
Organization: Purdue University
Units: SUs
Description: 1 SU = 1 GPU hour
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Expanse will be a Dell integrated compute cluster, with AMD Rome processors, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. There are 728 compute nodes, each with two 64-core AMD EPYC 7742 (Rome) processors for a total of 93,184 cores. They will feature 1TB of NVMe storage and 256GB of DRAM per node. Full bisection bandwidth will be available at rack level (56 nodes) with HDR100 connectivity to each node. HDR200 switches are used at the rack level and there will be 3:1 oversubscription cross-rack. In addition, Expanse also has four 2 TB large memory nodes. The system will also feature 12PB of Lustre based performance storage (140GB/s aggregate), and 7PB of Ceph based object storage.
Recommended Use: Expanse is designed to provide cyberinfrastructure for the long tail of science, covering a diverse application base with complex workflows. It will feature a rich base of preinstalled applications including commercial software like Gaussian, Abaqus, QChem, MATLAB, and IDL. The system will be geared towards supporting capacity computing, optimized for quick turnaround on small/modest scale jobs. The local NVMes on each compute node will be beneficial to applications that exhibit random access data patterns or require fast access to significant amounts of compute node local scratch space. Expanse will support composable systems computing with dynamic capabilities enabled using tools such as Kubernetes and workflow software.
Organization: San Diego Supercomputer Center
Units: Core-hours
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: Expanse GPU will be a Dell integrated cluster with NVIDIA V100 GPUs connected via NVLINK, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. There are a total of 52 nodes with four V100 SMX2 GPUs per node (with NVLINK connectivity). There are two 20-core Xeon 6248 CPUs per node. Full bisection bandwidth will be available at rack level (52 CPU nodes, 4 GPU nodes) with HDR100 connectivity to each node. HDR200 switches are used at the rack level and there will be 3:1 oversubscription cross-rack. In addition, Expanse also has four 2 TB large memory nodes. The system will also feature 12PB of Lustre based performance storage (140GB/s aggregate), and 7PB of Ceph based object storage.
Recommended Use: GPUs are a specialized resource that performs well for certain classes of algorithms and applications. Recommend to be used for accelerating simulation codes optimized to take advantage of GPUs (using CUDA, OpenACC). There is a large and growing base of community codes that have been optimized for GPUs including those in molecular dynamics, and machine learning. GPU-enabled applications on Expanse will include: AMBER, Gromacs, BEAST, OpenMM, NAMD, TensorFlow, and PyTorch.
Organization: San Diego Supercomputer Center
Units: GPU Hours
Description:
User Guide Link: User Guide
Resource Type: Storage
Resource Description: Allocated storage for projects using Expanse Compute and Expanse GPU resources.
Recommended Use: Use for storage needs of allocated projects on Expanse Compute and Expanse GPU resources. Unpurged storage available for duration of allocation period.
Organization: San Diego Supercomputer Center
Units: GB
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: The Stampede2 Dell/Intel Knights Landing (KNL), Skylake (SKX) System provides the user community access to two Intel Xeon compute technologies. The system is configured with 4204 Dell KNL compute nodes, each with a stand-alone Intel Xeon Phi Knights Landing bootable processor. Each KNL node includes 68 cores, 16GB MCDRAM, 96GB DDR-4 memory and a 200GB SSD drive. Stampede2 also includes 1736 Intel Xeon Skylake (SKX) nodes and additional management nodes. Each SKX includes 48 cores, 192GB DDR-4 memory, and a 200GB SSD. Allocations awarded on Stampede2 may be used on either or both of the node types. Compute nodes have access to dedicated Lustre Parallel file systems totaling 28PB raw, provided by Cray. An Intel Omni-Path Architecture switch fabric connects the nodes and storage through a fat-tree topology with a point to point bandwidth of 100 Gb/s (unidirectional speed). 16 additional login and management servers complete the system. Stampede2 will deliver an estimated 18PF of peak performance. Please see the Stampede2 User Guide for detailed information on the system and how to most effectively use it. https://portal.xsede.org/tacc-stampede2
Recommended Use: Stampede2 is intended primarily for parallel applications scalable to tens of thousands of cores, as well as general purpose and throughput computing. Normal batch queues will enable users to run simulations up to 48 hours. Jobs requiring longer run times or more cores than allowed by the normal queues will be run in a special queue after approval by TACC staff. The normal, serial, and development queues are configured, as well as special-purpose queues.
Organization: Texas Advanced Computing Center
Units: Node Hours
Description: Stampede2 is allocated in Service Units (SUs). An SU is defined as 1 wall-clock node hour. Allocations awarded on Stampede2 may be used on either or both node types.
User Guide Link: User Guide
Resource Type: Storage
Resource Description:
Recommended Use: TACC's High Performance Computing systems are used primarily for scientific computing with users having access to WORK, SCRATCH, and HOME file systems that are limited in size. This is also true for TACC's visualization system, Longhorn. The Ranch system serves the HPC and Vis community systems by providing a massive, high-performance file system for archival purposes. Space on Ranch can also be requested independent of an accompanying allocation on an XSEDE compute or visualization resource. Please note that Ranch is an archival system. The Ranch system is not backed up or replicated. This means that Ranch contains a single copy, and only a single copy, of your files. While lost data due to tape damage is rare, please keep this fact in mind for your data management plans.
Organization: Texas Advanced Computing Center
Units: GB
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: ACES (Accelerating Computing for Emerging Sciences) is funded by NSF ACSS program (Award #2112356) and provides an innovative advanced computational prototype system. The ACES system has Intel Sapphire Rapids processors, Graphcore IPUs, NEC Vector Engines, Intel Max GPUs (formerly Ponte Vecchio), Intel FPGAs, Next Silicon co-processors, NVIDIA H100 GPUs, Intel Optane memory, and LIQID’s composable PCIe fabric.
Recommended Use: Workflows that can utilize novel accelerators and/or multiple GPUs.
Organization: Texas A&M University
Units: Core-hours
Description:
Resource Type: Compute
Resource Description: FASTER (Fostering Accelerated Scientific Transformations, Education and Research) is funded by the NSF MRI program (Award #2019129) and provides a composable high-performance data-analysis and computing instrument. The FASTER system has 180 compute nodes with 2 Intel 32-core Ice Lake processors and includes 260 NVIDIA GPUs (40 A100, 8 A10, 4 A30, 8 A40 and 200 T4 GPUs). Using LIQID’s composable technology, all 180 compute nodes have access to the pool of available GPUs, dramatically improving workflow scalability. FASTER has HDR InfiniBand interconnection and access to 5PB of shared usable high-performance storage running the Lustre filesystem. Thirty percent of FASTER’s computing resources will be allocated to researchers nationwide through the ACCESS AARC process.
Recommended Use: Workflows that can utilize multiple GPUs.
Organization: Texas A&M University
Units: SUs
Description: 1 SU = 1 node hour
Resource Type: Program
Resource Description: The Science Gateways Community Institute (SGCI) commits people's time through its services. Two SGCI services are offered to help clients build science gateways: longer-term engagements (up to a year, 25% FTE commitment) through SGCI's Extended Developer Support, and shorter-term consulting engagements for advice in specific areas (usability, cybersecurity, sustainability, and more) through SGCI's Incubator program. Further detail is available from the Services link at http://www.sciencegateways.org.
Recommended Use: SGCI services can be used to develop entirely new gateways or improve existing gateways. The gateways do not need to make use of ACCESS compute resources (though they can). They could be gateways to data collections, instruments, sensor streams, citizen science engagement, etc.
Organization: Science Gateways Community Institute
Units: [Yes = 1, No = 0]
Description:
User Guide Link: User Guide
Resource Type: Compute
Resource Description: The Delaware Advanced Research Workforce and Innovation Network (DARWIN) computing system at the University of Delaware is based on AMD Epyc™ 7502 processors with three main memory sizes to support different workload requirements (512 GiB, 1024 GiB, 2048 GiB). The cluster provides more than 1 PiB of usable, shared storage via a dedicated Lustre parallel file system to support large data science workloads. The Mellanox HDR 200Gbps InfiniBand network provides near-full bisection bandwidth at 100 Gbps per node.
Recommended Use: DARWIN's standard memory nodes provide powerful general-purpose computing, data analytics, and pre- and post-processing capabilities. The large and xlarge memory nodes enable memory-intensive applications and workflows that do not have distributed-memory implementations.
Organization: University of Delaware
Units: SUs
Description: 1 SU = 1 compute hour (any node type); standard = 1 core + 8 GiB RAM; large = 1 core + 16 GiB RAM; xlarge = 1 core + 32 GiB RAM; lg-swap is billed as the entire node at 64 SUs per hour = 64 cores + 1024 GiB RAM + 2.73 TiB Optane NVMe swap (a worked accounting sketch follows this entry)
User Guide Link: User Guide
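Because standard, large, and xlarge DARWIN jobs are billed per core while lg-swap jobs are billed as a whole node, it is easy to misestimate an SU request. The sketch below encodes the billing rules stated in the Description above; the example workloads at the end are hypothetical.

def darwin_cpu_sus(node_type, cores, hours):
    # Billing rules from the DARWIN description above:
    #   standard/large/xlarge: 1 SU per core-hour (RAM per core varies by node type)
    #   lg-swap: billed as the entire node at 64 SUs per hour, regardless of cores used
    if node_type == "lg-swap":
        return 64 * hours
    if node_type in ("standard", "large", "xlarge"):
        return cores * hours
    raise ValueError("unknown DARWIN node type: " + node_type)

# Hypothetical workloads:
print(darwin_cpu_sus("standard", cores=32, hours=10))  # 320 SUs
print(darwin_cpu_sus("lg-swap", cores=8, hours=10))    # 640 SUs (whole node billed)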
Resource Type: Compute
Resource Description: The Delaware Advanced Research Workforce and Innovation Network (DARWIN) computing system at the University of Delaware is based on AMD Epyc™ 7502 processors with three main memory sizes to support different workload requirements (512 GiB, 1024 GiB, 2048 GiB). The cluster provides more than 1 PiB of usable, shared storage via a dedicated Lustre parallel file system to support large data science workloads. The Mellanox HDR 200Gbps InfiniBand network provides near-full bisection bandwidth at 100 Gbps per node. Additionally, the system provides access to three GPU architectures to facilitate Artificial Intelligence (AI) research in the data sciences domains.
Recommended Use: DARWIN's GPU nodes provide resources for machine learning, artificial intelligence research, and visualization.
Organization: University of Delaware
Units: SUs
Description: 1 SU = 1 device hour; T4 or MI50 device hour = 1 GPU + 64 CPU cores + 512 GiB RAM; V100 device hour = 1 GPU + 12 CPU cores + 192 GiB RAM
User Guide Link: User Guide
Resource Type: Storage
Resource Description: Storage for DARWIN projects at the University of Delaware
Recommended Use: DARWIN's Lustre storage should be used for storing input files, supporting data files, work files, and output files associated with computational tasks run on the cluster.
Organization: University of Delaware
Units: GB
Description:
User Guide Link: User Guide