Available Resources

Resource Type: Compute
Resource Description:
Recommended Use: As part of the ACCESS ecosystem, a portion of the system is available for community use through the Hive Gateway (https://gateway.hive.pace.gatech.edu). The Hive Gateway permits direct use of a variety of software for users with ACCESS logins; no allocation is necessary. Through its ACCESS participation, the Hive cluster is also available to support existing or new gateways using the Airavata platform. Interested gateways should contact pace-support@oit.gatech.edu.
Organization: Georgia Institute of Technology
Units: Core-hours
Description:
User Guide Link: User Guide
Features Available:
  • Resource Type: Multicore Compute
  • Allocations: RP-managed process
  • Specialized Support: ACCESS OnDemand, Science Gateway support
Resource Type: Compute
Resource Description: Ookami is a computer technology testbed supported by the National Science Foundation under grant OAC 1927880. It provides researchers with access to the A64FX processor, developed by Riken and Fujitsu for the Japanese path to exascale computing and deployed in Fugaku, which was the fastest computer in the world until June 2022. Ookami is the first such deployment outside of Japan. By focusing on crucial architectural details, the ARM-based, multi-core, 512-bit SIMD-vector processor with ultrahigh-bandwidth memory promises to retain familiar and successful programming models while achieving very high performance for a wide range of applications. While being very power-efficient, it supports a wide range of data types and enables both HPC and big data applications. The Ookami HPE (formerly Cray) Apollo 80 system has 176 A64FX compute nodes, each with 32 GB of high-bandwidth memory and a 512 GB SSD, amounting to about 1.5M node-hours per year. A high-performance Lustre filesystem provides about 0.8 PB of storage. To help users explore current computer technologies and contrast their performance and programmability with the A64FX, Ookami also includes: 1 node with dual-socket AMD Milan (64 cores) and 512 GB memory; 2 nodes with dual-socket ThunderX2 (64 cores each) with 256 GB memory; and 1 node with dual-socket Intel Skylake (32 cores) with 192 GB memory and 2 NVIDIA V100 GPUs.
Recommended Use: Applications that fit within the available memory (27 GB per node) and are well vectorized, or are auto-vectorized well by the compiler. Note that a node is allocated exclusively to one user; node-sharing is not available.
Organization: Institute for Advanced Computational Science at Stony Brook University
Units: SUs
Description: 1 SU = 1 node-hour. A node is allocated exclusively to one user; node-sharing is not available.
User Guide Link: User Guide
Features Available:
  • Resource Type: Multicore Compute, Innovative / Novel Compute
  • Allocations: ACCESS Allocated
  • Specialized Support: Science Gateway support
Resource Type: Compute
Resource Description: Jetstream2 is a hybrid-cloud platform that provides flexible, on-demand, programmable cyberinfrastructure tools ranging from interactive virtual machine services to a variety of infrastructure and orchestration services for research and education. The primary resource is a standard CPU resource consisting of AMD Milan 7713 CPUs with 128 cores per node and 512 GB RAM per node, connected by 100 Gbps Ethernet to the spine.
Recommended Use: For the researcher needing virtual machine services on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Additional use cases are for research-supporting infrastructure services that need to be "always on" as well as science gateway services and for education support, providing virtual machines for students.
Organization: Indiana University
Units: SUs
Description: 1 SU = 1 Jetstream2 vCPU-hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS Pegasus, Science Gateway support
  • Resource Type: Cloud, Multicore Compute
  • Allocations: ACCESS Allocated
Resource Type: Compute
Resource Description: Jetstream2 GPU is a hybrid-cloud platform that provides flexible, on-demand, programmable cyberinfrastructure tools ranging from interactive virtual machine services to a variety of infrastructure and orchestration services for research and education. This particular portion of the resource is allocated separately from the primary resource and contains 360 NVIDIA A100 GPUs: 4 GPUs per node, with 128 AMD Milan cores and 512 GB RAM per node, connected by 100 Gbps Ethernet to the spine.
Recommended Use: For the researcher needing GPUs for virtual machine services on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Additional use cases are for research-supporting infrastructure services that need to be "always on" as well as science gateway services and for education support, providing virtual machines for students. The A100 GPUs on Jetstream2-GPU are well-suited for machine learning/deep learning projects and other codes optimized for GPU usage. They also may be utilized for some graphical/desktop applications with some effort.
Organization: Indiana University
Units: SUs
Description: Jetstream2 GPU VMs are charged 4x the number of vCPUs per hour because of the associated GPU capability. For example, the g3.large VM costs 64 SUs per hour. VM sizes and cost per hour are available at the Jetstream2 website: https://docs.jetstream-cloud.org/general/vmsizes/
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS Pegasus, Science Gateway support
  • Resource Type: Cloud, GPU Compute
  • Allocations: ACCESS Allocated
Resource Type: Compute
Resource Description: Jetstream2 LM is a hybrid-cloud platform that provides flexible, on-demand, programmable cyberinfrastructure tools ranging from interactive virtual machine services to a variety of infrastructure and orchestration services for research and education. This particular portion of the resource is allocated separately from the primary resource and contains 32 GPU-ready large-memory compute nodes, each with 1 TB RAM and AMD Milan 7713 CPUs (128 cores per node), connected by 100 Gbps Ethernet to the spine.
Recommended Use: For the researcher needing virtual machine services on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Additional use cases are for research-supporting infrastructure services that need to be "always on" as well as science gateway services and for education support, providing virtual machines for students. This particular resource is for those who require 512 GB or 1 TB of RAM for their workloads and must demonstrate that need.
Organization: Indiana University
Units: SUs
Description: Jetstream2 Large Memory VMs are charged 2x the number of vCPUs per hour because of the additional memory capacity. For instance, the r3.xl flavor, which uses 128 cores, costs 256 SUs per hour (see the SU example after this entry). VM sizes and cost per hour are available at the Jetstream2 website: https://docs.jetstream-cloud.org/general/vmsizes/
User Guide Link: User Guide
Features Available:
  • Resource Type: Cloud
  • Specialized Hardware: Large Memory Nodes
  • Specialized Support: ACCESS Pegasus
  • Allocations: ACCESS Allocated
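As a quick illustration of the Jetstream2 SU accounting described in the entries above (1 SU = 1 vCPU-hour, with a 2x multiplier for large-memory flavors and 4x for GPU flavors), here is a minimal sketch of the arithmetic. The g3.large vCPU count is inferred from the stated 64 SUs per hour; confirm all flavor details against the Jetstream2 VM sizes page.

```python
# Minimal sketch of Jetstream2 SU accounting as described above:
# 1 SU = 1 vCPU-hour; large-memory flavors are charged 2x and GPU
# flavors 4x their vCPU count per hour.  The vCPU counts below are
# taken or inferred from the entries above (g3.large is implied to
# have 16 vCPUs, since 16 * 4 = 64 SUs/hour); check the VM sizes
# page for authoritative values.

MULTIPLIER = {"standard": 1, "large_memory": 2, "gpu": 4}

def su_per_hour(vcpus: int, kind: str) -> int:
    """Hourly SU charge for one VM of the given class."""
    return vcpus * MULTIPLIER[kind]

def su_for_run(vcpus: int, kind: str, hours: float) -> float:
    """Total SUs consumed by a VM running for `hours` wall-clock hours."""
    return su_per_hour(vcpus, kind) * hours

if __name__ == "__main__":
    print(su_per_hour(16, "gpu"))            # g3.large example: 64 SUs/hour
    print(su_per_hour(128, "large_memory"))  # r3.xl example: 256 SUs/hour
    print(su_for_run(16, "gpu", 24))         # one day on g3.large: 1536 SUs
```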
Resource Type: Storage
Resource Description:
Recommended Use:
Organization: Indiana University
Units: GB
Description:
User Guide Link: User Guide
Features Available:
  • Resource Type: Cloud, Storage
  • Allocations: ACCESS Allocated
Resource Type: Compute
Resource Description: Jetstream is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand.
Recommended Use: For the researcher needing a handful of cores on demand as well as for software creators and researchers needing to create their own customized virtual machine environments. Jetstream is accessible ONLY via web interface (https://use.jetstream-cloud.org/) or via API using XSEDE credentials via Globus Auth.
Organization: Indiana University
Units: SUs
Description: 1 Service Unit = 1 Virtual CPU Hour
User Guide Link: User Guide
Features Available:
Resource Type: Compute
Resource Description:
Recommended Use:
Organization: Johns Hopkins University MARCC
Units: GPU Hours
Description:
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: ACCESS OnDemand
  • Resource Type: GPU Compute
Resource Type: Compute
Resource Description:
Recommended Use: Jobs that need more than 192 GB of memory, up to 1,524 GB.
Organization: Johns Hopkins University MARCC
Units: Core-hours
Description:
User Guide Link: User Guide
Features Available:
  • Resource Type: Multicore Compute
  • Allocations: ACCESS Allocated
  • Specialized Hardware: Large Memory Nodes
  • Specialized Support: ACCESS OnDemand
Resource Type: Compute
Resource Description:
Recommended Use:
Organization: Johns Hopkins University MARCC
Units: Core-hours
Description: 1 SU = 1 core-hour
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: ACCESS OnDemand
  • Resource Type: Multicore Compute
Resource Type: Compute
Resource Description: Five large-memory compute nodes dedicated to XSEDE allocation. Each of these nodes has 40 cores (Broadwell-class Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz, with 4 sockets and 10 cores/socket), 3TB RAM, and 6TB SSD storage drives. The 5 dedicated XSEDE nodes have exclusive access to approximately 300 TB of network-attached disk storage. All of these compute nodes are interconnected through a 100 Gigabit Ethernet (100GbE) backbone, and the cluster login and data transfer nodes are connected through a 100Gb uplink to Internet2 for external connections.
Recommended Use: Large memory nodes are increasingly needed by a wide range of XSEDE researchers, particularly researchers working with big data such as massive NLP data sets used in many research domains or the massive genomes required by computational biology and bioinformatics.
Organization: University of Kentucky
Units: Core-hours
Description: Each core-hour provides 1 CPU core with a default memory of 75 GB RAM per core.
User Guide Link: User Guide
Features Available:
  • Resource Type: Multicore Compute
  • Allocations: ACCESS Allocated
  • Specialized Hardware: Large Memory Nodes
  • Specialized Support: ACCESS OnDemand
Resource Type: Program
Resource Description:
Recommended Use: MATCHPlus is an opportunity for researchers to get help with improvements like expanding your code functionality, transitioning from lab computers to HPC, or introducing new technologies into your workflow.
Organization: ACCESS Support
Units: [Yes = 1, No = 0]
Description: The ACCESS MATCHPlus short-term support partnership provides consulting support from an experienced mentor paired with a student-facilitator to help you address an immediate research need. These needs may include improvements like expanding your code functionality, transitioning from lab computers to HPC, or introducing new technologies into your workflow. For more information visit the ACCESS Support portal: https://support.access-ci.org/matchplus
User Guide Link: User Guide
Features Available:
  • Resource Type: Service / Other
  • Allocations: ACCESS Allocated
Resource Type: Program
Resource Description:
Recommended Use: MATCHPremier identifies consultants who are available to be embedded on your research team for 12-18 months. Consultants have the expertise to help manage massive allocation plans or to use ACCESS resources in novel ways. Engagements should be requested at least six months in advance and are funded through your research award.
Organization: ACCESS Support
Units: [Yes = 1, No = 0]
Description: The pilot program for the ACCESS MATCHPremier embedded support service matches your project with an experienced research professional, identified by ACCESS Support to provide the skills needed to help your project, for example, with managing massive allocations or using ACCESS resources in novel ways. MATCHPremier consultants are selected from the Computational Science and Support Network (CSSN); they may be facilitators, research software engineers, or other types of appropriate support personnel. MATCHPremier should be requested at least six months in advance of the anticipated need. For more information visit the ACCESS Support portal: https://support.access-ci.org/matchpremier
User Guide Link: User Guide
Features Available:
  • Resource Type: Service / Other
  • Allocations: ACCESS Allocated
Resource Type: Compute
Resource Description: The Delta CPU resource comprises 124 dual-socket compute nodes for general purpose computation across a broad range of domains able to benefit from the scalar and multi-core performance provided by the CPUs, such as appropriately scaled weather and climate, hydrodynamics, astrophysics, and engineering modeling and simulation, and other domains using algorithms not yet adapted for the GPU. Each Delta CPU node is configured with 2 AMD EPYC 7763 (“Milan”) processors with 64-cores/socket (128-cores/node) at 2.55GHz and 256GB of DDR4-3200 RAM. An 800GB, NVMe solid-state disk is available for use as local scratch space during job execution. All Delta CPU compute nodes are interconnected to each other and to the Delta storage resource by a 100 Gb/sec HPE Slingshot network fabric.
Recommended Use: The Delta CPU resource is designed for general purpose computation across a broad range of domains able to benefit from the scalar and multi-core performance provided by the CPUs, such as appropriately scaled weather and climate, hydrodynamics, astrophysics, and engineering modeling and simulation, and other domains that have algorithms that have not yet moved to the GPU. Delta also supports domains that employ data analysis, data analytics, or other data-centric methods. Delta features a rich base of preinstalled applications, based on user demand. The system is optimized for capacity computing, with rapid turnaround for small to modest scale jobs, and features support for shared-node usage. Local SSD storage on each compute node benefits applications that exhibit random-access data patterns or require fast access to significant amounts of compute-node-local scratch space. This allocation type is specific to and required for the CPU-only nodes on Delta. Request this allocation type if you have CPU-only parts to your workflow.
Organization: National Center for Supercomputing Applications
Units: Core-hours
Description: 1 SU = 1 Core-hour
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: Advance reservations, Science Gateway support, ACCESS OnDemand
  • Resource Type: Multicore Compute
Resource Type: Compute
Resource Description: The Delta GPU resource comprises 4 different node configurations intended to support accelerated computation across a broad range of domains such as soft-matter physics, molecular dynamics, replica-exchange molecular dynamics, machine learning, deep learning, natural language processing, textual analysis, visualization, ray tracing, and accelerated analysis of very large in-memory datasets. Delta is designed to support the transition of applications from CPU-only to using the GPU or hybrid CPU-GPU models. Delta GPU resource capacity is predominately provided by 200 single-socket nodes, each configured with 1 AMD EPYC 7763 (“Milan”) processor with 64-cores/socket (64-cores/node) at 2.55GHz and 256GB of DDR4-3200 RAM. Half of these single-socket GPU nodes (100 nodes) are configured with 4 NVIDIA A100 GPUs with 40GB HBM2 RAM and NVLink (400 total A100 GPUs); the remaining half (100 nodes) are configured with 4 NVIDIA A40 GPUs with 48GB GDDR6 RAM and PCIe 4.0 (400 total A40 GPUs). Rounding out the GPU resource are 6 additional “dense” GPU nodes, containing 8 GPUs each, in a dual-socket CPU configuration (128-cores per node) with 2TB of DDR4-3200 RAM but otherwise configured similarly to the single-socket GPU nodes. Within the “dense” GPU nodes, 5 nodes employ NVIDIA A100 GPUs (40 total A100 GPUs in “dense” configuration) and 1 node employs AMD MI100 GPUs (8 total MI100 GPUs) with 32GB HBM2 RAM. A 1.6TB NVMe solid-state disk is available for use as local scratch space during job execution on each GPU node type. All Delta GPU compute nodes are interconnected to each other and to the Delta storage resource by a 100 Gb/sec HPE Slingshot network fabric.
Recommended Use: The Delta GPU resource is designed to support accelerated computation across a broad range of domains such as soft-matter physics, molecular dynamics, replica-exchange molecular dynamics, machine learning, deep learning, natural language processing, textual analysis, visualization, ray tracing, and accelerated analysis of very large in-memory datasets. Delta is designed to support the transition of applications from CPU-only to using the GPU or hybrid CPU-GPU models. Delta features a rich base of preinstalled applications, based on user demand. The system is optimized for capacity computing, with rapid turnaround for small to modest scale jobs, and features support for shared-node usage. Local SSD storage on each compute node benefits applications that exhibit random-access data patterns or require fast access to significant amounts of compute-node-local scratch space. This allocation type covers the use of all Delta GPU node types and all resources (CPU and GPU) within those nodes. If you have CPU-only portions of your workflow, you should also request an allocation of time under NCSA Delta CPU, which provides access to CPU-only compute nodes.
Organization: National Center for Supercomputing Applications
Units: GPU Hours
Description: 1 SU = 1 GPU-Hour
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: Advance reservations, Science Gateway support, ACCESS OnDemand
  • Specialized Hardware: Large Memory Nodes
  • Resource Type: Multicore Compute, GPU Compute
Resource Type: Storage
Resource Description: The Delta Storage resource provides storage allocations for projects using the Delta CPU and Delta GPU resources. It delivers 7PB of capacity to projects on Delta and will be augmented by a later expansion of 3PB of flash capacity for high-speed, data-intensive workloads.
Recommended Use: The Delta Storage resource provides storage allocations on the scratch file system for allocated projects using the Delta CPU and Delta GPU resources. Allocations are available for the duration of the compute allocation period. Scratch is unpurged storage.
Organization: National Center for Supercomputing Applications
Units: GB
Description:
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Resource Type: Storage
Resource Type: Storage
Resource Description: The Open Storage Network (OSN) is an NSF-funded cloud storage resource, geographically distributed among several pods. OSN pods are currently hosted at SDSC, NCSA, MGHPCC, RENCI, and Johns Hopkins University. Each OSN pod currently hosts 1PB of storage and is connected to R&E networks at 50 Gbps. OSN storage is allocated in buckets and is accessed using S3 interfaces with tools like rclone, Cyberduck, or the AWS CLI.
Recommended Use: Cloud-style storage of project datasets for access using AWS S3-compatible tools. The minimum allocation is 10TB. Storage allocations up to 300TB may be requested via the XSEDE resource allocation process.
Organization: Open Storage Network
Units: TB
Description: Amount of storage capacity being requested, expressed in base 10 units. The minimum allocation is 10TB. Larger allocations of up to 300TB may be accommodated with further justification and approval by OSN. Storage access is via AWS S3-compatible tools such as rclone, Cyberduck, and the AWS command-line interface for S3 (see the access sketch after this entry). A web interface is provided at https://portal.osn.xsede.org for allocated project users to manage and browse their storage.
User Guide Link: User Guide
Features Available:
  • Resource Type: Storage
  • Allocations: ACCESS Allocated
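Because OSN buckets are reached through standard S3 interfaces, any S3 client works once a project has its pod endpoint, bucket name, and keys. Below is a minimal sketch using boto3; the endpoint, bucket, and credential values are hypothetical placeholders, to be replaced with the values issued for your allocation.

```python
# Minimal sketch of listing and fetching objects from an OSN bucket
# with boto3.  The endpoint URL, bucket name, and credentials below
# are placeholders; substitute the values provided for your OSN
# allocation (managed via https://portal.osn.xsede.org).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://EXAMPLE-OSN-POD.example.org",  # hypothetical pod endpoint
    aws_access_key_id="YOUR_OSN_ACCESS_KEY",
    aws_secret_access_key="YOUR_OSN_SECRET_KEY",
)

# List the first page of objects in the allocated bucket.
resp = s3.list_objects_v2(Bucket="example-project-bucket", MaxKeys=100)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download a single object to local disk.
s3.download_file("example-project-bucket", "data/sample.nc", "sample.nc")
```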
Resource Type: Compute
Resource Description: A virtual HTCondor pool made up of resources from the Open Science Grid
Recommended Use: High-throughput jobs using a single core, or a small number of threads that fit on a single compute node (see the submission sketch after this entry).
Organization: Open Science Grid
Units: SUs
Description: 1 SU = 1 Core-hour
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: Preemption, Science Gateway support
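Jobs on this resource are described and queued through HTCondor. The following is a minimal, generic sketch using the HTCondor Python bindings to submit a single-core job from an access point; the executable, file names, and resource requests are illustrative placeholders rather than OSPool-specific settings.

```python
# Minimal sketch of submitting a single-core high-throughput job with
# the HTCondor Python bindings (the "htcondor" package).  Executable,
# arguments, and resource requests are illustrative placeholders.
import htcondor

job = htcondor.Submit({
    "executable": "/bin/echo",          # replace with your application
    "arguments": "hello from the pool",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
    "request_cpus": "1",                # single core, per the recommended use
    "request_memory": "2GB",
    "request_disk": "1GB",
})

schedd = htcondor.Schedd()              # scheduler on the local access point
result = schedd.submit(job, count=1)    # queue one instance of the job
print("submitted cluster", result.cluster())
```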
Resource Type: Compute
Resource Description: Anton is a special purpose supercomputer for biomolecular simulation designed and constructed by D. E. Shaw Research (DESRES). PSC's current system is known as Anton 2 and is a successor to the original Anton 1 machine hosted here. Anton 2, the next-generation Anton supercomputer, is a 128 node system, made available without cost by DESRES for non-commercial research use by US universities and other not-for-profit institutions, and is hosted by PSC with support from the NIH National Institute of General Medical Sciences. It replaced the original Anton 1 system in the fall of 2016. Anton was designed to dramatically increase the speed of molecular dynamics (MD) simulations compared with the previous state of the art, allowing biomedical researchers to understand the motions and interactions of proteins and other biologically important molecules over much longer time periods than was previously accessible to computational study. The MD research community is using the Anton 2 machine at PSC to investigate important biological phenomena that due to their intrinsically long time scales have been outside the reach of even the most powerful general-purpose scientific computers. Application areas include biomolecular energy transformation, ion channel selectivity and gating, drug interactions with proteins and nucleic acids, protein folding and protein-membrane signaling.
Recommended Use: As part of the ACCESS ecosystem, Anton 2 invites interested molecular dynamics (MD) investigators to request allocations on the special purpose supercomputer built specifically for MD. Requests for proposals are accepted each summer. The next request period will be in 2024. For detailed application instructions please see (https://www.psc.edu/resources/anton-2/anton-rfp/). Questions regarding Anton 2 can be directed to: grants@psc.edu.
Organization: Pittsburgh Supercomputing Center
Units: MD Simulation Units
Description: Anton 2 is allocated annually via a Request for Proposal with proposals reviewed by a committee convened by the National Research Council at the National Academies. To qualify for an allocation on Anton 2, the principal investigator must be a faculty or staff member at a U.S. academic or non-profit research institution. Anton is allocated in MD simulation units.
User Guide Link: User Guide
Features Available:
  • Resource Type: Innovative / Novel Compute
  • Allocations: RP-managed process
Resource Type: Storage
Resource Description:
Recommended Use: For researchers and data generators in the field of neuroscience, BIL provides a centralized hub for depositing, accessing, and sharing large brain image microscopy datasets to foster data-driven collaboration across the broader research community. AI and machine learning developers can harness the rich datasets to build and optimize models to advance neuroscience and computational image analysis domains.
Organization: Pittsburgh Supercomputing Center
Units: [Yes = 1, No = 0]
Description: BIL provides rich neuroscience image data collected in alignment with FAIR (Findable, Accessible, Interoperable, Reusable) principles. Experimentalists can deposit BRAIN data originating from microscopy experiments for indexing and distribution to others. Researchers interested in using BIL data for re-analysis or other purposes (including algorithm development) can request in-place access to datasets coupled with computing infrastructure provided by PSC Bridges-2 compute, GPU, and storage resources. Interested researchers are requested to contact PSC via email at bil-support@psc.edu.
Features Available:
  • Allocations: RP-managed process
  • Resource Type: Innovative / Novel Compute
Resource Type: Compute
Resource Description: Bridges-2 combines high-performance computing (HPC), high performance artificial intelligence (HPAI), and large-scale data management to support simulation and modeling, data analytics, community data, and complex workflows. Bridges-2 Extreme Memory (EM) nodes enable memory-intensive genome sequence assembly, graph analytics, in-memory databases, statistics, and other applications that need a large amount of memory and for which distributed-memory implementations are not available. Bridges-2 Extreme Memory (EM) nodes each consist of 4 Intel Xeon Platinum 8260M “Cascade Lake” CPUs, 4TB of DDR4-2933 RAM, 7.68TB NVMe SSD. They are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400Gbps of bandwidth to read or write data from each EM node.
Recommended Use: Bridges-2 Extreme Memory (EM) nodes enable memory-intensive genome sequence assembly, graph analytics, statistics, in-memory databases, and other applications that need a large amount of memory and for which distributed-memory implementations are not available. This includes memory-intensive applications implemented in languages such as Java, R, and Python. Their x86 CPUs support an extremely broad range of applications, and approximately 42GB of RAM per core provides valuable support for applications where memory capacity is the primary requirement.
Organization: Pittsburgh Supercomputing Center
Units: Core-hours
Description: 1 SU = 1 core hour
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS Pegasus, Advance reservations, Science Gateway support, ACCESS OnDemand
  • Allocations: ACCESS Allocated
  • Specialized Hardware: Large Memory Nodes
  • Resource Type: Multicore Compute
Resource Type: Compute
Resource Description: Bridges-2 combines high-performance computing (HPC), high performance artificial intelligence (HPAI), and large-scale data management to support simulation and modeling, data analytics, community data, and complex workflows. Bridges-2 Accelerated GPU (GPU) nodes are optimized for scalable artificial intelligence (AI; deep learning). They are also available for accelerated simulation and modeling applications. Bridges-2 GPU nodes each contain 8 NVIDIA Tesla V100-32GB SXM2 GPUs, providing 40,960 CUDA cores and 5,120 tensor cores. In addition, each node holds 2 Intel Xeon Gold 6248 CPUs; 512GB of DDR4-2933 RAM; and 7.68TB NVMe SSD. They are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400Gbps of bandwidth to enhance scalability of deep learning training.
Recommended Use: Bridges-2 GPU nodes are optimized for scalable artificial intelligence (AI) including deep learning training, deep reinforcement learning, and generative techniques - as well as for accelerated simulation and modeling.
Organization: Pittsburgh Supercomputing Center
Units: GPU Hours
Description: 1 SU = 1 GPU hour
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: ACCESS Pegasus, ACCESS OnDemand, Advance reservations, Science Gateway support
  • Resource Type: GPU Compute
Resource Type: Compute
Resource Description: Bridges-2 combines high-performance computing (HPC), high performance artificial intelligence (HPAI), and large-scale data management to support simulation and modeling, data analytics, community data, and complex workflows. Bridges-2 Regular Memory (RM) nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Each Bridges RM node consists of two AMD EPYC “Rome” 7742 64-core CPUs, 256-512GB of RAM, and 3.84TB NVMe SSD. 488 Bridges-2 RM nodes have 256GB RAM, and 16 have 512GB RAM for more memory-intensive applications (see also Bridges-2 Extreme Memory nodes, each of which has 4TB of RAM). Bridges-2 RM nodes are connected to other Bridges-2 compute nodes and its Ocean parallel filesystem and archive by HDR-200 InfiniBand.
Recommended Use: Bridges-2 Regular Memory (RM) nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Their x86 CPUs support an extremely broad range of applications, and jobs can request anywhere from 1 core to all 64,512 cores of the Bridges-2 RM resource.
Organization: Pittsburgh Supercomputing Center
Units: Core-hours
Description: 1 SU = 1 core hour
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS OnDemand, ACCESS Pegasus, Advance reservations, Science Gateway support
  • Allocations: ACCESS Allocated
  • Resource Type: Multicore Compute
Resource Type: Storage
Resource Description: The Bridges-2 Ocean data management system provides a unified, high-performance filesystem for active project data, archive, and resilience. Ocean consists of two tiers, disk and tape, transparently managed by HPE DMF as a single, highly usable namespace. Ocean's disk subsystem, for active project data, is a high-performance, internally resilient Lustre parallel filesystem with 15PB of usable capacity, configured to deliver up to 129GB/s and 142GB/s of read and write bandwidth, respectively. Ocean's tape subsystem, for archive and additional resilience, is a high-performance tape library with 7.2PB of uncompressed capacity (estimated 8.6PB compressed, with compression done transparently in hardware with no performance overhead), configured to deliver 50TB/hour.
Recommended Use: The Bridges-2 Ocean data management system provides high-performance and highly usable access to project and archive data. It is equally accessible to all Bridges-2 compute nodes, allowing seamless execution of data-intensive workflows involving components running on different compute resource types.
Organization: Pittsburgh Supercomputing Center
Units: GB
Description: Storage
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Resource Type: Storage
Resource Type: Compute
Resource Description:
Recommended Use: Neocortex, a system that captures groundbreaking new hardware technologies, is designed to accelerate Deep Learning (DL) and High Performance Computing (HPC) research in pursuit of science, discovery, and societal good. Currently recommended DL projects focus on models such as BERT, GPT-J, and Transformer, or combine supported TensorFlow or PyTorch layers. DL codes can also be developed “from scratch” using the Cerebras Software Development Toolkit (SDK). The SDK can be used to develop HPC codes, such as structured grid based PDE and ODE solvers and particle methods with regular communication. Neocortex is currently in the testbed phase and accepting researchers through an RP managed allocation process. Interested researchers are requested to contact PSC via email at neocortex@psc.edu
Organization: Pittsburgh Supercomputing Center
Units: SUs
Description: 1 SU = 1 CS-2 Hour
User Guide Link: User Guide
Features Available:
  • Resource Type: Innovative / Novel Compute
  • Allocations: RP-managed process
Resource Type: Compute
Resource Description: Purdue's Anvil cluster was built in partnership with Dell and AMD and consists of 1,000 nodes with two 64-core AMD Epyc "Milan" processors each, and will deliver over 1 billion CPU core hours to XSEDE each year, with a peak performance of 5.3 petaflops. Anvil's nodes are interconnected with 100 Gbps Mellanox HDR InfiniBand.
Recommended Use: Anvil's general purpose CPUs and 128 cores per node make it suitable for many types of CPU-based workloads.
Organization: Purdue University
Units: SUs
Description: 1 Service Unit = 1 Core Hour
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS Pegasus, ACCESS OnDemand, Science Gateway support
  • Allocations: ACCESS Allocated
  • Specialized Hardware: Large Memory Nodes
  • Resource Type: Multicore Compute
Resource Type: Compute
Resource Description: 16 nodes each with four NVIDIA A100 Tensor Core GPUs providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications.
Recommended Use: Machine learning (ML), artificial intelligence (AI), other GPU-enabled workloads
Organization: Purdue University
Units: SUs
Description: 1 SU = 1 GPU hour
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS Pegasus, ACCESS OnDemand, Science Gateway support
  • Allocations: ACCESS Allocated
  • Resource Type: GPU Compute
Resource Type: Program
Resource Description: The Science Gateways Community Institute (SGCI) and the follow-on SGX3 NSF Center of Excellence projects support science gateways through both complementary and paid services. Complementary services include assistance with proposal development, a technical consultancy to help science gateway providers define their system architectures and choose technologies, a usability/user experience consultancy to help design effective user interfaces, and sustainability training programs to help science gateway providers with long term operations planning. For more about science gateways and additional available services, please see https://sciencegateways.org/our-services/consulting-services. An SGCI team member will contact allocated projects to help review and select specific services.
Recommended Use: SGCI services can be used to develop entirely new gateways or improve existing gateways. The gateways do not need to make use of ACCESS compute resources (though they can). They could be gateways to data collections, instruments, sensor streams, citizen science engagement, etc.
Organization: Science Gateways Community Institute
Units: [Yes = 1, No = 0]
Description:
User Guide Link: User Guide
Features Available:
  • Resource Type: Service / Other
  • Allocations: ACCESS Allocated
  • Specialized Support: Science Gateway support
Resource Type: Compute
Resource Description: Expanse will be a Dell integrated compute cluster, with AMD Rome processors, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. The compute node section of Expanse will have a peak performance of 3.373 PF. Full bisection bandwidth will be available at rack level (56 compute nodes) with HDR100 connectivity to each node. HDR200 switches are used at the rack level and there will be 3:1 oversubscription cross-rack. Compute nodes will feature 1TB of NVMe storage and 256GB of DRAM per node. The system will also feature 7PB of Lustre based performance storage (140GB/s aggregate), and 5PB of Ceph based object storage.
Recommended Use: Expanse is designed to provide cyberinfrastructure for the long tail of science, covering a diverse application base with complex workflows. It will feature a rich base of preinstalled applications including commercial software like Gaussian, Abaqus, QChem, MATLAB, and IDL. The system will be geared towards supporting capacity computing, optimized for quick turnaround on small/modest scale jobs. The local NVMe storage on each compute node will be beneficial to applications that exhibit random access data patterns or require fast access to significant amounts of compute-node-local scratch space. Expanse will support composable systems computing with dynamic capabilities enabled using tools such as Kubernetes and workflow software.
Organization: San Diego Supercomputer Center
Units: Core-hours
Description:
User Guide Link: User Guide
Features Available:
  • Specialized Support: ACCESS Pegasus, Advance reservations, ACCESS OnDemand, Science Gateway support
  • Allocations: ACCESS Allocated
  • Specialized Hardware: Large Memory Nodes
  • Resource Type: Multicore Compute
Resource Type: Compute
Resource Description: Expanse will be a Dell integrated compute cluster, with AMD Rome processors, NVIDIA V100 GPUs, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. The GPU component of Expanse features 52 GPU nodes, each containing four NVIDIA V100s (32 GB SMX2), connected via NVLINK, and dual 20-core Intel Xeon 6248 CPUs. They will feature 1.6TB of NVMe storage and 256GB of DRAM per node. There is HDR100 connectivity to each node. The system will also feature 7PB of Lustre based performance storage (140GB/s aggregate), and 5PB of Ceph based object storage.
Recommended Use: GPUs are a specialized resource that performs well for certain classes of algorithms and applications. They are recommended for accelerating simulation codes optimized to take advantage of GPUs (using CUDA or OpenACC). There is a large and growing base of community codes that have been optimized for GPUs, including those in molecular dynamics and machine learning. GPU-enabled applications on Expanse will include AMBER, Gromacs, BEAST, OpenMM, NAMD, TensorFlow, and PyTorch (see the sketch after this entry).
Organization: San Diego Supercomputer Center
Units: GPU Hours
Description:
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Specialized Support: ACCESS Pegasus, Advance reservations, Science Gateway support, ACCESS OnDemand
  • Resource Type: GPU Compute
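As a small illustration of the GPU-enabled applications listed above, the sketch below uses PyTorch to confirm that a GPU is visible and to run a matrix multiply on it. It assumes PyTorch with CUDA support is available in the job environment (for example via a software module or conda environment); no specific module name is implied.

```python
# Minimal sketch: verify GPU visibility and run a small matrix multiply
# with PyTorch.  Assumes PyTorch with CUDA support is available in the
# job environment (e.g. loaded via a module or conda environment).
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible; request a GPU node/allocation.")

device = torch.device("cuda")
print("Using GPU:", torch.cuda.get_device_name(device))

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b                       # runs on the GPU
torch.cuda.synchronize()        # wait for the kernel to finish
print("result checksum:", float(c.sum()))
```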
Resource Type: Storage
Resource Description: 5PB of storage on a Lustre based filesystem.
Recommended Use: Use for storage needs of allocated projects on Expanse Compute and Expanse GPU resources. Unpurged storage available for duration of allocation period.
Organization: San Diego Supercomputer Center
Units: GB
Description:
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Resource Type: Storage
Resource Type: Compute
Resource Description:
Recommended Use: Voyager invites researchers in science and engineering that are dependent upon artificial intelligence and deep learning as a critical element in their experimental and/or computational work. It provides researchers the ability to work with extremely large data sets using standard AI tools, like TensorFlow and PyTorch, or develop their own deep learning models using developer tools and libraries from Habana Labs. Voyager is currently in the testbed phase and accepting researchers through an RP-managed allocation process. Interested researchers are requested to contact SDSC via email at consult@sdsc.edu.
Organization: San Diego Supercomputer Center
Units: SUs
Description:
User Guide Link: User Guide
Features Available:
  • Resource Type: Innovative / Novel Compute
  • Allocations: RP-managed process
Resource Type: Compute
Resource Description: PLEASE NOTE THAT STAMPEDE2 WILL NO LONGER BE AVAILABLE AS OF JULY 1, 2023. If you require larger scale resources, you should consider the leadership computing capabilities of Frontera (https://frontera-portal.tacc.utexas.edu/allocations/); otherwise, we currently recommend that PIs who are new to ACCESS consider other resources. The Stampede2 Dell/Intel Knights Landing (KNL), Skylake (SKX) system provides the user community access to two Intel Xeon compute technologies. The system is configured with 4,204 Dell KNL compute nodes, each with a stand-alone Intel Xeon Phi Knights Landing bootable processor. Each KNL node includes 68 cores, 16GB MCDRAM, 96GB DDR-4 memory, and a 200GB SSD drive. Stampede2 also includes 1,736 Intel Xeon Skylake (SKX) nodes and additional management nodes. Each SKX node includes 48 cores, 192GB DDR-4 memory, and a 200GB SSD. Allocations awarded on Stampede2 may be used on either or both of the node types. Compute nodes have access to dedicated Lustre parallel file systems totaling 28PB raw, provided by Cray. An Intel Omni-Path Architecture switch fabric connects the nodes and storage through a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). 16 additional login and management servers complete the system. Stampede2 will deliver an estimated 18PF of peak performance. Please see the Stampede2 User Guide for detailed information on the system and how to most effectively use it: https://portal.tacc.utexas.edu/user-guides/stampede2
Recommended Use: Stampede2 is intended primarily for parallel applications scalable to tens of thousands of cores, as well as general purpose and throughput computing. Normal batch queues will enable users to run simulations up to 48 hours. Jobs requiring longer run times or more cores than allowed by the normal queues will be run in a special queue after approval by TACC staff. Normal, serial, and development queues are configured, as well as special-purpose queues.
Organization: Texas Advanced Computing Center
Units: Node Hours
Description: Stampede2 is allocated in service units (SUs). An SU is defined as 1 wall-clock node-hour. Allocations awarded on Stampede2 may be used on either or both node types.
User Guide Link: User Guide
Features Available:
Resource Type: Storage
Resource Description: TACC's long-term mass storage solution, Ranch, is an Oracle® StorageTek Modular Library System. Ranch utilizes Oracle's Sun Storage Archive Manager Filesystem (SAM-FS) for migrating files to/from a tape archival system with a current offline storage capacity of 60 PB. Ranch's disk cache is built on Oracle's ZFS 7240 and Dell MD3600i disk arrays containing approximately 640 TB of usable spinning disk storage. These disk arrays are controlled by a Dell R720 SAM-FS Metadata server which has 16 CPUs and 72 GB of RAM. Two Oracle StorageTek SL8500 Automated Tape Libraries house all of the offline archival storage. Each SL8500 library can house up to 10,000 tapes with 64 tape drive slots. One SL8500 is currently populated with 10,000 T-10000B media where each tape is capable of holding one TB of uncompressed data while the second SL8500 houses 6,000 of the latest T-10000C media which can hold five TB of uncompressed data. Each SL8500 library also contains eight handbots to manage tapes and move them to/from the tape drives with a pass-through port connecting the two SL8500 libraries. If necessary, up to four SL8500 libraries can be integrated into a single archival solution, allowing for an offline storage capacity of 200 PB with current tape media.
Recommended Use: TACC's High Performance Computing systems are used primarily for scientific computing, with users having access to WORK, SCRATCH, and HOME file systems that are limited in size. This is also true for TACC's visualization system, Longhorn. The Ranch system serves the HPC and Vis community systems by providing a massive, high-performance file system for archival purposes. Space on Ranch can also be requested independent of an accompanying allocation on an XSEDE compute or visualization resource. Please note that Ranch is an archival system; it is not backed up or replicated. This means that Ranch contains a single copy, and only a single copy, of your files. While lost data due to tape damage is rare, please keep this fact in mind for your data management plans.
Organization: Texas Advanced Computing Center
Units: GB
Description:
User Guide Link: User Guide
Features Available:
  • Allocations: ACCESS Allocated
  • Resource Type: Storage
Resource Type: Compute
Resource Description:
Recommended Use: Stampede3 is intended primarily for parallel applications scalable to tens of thousands of cores, as well as general purpose and throughput computing. Normal batch queues will enable users to run simulations up to 48 hours.
Organization: Texas Advanced Computing Center
Units: Node Hours
Description: DECEMBER 1, 2023 WILL BE THE START DATE FOR STAMPEDE3 MAXIMIZE ALLOCATIONS. Stampede3 is allocated in service units (SUs). The charge rate varies by node type: 1) Skylake: 1 node-hour = 1 SU; 2) Ice Lake: 1 node-hour = 1.66 SU; 3) Sapphire Rapids: 1 node-hour = 3 SU (see the worked example after this entry).
User Guide Link: User Guide
Features Available:
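Because the Stampede3 charge rate varies by node type, the total SU cost of a job is its node-hours multiplied by the per-type rate. A short worked example using only the rates stated above:

```python
# Worked example of Stampede3 SU charging using the rates stated above:
# Skylake = 1 SU, Ice Lake = 1.66 SU, Sapphire Rapids = 3 SU per node-hour.
CHARGE_RATE = {"skylake": 1.0, "icelake": 1.66, "sapphire_rapids": 3.0}

def stampede3_sus(node_type: str, nodes: int, hours: float) -> float:
    """SUs charged for a job using `nodes` nodes for `hours` wall-clock hours."""
    return CHARGE_RATE[node_type] * nodes * hours

# e.g. a 4-node, 12-hour job on each node type:
print(stampede3_sus("skylake", 4, 12))          # 48.0 SUs
print(stampede3_sus("icelake", 4, 12))          # 79.68 SUs
print(stampede3_sus("sapphire_rapids", 4, 12))  # 144.0 SUs
```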
Resource Type: Compute
Resource Description:
Recommended Use: Workflows that can utilize novel accelerators and/or multiple GPUs.
Organization: Texas A&M University
Units: Core-hours
Description:
User Guide Link: User Guide
Features Available:
Resource Type: Compute
Resource Description: FASTER (Fostering Accelerated Scientific Transformations, Education and Research) is funded by the NSF MRI program (Award #2019129) and provides a composable high-performance data-analysis and computing instrument. The FASTER system has 180 compute nodes with 2 Intel 32-core Ice Lake processors and 256 GB RAM each, and includes 240 NVIDIA GPUs (40 A100 and 200 T4 GPUs). Using LIQID’s composable technology, all 180 compute nodes have access to the pool of available GPUs, dramatically improving workflow scalability. FASTER will have an HDR InfiniBand interconnect and will access/share a 5PB usable high-performance storage system running the Lustre filesystem. Thirty percent of FASTER’s computing resources will be allocated to researchers nationwide through XSEDE’s XRAC process.
Recommended Use: Workflows that can utilize multiple GPUs.
Organization: Texas A&M University
Units: SUs
Description: 1 SU = 1 node hour
Features Available:
  • Resource Type: Innovative / Novel Compute
  • Allocations: ACCESS Allocated
  • Specialized Hardware: Composable hardware fabric
  • Specialized Support: ACCESS OnDemand
Resource Type: Compute
Resource Description: Nodes with two AMD EPYC™ 7502 processors (32 cores each) with three memory size options: 48x standard 512 GiB; 32x large-memory 1024 GiB; 11x xlarge-memory 2048 GiB; 1x lg-swap 1024 GiB RAM + 2.73 TiB Intel Optane NVMe swap
Recommended Use: DARWIN's standard memory nodes provide powerful general-purpose computing, data analytics, and pre- and post-processing capabilities. The large and xlarge memory nodes enable memory-intensive applications and workflows that do not have distributed-memory implementations.
Organization: University of Delaware
Units: SUs
Description: 1 SU = 1 compute hour (any node type); standard = 1 core + 8 GiB RAM; large = 1 core + 16 GiB RAM; xlarge = 1 core + 32 GiB RAM; lg-swap is billed as the entire node at 64 SUs per hour = 64 cores + 1024 GiB RAM + 2.73 TiB Optane NVMe swap
User Guide Link: User Guide
Features Available:
Resource Type: Compute
Resource Description: 3 nodes with two Intel® Xeon® Platinum 8260 processors (24 cores each), 768 GiB RAM, and 4 NVIDIA Tesla V100 32GB GPUs connected via NVLINK™; 9 nodes with two AMD EPYC™ 7502 processors (32 cores each), 512 GiB RAM, and a single NVIDIA Tesla T4 GPU; 1 node with two AMD EPYC™ 7502 processors (32 cores each), 512 GiB RAM, and a single AMD Radeon Instinct MI50 GPU
Recommended Use: DARWIN's GPU nodes provide resources for machine learning, artificial intelligence research, and visualization.
Organization: University of Delaware
Units: SUs
Description: 1 SU = 1 device hour; T4 or MI50 device hour = 1 GPU + 64 CPU cores + 512 GiB RAM; V100 device hour = 1 GPU + 12 CPU cores + 192 GiB RAM
User Guide Link: User Guide
Features Available:
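Putting the two DARWIN descriptions above together: on CPU nodes, one SU per hour bundles 1 core with a node-type-dependent memory increment (8, 16, or 32 GiB), while GPU nodes are billed per device-hour. The sketch below estimates hourly charges under the assumption that a CPU job is billed for whichever is larger, its core count or its memory divided by the per-SU increment; that billing rule is an assumption for illustration only, so confirm the exact policy in the DARWIN documentation.

```python
# Rough estimate of DARWIN SU charges based on the descriptions above.
# ASSUMPTION: a CPU job is charged max(cores, memory / GiB-per-SU) SUs
# per hour; confirm the actual billing rule in the DARWIN user guide.
import math

GIB_PER_SU = {"standard": 8, "large": 16, "xlarge": 32}  # from the entry above

def cpu_sus_per_hour(node_type: str, cores: int, mem_gib: int) -> int:
    """Estimated hourly SU charge for a CPU job (assumed billing rule)."""
    return max(cores, math.ceil(mem_gib / GIB_PER_SU[node_type]))

def gpu_sus_per_hour(gpus: int) -> int:
    """GPU nodes are billed per device-hour: 1 SU = 1 GPU for 1 hour."""
    return gpus

print(cpu_sus_per_hour("standard", 4, 64))   # 8 SUs/hour: memory-dominated
print(cpu_sus_per_hour("xlarge", 16, 256))   # 16 SUs/hour: core-dominated
print(gpu_sus_per_hour(2))                   # 2 SUs/hour for two GPUs
```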
Resource Type: Storage
Resource Description: DARWIN's Lustre file system is for use with the DARWIN Compute and GPU nodes.
Recommended Use: DARWIN's Lustre storage should be used for storing input files, supporting data files, work files, and output files associated with computational tasks run on the cluster.
Organization: University of Delaware
Units: GB
Description:
User Guide Link: User Guide
Features Available: