Available Resources

Resource Type: Compute
Resource Description: Ookami is a computer technology testbed supported by the National Science Foundation under grant OAC 1927880. It provides researchers with access to the A64FX processor, developed by RIKEN and Fujitsu for the Japanese path to exascale computing and deployed in Fugaku, which was the fastest computer in the world until June 2022. Ookami is the first such computer outside Japan. By focusing on crucial architectural details, the ARM-based, multi-core, 512-bit SIMD-vector processor with ultra-high-bandwidth memory promises to retain familiar and successful programming models while achieving very high performance for a wide range of applications. While being very power-efficient, it supports a wide range of data types and enables both HPC and big data applications. The Ookami HPE (formerly Cray) Apollo 80 system has 176 A64FX compute nodes, each with 32 GB of high-bandwidth memory and a 512 GB SSD. This amounts to about 1.5M node-hours per year. A high-performance Lustre filesystem provides about 0.8 PB of storage. To help users explore current computer technologies and contrast performance and programmability with the A64FX, Ookami also includes:
- 1 node with dual-socket AMD Milan (64 cores) and 512 GB of memory
- 2 nodes with dual-socket ThunderX2 (64 cores each) and 256 GB of memory each
- 1 node with dual-socket Intel Skylake (32 cores), 192 GB of memory, and 2 NVIDIA V100 GPUs
Recommended Use: Applications that fit within the available memory (27 GB per node) and are well vectorized, or are auto-vectorized well by the compiler. Note that a node is allocated exclusively to one user; node-sharing is not available.
Organization: Institute for Advanced Computational Science at Stony Brook University
Units: SUs
Description: 1 SU = 1 node-hour. A node is allocated exclusively to one user; node-sharing is not available.
Resource Type: Compute
Resource Description: Jetstream2 is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand as well as for gateway and other infrastructure projects.
Recommended Use: For researchers needing virtual machine services on demand, and for software creators and researchers who need to create their own customized virtual machine environments. Additional use cases include research-supporting infrastructure services that need to be "always on," science gateway services, and education support (providing virtual machines for students).
Organization: Indiana University
Units: SUs
Description: 1 SU = 1 Jetstream2 vCPU-hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/
Resource Type: Compute
Resource Description: Jetstream2 is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand as well as for gateway and other infrastructure projects. This is for the GPU-specific Jetstream2 resources only.
Recommended Use: For researchers needing virtual machine services on demand, and for software creators and researchers who need to create their own customized virtual machine environments. Additional use cases include research-supporting infrastructure services that need to be "always on," science gateway services, and education support (providing virtual machines for students).
Organization: Indiana University
Units: SUs
Description: Jetstream2 GPU VMs are charged 4x the number of vCPUs per hour because of the associated GPU capability. For example, the g3.large VM costs 64 SUs per hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/
Resource Type: Compute
Resource Description: Jetstream2 is a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand as well as for gateway and other infrastructure projects. This is for the Large Memory-specific Jetstream2 resources only.
Recommended Use: For researchers needing virtual machine services on demand, and for software creators and researchers who need to create their own customized virtual machine environments. Additional use cases include research-supporting infrastructure services that need to be "always on," science gateway services, and education support (providing virtual machines for students). This particular resource is for workloads that require 512 GB or 1 TB of memory; applicants must demonstrate that need.
Organization: Indiana University
Units: SUs
Description: Jetstream2 Large Memory VMs are charged 2x the number of vCPUs per hour because of the additional memory capacity. For instance, the r3.xl flavor, which uses 128 cores, costs 256 SUs per hour. VM sizes and cost per hour are available at https://docs.jetstream-cloud.org/general/vmsizes/
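For planning purposes, the three charging rates above reduce to a per-vCPU multiplier: CPU flavors at 1 SU per vCPU-hour, Large Memory flavors at 2x, and GPU flavors at 4x. The sketch below is a minimal illustration of that arithmetic; it assumes g3.large has 16 vCPUs (consistent with the 64 SUs per hour quoted above) and reuses the 128-core r3.xl figure quoted above. The authoritative flavor sizes and rates are listed at https://docs.jetstream-cloud.org/general/vmsizes/

```python
# Minimal sketch of Jetstream2 SU accounting, using the multipliers stated above:
# CPU flavors = 1x, Large Memory flavors = 2x, GPU flavors = 4x per vCPU-hour.
MULTIPLIER = {"cpu": 1, "large_memory": 2, "gpu": 4}

def su_cost(vcpus: int, flavor_class: str, hours: int) -> int:
    """SUs charged = vCPUs x class multiplier x wall-clock hours."""
    return vcpus * MULTIPLIER[flavor_class] * hours

# g3.large (assumed 16 vCPUs) at the 4x GPU rate -> the 64 SUs/hour quoted above.
print(su_cost(16, "gpu", 1))            # 64
# r3.xl (128 cores) at the 2x Large Memory rate -> the 256 SUs/hour quoted above.
print(su_cost(128, "large_memory", 1))  # 256
# A 4-vCPU CPU instance kept "always on" for 30 days.
print(su_cost(4, "cpu", 24 * 30))       # 2880
```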
Resource Type: Storage
Resource Description: Storage for use with Jetstream2 compute resources.
Recommended Use:
Organization: Indiana University
Units: GB
Description:
Resource Type: Compute
Resource Description: Johns Hopkins University participates in the XSEDE Federation with its NSF-funded flagship cluster "rockfish.jhu.edu" (NSF MRI award #1920103), which integrates high-performance and data-intensive computing while developing tools for generating, analyzing, and disseminating data sets of ever-increasing size. The cluster contains compute nodes optimized for different research projects and complex, optimized workflows. Rockfish's 10 GPU nodes are intended for applications that need GPU processing, machine learning, and data analytics. Each GPU node consists of two Intel Xeon Gold 6248R (Cascade Lake) processors (48 cores per node, 3.0GHz base frequency), 192GB of memory, four NVIDIA A100 GPUs, and 1TB of NVMe local storage. All compute nodes have HDR100 connectivity. In addition, the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources will be allocated via XSEDE.
Recommended Use:
Organization: Johns Hopkins University MARCC
Units: GPU Hours
Description:
Resource Type: Compute
Resource Description: Johns Hopkins University participates in the XSEDE Federation with its NSF-funded flagship cluster "rockfish.jhu.edu" (NSF MRI award #1920103), which integrates high-performance and data-intensive computing while developing tools for generating, analyzing, and disseminating data sets of ever-increasing size. The cluster contains compute nodes optimized for different research projects and complex, optimized workflows. Rockfish's 10 large memory nodes are intended for applications that need more than 192GB of memory (up to 1.5TB), machine learning, and data analytics. Each large memory node consists of two Intel Xeon Gold 6248R (Cascade Lake) processors (48 cores per node, 3.0GHz base frequency), 1,524GB of memory, and 1TB of NVMe local storage. All compute nodes have HDR100 connectivity. In addition, the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources will be allocated via XSEDE.
Recommended Use: Jobs that need more than 192GB of memory, up to 1,524GB.
Organization: Johns Hopkins University MARCC
Units: Core-hours
Description:
Resource Type: Compute
Resource Description: Johns Hopkins University participates in the XSEDE Federation with its NSF-funded flagship cluster "rockfish.jhu.edu" (NSF MRI award #1920103), which integrates high-performance and data-intensive computing while developing tools for generating, analyzing, and disseminating data sets of ever-increasing size. The cluster contains compute nodes optimized for different research projects and complex, optimized workflows. Rockfish's 368 regular memory nodes are intended for general-purpose computing, machine learning, and data analytics. Each regular compute node consists of two Intel Xeon Gold 6248R (Cascade Lake) processors (48 cores per node, 3.0GHz base frequency), 192GB of memory, and 1TB of NVMe local storage. All compute nodes have HDR100 connectivity. In addition, the cluster has access to several GPFS file systems totaling 10PB of storage. 20% of these resources will be allocated via XSEDE.
Recommended Use:
Organization: Johns Hopkins University MARCC
Units: Core-hours
Description: 1 SU = 1 core-hour
Resource Type: Compute
Resource Description: Five large memory compute nodes dedicated to XSEDE allocation. Each of these nodes has 40 cores (Broadwell-class Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz, with 4 sockets and 10 cores/socket), 3TB of RAM, and 6TB of SSD storage. The 5 dedicated XSEDE nodes have exclusive access to approximately 300 TB of network-attached disk storage. All of these compute nodes are interconnected through a 100 Gigabit Ethernet (100GbE) backbone, and the cluster login and data transfer nodes are connected through a 100Gb uplink to Internet2 for external connections.
Recommended Use: Large memory nodes are increasingly needed by a wide range of XSEDE researchers, particularly researchers working with big data such as massive NLP data sets used in many research domains or the massive genomes required by computational biology and bioinformatics.
Organization: University of Kentucky
Units: Core-hours
Description: 1 core-hour = 1 CPU core with a default memory allocation of 75GB of RAM per core.
Resource Type: Compute
Resource Description: The Delta CPU resource comprises 124 dual-socket compute nodes for general purpose computation across a broad range of domains able to benefit from the scalar and multi-core performance provided by the CPUs, such as appropriately scaled weather and climate, hydrodynamics, astrophysics, and engineering modeling and simulation, and other domains using algorithms not yet adapted for the GPU. Each Delta CPU node is configured with 2 AMD EPYC 7763 (“Milan”) processors with 64-cores/socket (128-cores/node) at 2.55GHz and 256GB of DDR4-3200 RAM. An 800GB, NVMe solid-state disk is available for use as local scratch space during job execution. All Delta CPU compute nodes are interconnected to each other and to the Delta storage resource by a 100 Gb/sec HPE Slingshot network fabric.
Recommended Use: The Delta CPU resource is designed for general purpose computation across a broad range of domains able to benefit from the scalar and multi-core performance provided by the CPUs, such as appropriately scaled weather and climate, hydrodynamics, astrophysics, and engineering modeling and simulation, and other domains that have algorithms that have not yet moved to the GPU. Delta also supports domains that employ data analysis, data analytics, or other data-centric methods. Delta will feature a rich base of preinstalled applications, based on user demand. The system will be optimized for capacity computing, with rapid turnaround for small to modest scale jobs, and will feature support for shared-node usage. Local SSD storage on each compute node will benefit applications that have random access data patterns or that require fast access to significant amounts of compute-node local scratch space.
Organization: National Center for Supercomputing Applications
Units: Core-hours
Description: 1 SU = 1 Core-hour
Resource Type: Compute
Resource Description: The Delta GPU resource comprises 4 different node configurations intended to support accelerated computation across a broad range of domains such as soft-matter physics, molecular dynamics, replica-exchange molecular dynamics, machine learning, deep learning, natural language processing, textual analysis, visualization, ray tracing, and accelerated analysis of very large in-memory datasets. Delta is designed to support the transition of applications from CPU-only to using the GPU or hybrid CPU-GPU models. Delta GPU resource capacity is predominantly provided by 200 single-socket nodes, each configured with 1 AMD EPYC 7763 (“Milan”) processor with 64-cores/socket (64-cores/node) at 2.55GHz and 256GB of DDR4-3200 RAM. Half of these single-socket GPU nodes (100 nodes) are configured with 4 NVIDIA A100 GPUs with 40GB HBM2 RAM and NVLink (400 total A100 GPUs); the remaining half (100 nodes) are configured with 4 NVIDIA A40 GPUs with 48GB GDDR6 RAM and PCIe 4.0 (400 total A40 GPUs). Rounding out the GPU resource are 6 additional “dense” GPU nodes, containing 8 GPUs each, in a dual-socket CPU configuration (128-cores per node) with 2TB of DDR4-3200 RAM but otherwise configured similarly to the single-socket GPU nodes. Within the “dense” GPU nodes, 5 nodes employ NVIDIA A100 GPUs (40 total A100 GPUs in “dense” configuration) and 1 node employs AMD MI100 GPUs (8 total MI100 GPUs) with 32GB HBM2 RAM. A 1.6TB, NVMe solid-state disk is available for use as local scratch space during job execution on each GPU node type. All Delta GPU compute nodes are interconnected to each other and to the Delta storage resource by a 100 Gb/sec HPE Slingshot network fabric.
Recommended Use: The Delta GPU resource is designed to support accelerated computation across a broad range of domains such as soft-matter physics, molecular dynamics, replica-exchange molecular dynamics, machine learning, deep learning, natural language processing, textual analysis, visualization, ray tracing, and accelerated analysis of very large in-memory datasets. Delta is designed to support the transition of applications from CPU-only to using the GPU or hybrid CPU-GPU models. Delta will feature a rich base of preinstalled applications, based on user demand. The system will be optimized for capacity computing, with rapid turnaround for small to modest scale jobs, and will feature support for shared-node usage. Local SSD storage on each compute node will benefit applications that have random access data patterns or that require fast access to significant amounts of compute-node local scratch space.
Organization: National Center for Supercomputing Applications
Units: GPU Hours
Description: 1 SU = 1 GPU-Hour
Resource Type: Storage
Resource Description: The Delta Storage resource provides storage allocations for projects using the Delta CPU and Delta GPU resources. It delivers 7PB of capacity to projects on Delta and will be augmented by a later expansion of 3PB of flash capacity for high-speed, data-intensive workloads.
Recommended Use: The Delta Storage resource provides storage allocations for allocated projects using the Delta CPU and Delta GPU resources. Unpurged storage is available for the duration of the allocation period.
Organization: National Center for Supercomputing Applications
Units: GB
Description:
Resource Type: Compute
Resource Description: A virtual HTCondor pool made up of resources from the Open Science Grid
Recommended Use: High-throughput jobs using a single core, or a small number of threads that fit on a single compute node.
Organization: Open Science Grid
Units: SUs
Description:
Resource Type: Storage
Resource Description: The Open Storage Network (OSN) is an NSF-funded cloud storage resource, geographically distributed among several pods. OSN pods are currently hosted at SDSC, NCSA, MGHPCC, RENCI, and Johns Hopkins University. Each OSN pod currently hosts 1PB of storage, and is connected to R&E networks at 50 Gbps. OSN storage is allocated in buckets, and is accessed using S3 interfaces with tools like rclone, cyberduck, or the AWS cli.
Recommended Use: Cloud-style storage of project datasets for access using AWS S3-compatible tools. The minimum allocation is 10TB. Storage allocations up to 300TB may be requested via the XSEDE resource allocation process.
Organization: Open Storage Network
Units: TB
Description: Amount of storage capacity being requested, expressed in base 10 units. The minimum allocation is 10TB. Larger allocations of up to 300TB may be accommodated with further justification and approval by OSN. Storage access is via AWS S3-compatible tools such as rclone, cyberduck and the AWS command-line interface for S3. A web interface is provided at https://portal.osn.xsede.org for allocated project users to manage and browse their storage.
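Because OSN buckets are reached through standard S3 interfaces, any S3-capable client can simply be pointed at the appropriate pod endpoint. The following is a minimal sketch using the boto3 Python library; the endpoint URL, bucket name, object key, and credentials are placeholders for illustration only (the real values come with an allocation), and rclone, cyberduck, or the AWS CLI can be used instead.

```python
# Minimal sketch: list and download objects from an OSN bucket via its S3 interface.
# The endpoint, bucket, object key, and credentials below are placeholders, not real values.
import boto3

ENDPOINT = "https://example-pod.example.org"  # placeholder OSN pod endpoint
BUCKET = "my-project-bucket"                  # placeholder allocated bucket

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="ALLOCATION_ACCESS_KEY",      # placeholder credential
    aws_secret_access_key="ALLOCATION_SECRET_KEY",  # placeholder credential
)

# List the first page of objects in the allocated bucket.
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download a single object to the local working directory.
s3.download_file(BUCKET, "datasets/sample.dat", "sample.dat")
```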
Resource Type: Compute
Resource Description: Bridges-2 Extreme Memory (EM) nodes provide 4TB of shared memory for genome sequence assembly, graph analytics, statistics, and other applications that need a large amount of memory and for which distributed-memory implementations are not available. Each Bridges-2 EM node consists of 4 Intel Xeon Platinum 8260M “Cascade Lake” CPUs, 4TB of DDR4-2933 RAM, and 7.68TB of NVMe SSD. They are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by HDR-200 InfiniBand, providing 200Gbps of bandwidth to read or write data from each EM node.
Recommended Use: Bridges-2 Extreme Memory (EM) nodes enable memory-intensive genome sequence assembly, graph analytics, statistics, in-memory databases, and other applications that need a large amount of memory and for which distributed-memory implementations are not available. This includes memory-intensive applications implemented in languages such as Java, R, and Python. Their x86 CPUs support an extremely broad range of applications, and approximately 42GB of RAM per core provides valuable support for applications where memory capacity is the primary requirement.
Organization: Pittsburgh Supercomputing Center
Units: Core-hours
Description: 1 SU = 1 core hour
Resource Type: Compute
Resource Description: Bridges-2 Accelerated GPU (GPU) nodes are optimized for scalable artificial intelligence (AI; deep learning). Bridges-2 GPU nodes each contain 8 NVIDIA Tesla V100-32GB SXM2 GPUs, providing 40,960 CUDA cores and 5,120 tensor cores. In addition, each node holds 2 Intel Xeon Gold 6248 CPUs; 512GB of DDR4-2933 RAM; and 7.68TB NVMe SSD. They are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400Gbps of bandwidth to enhance scalability of deep learning training.
Recommended Use: Bridges-2 GPU nodes are optimized for scalable artificial intelligence (AI) including deep learning training, deep reinforcement learning, and generative techniques - as well as for accelerated simulation and modeling.
Organization: Pittsburgh Supercomputing Center
Units: GPU Hours
Description: 1 SU = 1 GPU hour
Resource Type: Compute
Resource Description: Bridges-2 Regular Memory (RM) nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Each Bridges-2 RM node consists of two AMD EPYC “Rome” 7742 64-core CPUs, 256-512GB of RAM, and 3.84TB of NVMe SSD. 488 Bridges-2 RM nodes have 256GB RAM, and 16 have 512GB RAM for more memory-intensive applications (see also Bridges-2 Extreme Memory nodes, each of which has 4TB of RAM). Bridges-2 RM nodes are connected to other Bridges-2 compute nodes and its Ocean parallel filesystem and archive by HDR-200 InfiniBand.
Recommended Use: Bridges-2 Regular Memory (RM) nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Their x86 CPUs support an extremely broad range of applications, and jobs can request anywhere from 1 core to all 64,512 cores of the Bridges-2 RM resource.
Organization: Pittsburgh Supercomputing Center
Units: Core-hours
Description: 1 SU = 1 core hour
Resource Type: Storage
Resource Description: The Bridges-2 Ocean data management system provides a unified, high-performance filesystem for active project data, archive, and resilience. Ocean consists of two tiers, disk and tape, transparently managed by HPE DMF as a single, highly usable namespace. Ocean's disk subsystem, for active project data, is a high-performance, internally resilient Lustre parallel filesystem with 15PB of usable capacity, configured to deliver up to 129GB/s and 142GB/s of read and write bandwidth, respectively. Ocean's tape subsystem, for archive and additional resilience, is a high-performance tape library with 7.2PB of uncompressed capacity (estimated 8.6PB compressed, with compression done transparently in hardware with no performance overhead), configured to deliver 50TB/hour.
Recommended Use: The Bridges-2 Ocean data management system provides high-performance and highly usable access to project and archive data. It is equally accessible to all Bridges-2 compute nodes, allowing seamless execution of data-intensive workflows involving components running on different compute resource types.
Organization: Pittsburgh Supercomputing Center
Units: GB
Description: Storage
Resource Type: Compute
Resource Description: Purdue's Anvil cluster comprises 1000 nodes (each with 128 cores and 256 GB of memory, for a peak performance of 5.3 PF), 32 large memory nodes (each with 128 cores and 1 TB of memory), and 16 GPU nodes (each with 128 cores, 256 GB of memory, and four NVIDIA A100 Tensor Core GPUs) providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications. All CPU cores are AMD's "Milan" architecture running at 2.0 GHz, and all nodes are interconnected using a 100 Gbps HDR InfiniBand fabric. Scratch storage consists of a 10+ PB parallel filesystem with over 3 PB of flash drives. Storage for active projects is provided by Purdue's Research Data Depot, and data archival is available via Purdue's Fortress tape archive. The operating system is CentOS 8, and the batch scheduling system is Slurm. Anvil's advanced computing capabilities are well suited to support a wide range of computational and data-intensive research spanning from traditional high-performance computing to modern artificial intelligence applications.
Recommended Use: Anvil's general purpose CPUs and 128 cores per node make it suitable for many types of CPU-based workloads.
Organization: Purdue University
Units: SUs
Description: 1 Service Unit = 1 Core Hour
Resource Type: Compute
Resource Description: Purdue's Anvil GPU cluster comprises 16 GPU nodes (each with 128 cores, 256 GB of memory, and four NVIDIA A100 Tensor Core GPUs) providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications. All CPU cores are AMD's "Milan" architecture running at 2.0 GHz, and all nodes are interconnected using a 100 Gbps HDR InfiniBand fabric. Scratch storage consists of a 10+ PB parallel filesystem with over 3 PB of flash drives. Storage for active projects is provided by Purdue's Research Data Depot, and data archival is available via Purdue's Fortress tape archive. The operating system is CentOS 8, and the batch scheduling system is Slurm.
Recommended Use: Machine learning (ML), artificial intelligence (AI), other GPU-enabled workloads
Organization: Purdue University
Units: SUs
Description: 1 SU = 1 GPU hour
Resource Type: Compute
Resource Description: Expanse will be a Dell integrated compute cluster, with AMD Rome processors, interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. There are 728 compute nodes, each with two 64-core AMD EPYC 7742 (Rome) processors for a total of 93,184 cores. They will feature 1TB of NVMe storage and 256GB of DRAM per node. Full bisection bandwidth will be available at rack level (56 nodes) with HDR100 connectivity to each node. HDR200 switches are used at the rack level and there will be 3:1 oversubscription cross-rack. In addition, Expanse also has four 2 TB large memory nodes. The system will also feature 12PB of Lustre based performance storage (140GB/s aggregate), and 7PB of Ceph based object storage.
Recommended Use: Expanse is designed to provide cyberinfrastructure for the long tail of science, covering a diverse application base with complex workflows. It will feature a rich base of preinstalled applications including commercial software like Gaussian, Abaqus, QChem, MATLAB, and IDL. The system will be geared towards supporting capacity computing, optimized for quick turnaround on small/modest scale jobs. The local NVMe storage on each compute node will be beneficial to applications that exhibit random access data patterns or require fast access to significant amounts of compute-node local scratch space. Expanse will support composable systems computing with dynamic capabilities enabled using tools such as Kubernetes and workflow software.
Organization: San Diego Supercomputer Center
Units: Core-hours
Description:
Resource Type: Compute
Resource Description: Expanse GPU will be a Dell integrated cluster with NVIDIA V100 GPUs (NVLink), interconnected with Mellanox HDR InfiniBand in a hybrid fat-tree topology. There are a total of 52 nodes with four V100 SXM2 GPUs per node (with NVLink connectivity). There are two 20-core Xeon 6248 CPUs per node. Full bisection bandwidth will be available at rack level (52 CPU nodes, 4 GPU nodes) with HDR100 connectivity to each node. HDR200 switches are used at the rack level and there will be 3:1 oversubscription cross-rack. In addition, Expanse also has four 2 TB large memory nodes. The system will also feature 12PB of Lustre based performance storage (140GB/s aggregate), and 7PB of Ceph based object storage.
Recommended Use: GPUs are a specialized resource that performs well for certain classes of algorithms and applications. They are recommended for accelerating simulation codes optimized to take advantage of GPUs (using CUDA, OpenACC). There is a large and growing base of community codes that have been optimized for GPUs, including those in molecular dynamics and machine learning. GPU-enabled applications on Expanse will include AMBER, GROMACS, BEAST, OpenMM, NAMD, TensorFlow, and PyTorch.
Organization: San Diego Supercomputer Center
Units: GPU Hours
Description:
Resource Type: Storage
Resource Description: Allocated storage for projects using Expanse Compute and Expanse GPU resources.
Recommended Use: Use for storage needs of allocated projects on Expanse Compute and Expanse GPU resources. Unpurged storage available for duration of allocation period.
Organization: San Diego Supercomputer Center
Units: GB
Description:
Resource Type: Compute
Resource Description: The Stampede2 Dell/Intel Knights Landing (KNL), Skylake (SKX) System provides the user community access to two Intel Xeon compute technologies. The system is configured with 4204 Dell KNL compute nodes, each with a stand-alone Intel Xeon Phi Knights Landing bootable processor. Each KNL node includes 68 cores, 16GB of MCDRAM, 96GB of DDR4 memory, and a 200GB SSD drive. Stampede2 also includes 1736 Intel Xeon Skylake (SKX) nodes and additional management nodes. Each SKX node includes 48 cores, 192GB of DDR4 memory, and a 200GB SSD. Allocations awarded on Stampede2 may be used on either or both of the node types. Compute nodes have access to dedicated Lustre parallel file systems totaling 28PB raw, provided by Cray. An Intel Omni-Path Architecture switch fabric connects the nodes and storage through a fat-tree topology with a point-to-point bandwidth of 100 Gb/s (unidirectional). 16 additional login and management servers complete the system. Stampede2 delivers an estimated 18PF of peak performance. Please see the Stampede2 User Guide for detailed information on the system and how to use it most effectively: https://portal.xsede.org/tacc-stampede2
Recommended Use: Stampede2 is intended primarily for parallel applications scalable to tens of thousands of cores, as well as for general-purpose and throughput computing. Normal batch queues enable users to run simulations for up to 48 hours. Jobs requiring longer run times or more cores than allowed by the normal queues will be run in a special queue after approval by TACC staff. Normal, serial, and development queues are configured, as well as special-purpose queues.
Organization: Texas Advanced Computing Center
Units: Node Hours
Description: Stampede2 is allocated in Service Units (SUs). An SU is defined as 1 wall-clock node-hour. Allocations awarded on Stampede2 may be used on either or both node types.
Resource Type: Storage
Resource Description: Ranch is TACC's long-term tape archival storage system, providing a massive, high-performance file system for archiving data from TACC's HPC and visualization resources.
Recommended Use: TACC's High Performance Computing systems are used primarily for scientific computing, with users having access to WORK, SCRATCH, and HOME file systems that are limited in size. This is also true for TACC's visualization system, Longhorn. The Ranch system serves the HPC and visualization community systems by providing a massive, high-performance file system for archival purposes. Space on Ranch can also be requested independent of an accompanying allocation on an XSEDE compute or visualization resource. Please note that Ranch is an archival system and is not backed up or replicated. This means that Ranch contains a single copy, and only a single copy, of your files. While lost data due to tape damage is rare, please keep this in mind in your data management plans.
Organization: Texas Advanced Computing Center
Units: GB
Description:
Resource Type: Compute
Resource Description: FASTER (Fostering Accelerated Scientific Transformations, Education and Research) is funded by the NSF MRI program (award #2019129) and provides a composable high-performance data-analysis and computing instrument. The FASTER system has 180 compute nodes, each with 2 Intel 32-core Ice Lake processors, and includes 240 NVIDIA GPUs (40 A100 and 200 T4 GPUs). Using LIQID's composable technology, all 180 compute nodes have access to the pool of available GPUs, dramatically improving workflow scalability. FASTER has an HDR InfiniBand interconnect and shares access to a 5PB usable high-performance storage system running the Lustre filesystem. Thirty percent of FASTER's computing resources will be allocated to researchers nationwide through XSEDE's XRAC process.
Recommended Use: Workflows that can utilize multiple GPUs.
Organization: Texas A&M University
Units: SUs
Description: 1 SU = 1 node hour
Resource Type: Program
Resource Description: The Science Gateways Community Institute (SGCI) provides services in the form of committed staff time. Two SGCI services are offered to help clients build science gateways: longer-term engagements (up to a year, 25% FTE commitment) through SGCI's Extended Developer Support, and shorter-term consulting engagements for advice in specific areas (usability, cybersecurity, sustainability, and more) through SGCI's Incubator program. Further detail is available from the Services link at http://www.sciencegateways.org.
Recommended Use: SGCI services can be used to develop entirely new gateways or improve existing gateways. The gateways do not need to make use of ACCESS compute resources (though they can). They could be gateways to data collections, instruments, sensor streams, citizen science engagement, etc.
Organization: Science Gateways Community Institute
Units: [Yes = 1, No = 0]
Description:
Resource Type: Compute
Resource Description: The Delaware Advanced Research Workforce and Innovation Network (DARWIN) computing system at the University of Delaware is based on AMD Epyc™ 7502 processors with three main memory sizes to support different workload requirements (512 GiB, 1024 GiB, 2048 GiB). The cluster provides more than 1 PiB of usable, shared storage via a dedicated Lustre parallel file system to support large data science workloads. The Mellanox HDR 200Gbps InfiniBand network provides near-full bisection bandwidth at 100 Gbps per node.
Recommended Use: DARWIN's standard memory nodes provide powerful general-purpose computing, data analytics, and pre- and post-processing capabilities. The large and xlarge memory nodes enable memory-intensive applications and workflows that do not have distributed-memory implementations.
Organization: University of Delaware
Units: SUs
Description: 1 SU = 1 compute hour (any node type); standard = 1 core + 8 GiB RAM; large = 1 core + 16 GiB RAM; xlarge = 1 core + 32 GiB RAM; lg-swap is billed as the entire node at 64 SUs per hour = 64 cores + 1024 GiB RAM + 2.73 TiB Optane NVMe swap
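Because each node type pairs a core with a different memory increment, the SU rate of a job can be estimated from whichever dominates, the cores requested or the memory requested divided by that increment. The sketch below illustrates that arithmetic using the definitions quoted above; charging by max(cores, memory/increment) is an assumption made for illustration, not a statement of DARWIN policy.

```python
# Sketch of DARWIN SU estimation from the definitions above:
#   standard = 1 core + 8 GiB, large = 1 core + 16 GiB, xlarge = 1 core + 32 GiB,
#   lg-swap  = whole node at 64 SUs/hour (64 cores + 1024 GiB + 2.73 TiB NVMe swap).
# Charging by max(cores, memory/increment) is an illustrative assumption.
import math

GIB_PER_SU = {"standard": 8, "large": 16, "xlarge": 32}

def su_per_hour(node_type: str, cores: int = 0, mem_gib: float = 0.0) -> int:
    if node_type == "lg-swap":
        return 64  # billed as the entire node
    return max(cores, math.ceil(mem_gib / GIB_PER_SU[node_type]))

# 32 cores with 256 GiB on a standard node: cores and memory balance -> 32 SUs/hour.
print(su_per_hour("standard", cores=32, mem_gib=256))  # 32
# 16 cores with 400 GiB on an xlarge node: max(16, ceil(400/32)) -> 16 SUs/hour.
print(su_per_hour("xlarge", cores=16, mem_gib=400))    # 16
# Any use of the lg-swap node is billed for the whole node.
print(su_per_hour("lg-swap"))                          # 64
```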
Resource Type: Compute
Resource Description: The Delaware Advanced Research Workforce and Innovation Network (DARWIN) computing system at the University of Delaware is based on AMD Epyc™ 7502 processors with three main memory sizes to support different workload requirements (512 GiB, 1024 GiB, 2048 GiB). The cluster provides more than 1 PiB of usable, shared storage via a dedicated Lustre parallel file system to support large data science workloads. The Mellanox HDR 200Gbps InfiniBand network provides near-full bisection bandwidth at 100 Gbps per node. Additionally, the system provides access to three GPU architectures to facilitate artificial intelligence (AI) research in the data science domains.
Recommended Use: DARWIN's GPU nodes provide resources for machine learning, artificial intelligence research, and visualization.
Organization: University of Delaware
Units: SUs
Description: 1 SU = 1 device hour; T4 or MI50 device hour = 1 GPU + 64 CPU cores + 512 GiB RAM; V100 device hour = 1 GPU + 12 CPU cores + 192 GiB RAM
Resource Type: Storage
Resource Description: Storage for DARWIN projects at the University of Delaware
Recommended Use: DARWIN's Lustre storage should be used for storing input files, supporting data files, work files, and output files associated with computational tasks run on the cluster.
Organization: University of Delaware
Units: GB
Description: