GPU Cloud provides dedicated compute infrastructure for machine learning workloads. Use GPU clusters to train models, run inference, and process large-scale AI tasks.

What is a GPU cluster

A GPU cluster is a group of interconnected servers, each equipped with multiple high-performance GPUs. Clusters are designed for workloads that require massive parallel processing power, such as training large language models (LLMs), fine-tuning foundation models, running inference at scale, and high-performance computing (HPC) tasks.
[Image: GPU Cloud create cluster page showing region selection, cluster type, and GPU configuration options]
All nodes in a cluster share the same configuration: operating system image, network settings, and storage mounts. This ensures consistent behavior across the cluster.
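Conceptually, a cluster is one node template applied to every node. The sketch below illustrates that idea only; the field names, region, flavor, and image values are hypothetical placeholders and do not reflect the actual Gcore API schema.

```python
from dataclasses import dataclass, field

@dataclass
class GpuClusterSpec:
    """Hypothetical cluster template: every node receives the same image,
    network, and storage configuration (illustrative, not the Gcore API)."""
    name: str
    region: str
    flavor: str                     # e.g. an 8x H100 bare-metal flavor
    node_count: int
    image: str                      # OS image shared by all nodes
    networks: list[str] = field(default_factory=list)
    file_share_mounts: list[str] = field(default_factory=list)

cluster = GpuClusterSpec(
    name="llm-training",
    region="example-region",        # placeholder
    flavor="bm3-ai-8xh100",         # placeholder flavor name
    node_count=4,
    image="ubuntu-22.04-cuda",      # placeholder image name
    networks=["private-net"],
    file_share_mounts=["/mnt/share"],
)
```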

Cluster types

Gcore offers three types of GPU clusters:
| Type | Description | Best for |
|---|---|---|
| Bare Metal GPU | Dedicated physical servers with guaranteed resources and no virtualization overhead | Production workloads, long-running training jobs, and latency-sensitive inference |
| Spot Bare Metal GPU | Same hardware as Bare Metal at a reduced price (up to 50% discount); instances can be preempted with a 24-hour notice when capacity is needed | Fault-tolerant training with checkpointing, batch processing, development, and testing |
| Virtual GPU | Virtualized GPU instances with flexible resource management; supports flavor changes and cost optimization through shelving (powering off releases resources and stops billing) | Development environments, variable workloads, and cost-sensitive projects |
Clusters can scale to hundreds of nodes. Production deployments with 250+ nodes in a single cluster are supported, limited only by regional stock availability.
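Because Spot clusters can be reclaimed with a 24-hour notice, fault-tolerant training loops should checkpoint regularly and resume from the latest checkpoint on restart. The sketch below is a minimal PyTorch-style example, assuming a `model`, an `optimizer`, and a checkpoint path of your choosing; it is illustrative, not Gcore-specific.

```python
import os
import torch

CKPT_PATH = "/mnt/share/checkpoints/latest.pt"  # hypothetical persistent file share path

def save_checkpoint(model, optimizer, step):
    """Write model/optimizer state so training can resume after preemption."""
    torch.save(
        {"step": step,
         "model": model.state_dict(),
         "optimizer": optimizer.state_dict()},
        CKPT_PATH,
    )

def load_checkpoint(model, optimizer):
    """Resume from the latest checkpoint if one exists; otherwise start at step 0."""
    if not os.path.exists(CKPT_PATH):
        return 0
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["step"]
```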

Available configurations

Select a configuration based on your workload requirements:
| Configuration | GPUs | Interconnect | RAM | Storage | Use case |
|---|---|---|---|---|---|
| H100 with InfiniBand | 8x NVIDIA H100 80GB | 3.2 Tbit/s InfiniBand | 2TB | 8x 3.84TB NVMe | Distributed LLM training requiring high-speed inter-node communication |
| H100 (bm3-ai-ndp) | 8x NVIDIA H100 80GB | 3.2 Tbit/s InfiniBand | 2TB | 6x 3.84TB NVMe | Distributed training and latency-sensitive inference at scale |
| A100 with InfiniBand | 8x NVIDIA A100 80GB | 800 Gbit/s InfiniBand | 2TB | 8x 3.84TB NVMe | Multi-node ML training and HPC workloads |
| A100 without InfiniBand | 8x NVIDIA A100 80GB | 2x 100 Gbit/s Ethernet | 2TB | 8x 3.84TB NVMe | Single-node training and inference for large models requiring more than 48GB VRAM |
| L40S | 8x NVIDIA L40S | 2x 25 Gbit/s Ethernet | 2TB | 4x 7.68TB NVMe | Inference and fine-tuning of small to medium models requiring less than 48GB VRAM |
Outbound data transfer (egress) from GPU clusters is free. Other costs are covered in GPU Cloud billing.
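As a rough way to map a model onto the 48GB threshold used in the table above, a common back-of-the-envelope estimate is parameter count times bytes per parameter, plus headroom for activations and KV cache during inference. The helper below is a simplified sketch with an assumed overhead factor, not an official sizing formula.

```python
def estimate_inference_vram_gb(num_params_billion: float,
                               bytes_per_param: int = 2,
                               overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate for serving a model.

    bytes_per_param: 2 for FP16/BF16, 1 for INT8, 4 for FP32.
    overhead_factor: assumed headroom for activations and KV cache (illustrative).
    """
    weights_gb = num_params_billion * 1e9 * bytes_per_param / 1e9
    return weights_gb * overhead_factor

# A 70B model in FP16 needs roughly 168 GB, so it spans multiple 80GB GPUs (A100/H100),
# while a 13B model in FP16 fits comfortably under 48 GB on an L40S.
print(estimate_inference_vram_gb(70))   # ~168.0
print(estimate_inference_vram_gb(13))   # ~31.2
```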

InfiniBand networking

InfiniBand is a high-bandwidth, low-latency interconnect for communication between cluster nodes. It is essential for distributed training and multi-node inference where frequent data synchronization is required. InfiniBand is available for both Bare Metal and Virtual GPU clusters and is configured automatically when you select a flavor with InfiniBand support.
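In practice, distributed training frameworks use the InfiniBand fabric through NCCL. A minimal PyTorch initialization, assuming the usual torchrun-provided environment variables (RANK, WORLD_SIZE, MASTER_ADDR, LOCAL_RANK), looks like the sketch below; it is illustrative, not a Gcore-managed setup.

```python
import os
import torch
import torch.distributed as dist

def init_distributed():
    """Join the process group over NCCL, which uses InfiniBand (RDMA) when the
    fabric is available and falls back to TCP/Ethernet otherwise."""
    dist.init_process_group(backend="nccl")      # reads RANK/WORLD_SIZE/MASTER_ADDR from env
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)
    return local_rank

# Typical launch on each node (illustrative):
#   torchrun --nnodes=4 --nproc-per-node=8 \
#            --rdzv-backend=c10d --rdzv-endpoint=<head-node>:29500 train.py
```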

Storage options

GPU clusters support two storage types:
| Storage type | Persistence | Performance | Use case |
|---|---|---|---|
| Local NVMe | Temporary (deleted with the cluster) | Highest IOPS, lowest latency | Training data cache, checkpoints during training |
| File shares | Persistent (independent of the cluster) | Network-attached; lower latency than object storage | Datasets, model weights, shared checkpoints |
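A common pattern is to write frequent checkpoints to local NVMe for speed and periodically copy them to a persistent file share so they survive cluster deletion. The paths below are hypothetical examples of an NVMe scratch directory and a mounted file share.

```python
import shutil
from pathlib import Path

SCRATCH = Path("/mnt/nvme/checkpoints")   # hypothetical local NVMe path (fast, temporary)
SHARE = Path("/mnt/share/checkpoints")    # hypothetical mounted file share (persistent)

def persist_checkpoint(name: str) -> Path:
    """Copy a checkpoint from local NVMe scratch to the persistent file share."""
    SHARE.mkdir(parents=True, exist_ok=True)
    dst = SHARE / name
    shutil.copy2(SCRATCH / name, dst)
    return dst
```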

Cluster lifecycle

Create → Configure → Run workloads → Resize (optional) → Delete
  1. Create: Select region, GPU type, number of nodes, image, and network settings. See creating a Bare Metal GPU cluster or creating a Virtual GPU cluster.
  2. Configure: Connect to each node via SSH, install the required dependencies, and mount file shares to prepare the environment for workloads. See the setup sketch after this list.
  3. Run workloads: Execute training jobs, run inference services, process data.
  4. Resize: Add or remove nodes based on demand. New nodes inherit the cluster configuration. See managing a Bare Metal GPU cluster for details.
  5. Delete: Remove the cluster when no longer needed. Local storage is erased; file shares and network disks can be preserved.
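As referenced in step 2, node preparation is typically scripted rather than done by hand. The sketch below runs the same setup command on every node over SSH using Python's standard library; the node addresses, SSH user, mount command, and file share endpoint are assumptions for illustration only.

```python
import subprocess

NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # hypothetical node IPs
SETUP_CMD = (
    "sudo apt-get update && sudo apt-get install -y nfs-common && "
    "sudo mkdir -p /mnt/share && "
    "sudo mount -t nfs <file-share-endpoint>:/share /mnt/share"   # placeholder endpoint
)

def configure_nodes():
    """Run the same dependency install and mount command on each node via SSH."""
    for host in NODES:
        print(f"Configuring {host} ...")
        subprocess.run(["ssh", f"ubuntu@{host}", SETUP_CMD], check=True)

if __name__ == "__main__":
    configure_nodes()
```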
GPU clusters may take 15–40 minutes to provision, and their configuration (image, network, and storage) is fixed at creation. Local NVMe storage is temporary, so critical data should be saved to persistent file shares. Spot clusters can be interrupted with a 24-hour notice, and cluster size is limited by available regional stock.
Hardware firewall support is available on servers equipped with BlueField network cards, enhancing network security for GPU clusters.