{"@context":"https://schema.org","@graph":[{"@type":"Service","name":"GPU Cluster for HPC","description":"Multi-node GPU cluster with NVIDIA B200, NVIDIA B300 or AMD MI350X and 400/800G RoCEv2 fabric for training, HPC and simulation - in sovereign EU data centers.","provider":{"@type":"Organization","name":"Yorizon","url":"https://yorizon.com"},"areaServed":{"@type":"Place","name":"Europe"},"serviceType":"GPU HPC Service","category":"GPU","url":"https://yorizon.com/products/gpu-cluster-hpc"},{"@type":"FAQPage","mainEntity":[{"@type":"Question","name":"Which GPUs are used?","acceptedAnswer":{"@type":"Answer","text":"NVIDIA B200, NVIDIA B300 (Blackwell) or AMD MI350X. Per node: 8 GPUs, 128 cores, 2,304 GB RAM and 30.4 TB local NVMe."}},{"@type":"Question","name":"How does the cluster networking work?","acceptedAnswer":{"@type":"Answer","text":"400/800G RoCEv2 fabric based on SONiC Ethernet. This allows multi-node training and MPI workloads to reach InfiniBand-level performance without proprietary lock-in."}},{"@type":"Question","name":"Which software is preinstalled?","acceptedAnswer":{"@type":"Answer","text":"NVIDIA AI Enterprise (NVAIE) as the standard license-included option. Optional BYOL after review. MLOps stacks such as Kubeflow, MLflow and Ray can be connected."}},{"@type":"Question","name":"How is GPU performance ensured?","acceptedAnswer":{"@type":"Answer","text":"Direct liquid cooling at component level. Nodes are assigned exclusively to a tenant via OpenStack Host Aggregates - no shared load."}},{"@type":"Question","name":"Which workloads are typical?","acceptedAnswer":{"@type":"Answer","text":"Foundation model training, fine-tuning, HPC simulation (CFD, FEM), vision AI, genomics, climate modeling. Suitable for research, industry and medtech."}}]}]}


Bare Metal GPU

Dedicated bare-metal GPU servers for AI training, inference, and HPC — without hypervisor overhead, hosted in European timber-built data centers.

Built for research institutions, industrial companies, and AI teams that run multi-node training, simulations, and HPC workloads on sovereign, energy-efficient European infrastructure.

What Yorizon delivers

  • Multiple GPU nodes per cluster

  • Per node: 8x NVIDIA B200, 8x NVIDIA B300, or 8x AMD MI350X; 128 CPU cores; 2,304 GB RAM

  • 30.4 TB local NVMe per node

  • 400/800G RoCEv2 fabric

  • Direct Liquid Cooling

  • NVIDIA AI Enterprise (NVAIE) license included

Architecture & Technology

Whole-machine GPU nodes are assigned exclusively to a tenant via OpenStack host aggregates. Bare-metal provisioning is handled by OSISM with Ironic on the Sovereign Cloud Stack. Networking runs over a SONiC-based Ethernet fabric with RoCEv2. Nodes use direct liquid cooling and are housed in timber-built data centers with a PUE below 1.1.
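
To illustrate the host-aggregate mechanism, here is a minimal sketch using the OpenStack SDK. The cloud profile, hostnames, and metadata key are hypothetical, and on Yorizon this assignment is performed by the platform rather than by the customer:

```python
# Minimal sketch: tenant-exclusive GPU nodes via an OpenStack host aggregate.
# Assumes the openstacksdk package and a configured "yorizon" cloud profile
# in clouds.yaml; all hostnames and metadata keys below are hypothetical.
import openstack

conn = openstack.connect(cloud="yorizon")

# Create an aggregate that groups the GPU nodes reserved for one tenant.
aggregate = conn.compute.create_aggregate(name="tenant-a-gpu")

# Add the dedicated bare-metal GPU hosts to the aggregate.
for host in ("gpu-node-01", "gpu-node-02", "gpu-node-03", "gpu-node-04"):
    conn.compute.add_host_to_aggregate(aggregate, host)

# Tag the aggregate; a flavor carrying a matching
# aggregate_instance_extra_specs:tenant=tenant-a property then makes the
# scheduler place that tenant's instances only on these hosts.
conn.compute.set_aggregate_metadata(aggregate, {"tenant": "tenant-a"})
```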

Security & Sovereignty

  • EU locations (Heiligenhaus and Bad Lippspringe, with additional locations planned)

  • Not subject to the US CLOUD Act

  • Certification roadmap: ISO 27001 and BSI C5; VS-NfD under evaluation

  • Customer-managed keys possible

Service Level

  • 99.9% availability (at most roughly 8.8 hours of downtime per year)

  • Maintenance with advance notice

  • Tiered service credits

Which GPUs are used?

NVIDIA B200/B300 (Blackwell) or AMD MI350X. Per node: 8 GPUs, 128 cores, 2,304 GB RAM, and 30.4 TB local NVMe.
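
As a quick way to verify these per-node specs after provisioning, a short script along these lines can be run on a node. This is a hypothetical example using PyTorch and the Python standard library, not a Yorizon tool:

```python
# Quick node sanity check: GPU count, CPU cores, and system RAM.
# Hypothetical example; assumes PyTorch with CUDA (or ROCm) support installed.
import os
import torch

print(f"GPUs visible: {torch.cuda.device_count()}")  # expected: 8
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

print(f"CPU cores: {os.cpu_count()}")  # expected: 128

# Total system RAM via sysconf (Linux); expected around 2,304 GB.
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
print(f"System RAM: {ram_gb:.0f} GB")
```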

How does cluster networking work?

400/800G RoCEv2 fabric based on SONiC Ethernet. This enables multi-node training and MPI workloads to achieve InfiniBand-level performance without proprietary lock-in.
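
From the user's perspective, multi-node training over this fabric looks like standard distributed PyTorch. The sketch below assumes a torchrun launch and shows the usual NCCL environment variables for RDMA transports such as RoCE; the HCA and interface names are hypothetical and cluster-specific:

```python
# Minimal multi-node DDP sketch over a RoCEv2 fabric. Launch with torchrun
# on each node, e.g.: torchrun --nnodes 4 --nproc-per-node 8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Typical NCCL settings for RoCE: select the RDMA NICs and the RoCEv2 GID.
os.environ.setdefault("NCCL_IB_HCA", "mlx5_0,mlx5_1")  # hypothetical HCA names
os.environ.setdefault("NCCL_IB_GID_INDEX", "3")        # RoCEv2 GID index, often 3
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")    # bootstrap interface

dist.init_process_group(backend="nccl")  # rank/world size come from torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for a real model
model = DDP(model, device_ids=[local_rank])

# One training step: gradients are all-reduced across all ranks via NCCL,
# which runs over RDMA (RoCEv2) on the 400/800G fabric.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 4096, device="cuda")
loss = model(x).square().mean()
loss.backward()
opt.step()

dist.destroy_process_group()
```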

Which software is preinstalled?

NVIDIA AI Enterprise (NVAIE) as a license-included standard. Optional BYOL after review. MLOps stacks such as Kubeflow, MLflow, and Ray can be integrated.
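
As an example of wiring up one of these stacks, the following minimal sketch logs a training run to an MLflow tracking server; the tracking URI and experiment name are hypothetical placeholders:

```python
# Minimal MLflow tracking sketch; the tracking URI and names below are
# hypothetical placeholders for whatever the connected MLOps stack exposes.
import mlflow

mlflow.set_tracking_uri("https://mlflow.example.internal")  # hypothetical URL
mlflow.set_experiment("b200-finetune-demo")

with mlflow.start_run():
    mlflow.log_param("nodes", 4)
    mlflow.log_param("gpus_per_node", 8)
    for step in range(3):
        mlflow.log_metric("loss", 1.0 / (step + 1), step=step)
```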

How is GPU performance guaranteed?

Direct liquid cooling at the component level, combined with exclusive node assignment: nodes are dedicated to a single tenant via OpenStack host aggregates, so there is no shared load.

Which workloads are typical?

Foundation model training, fine-tuning, HPC simulation (CFD, FEM), computer vision AI, genomics, climate modeling. Suitable for research, industry, and medtech.