Global AI Deploys the Largest NVIDIA GB300 NVL72 Cluster in New York; Plans to Deploy NVIDIA Vera Rubin NVL72 Systems Next

Global AI has deployed a large-scale NVIDIA-powered GB300 NVL72 artificial intelligence cluster in New York, underscoring accelerating investment in the high-performance computing infrastructure needed to support advanced AI systems.
The company announced that the deployment will serve as a core part of its expanding data centre capacity, enabling more efficient training and deployment of large-scale AI models. It described the system as one of the largest GB300 NVL72 clusters in the region, designed to handle increasingly complex workloads across enterprise and research applications.
In a statement, Global AI said the new infrastructure will “significantly enhance computational performance and scalability for next-generation AI applications,” as demand for generative AI and large language models continues to surge globally.
The GB300 NVL72 architecture, developed by NVIDIA, is a rack-scale system that pairs high-density Blackwell Ultra GPUs with Grace CPUs over NVLink interconnect, enabling faster data processing and greater efficiency in large-scale model training environments. The deployment reflects a broader industry shift toward hyperscale AI infrastructure, as companies invest heavily in clusters capable of training models with billions of parameters.
NVIDIA has positioned such systems as foundational to what it calls “AI factories” – large-scale computing environments built to produce and deploy artificial intelligence at an industrial scale. Speaking at recent industry events, NVIDIA Chief Executive Officer Jensen Huang said the future of AI will depend on “massive, highly optimised computing infrastructure” capable of sustaining continuous model development and deployment.
Global AI said it is already planning the next phase of its infrastructure rollout, which will include deploying NVIDIA’s Vera Rubin NVL72 systems, the chipmaker’s next-generation AI platform. The company noted that the upcoming systems are expected to deliver significant improvements in performance and efficiency compared with current architectures.
“Our roadmap includes early adoption of next-generation AI systems to ensure we remain at the forefront of high-performance computing,” the company said in the statement.
The announcement comes amid intensifying global competition among technology firms and cloud providers to secure access to advanced AI chips and build large-scale data centres. Companies across North America, Europe and Asia have accelerated investments in GPU clusters, driven by the rapid adoption of generative AI tools across industries.
Analysts say control over compute infrastructure is becoming a critical differentiator in the AI economy, with capacity constraints and rising costs shaping how quickly organisations can develop and deploy new models.
Global AI did not disclose the financial cost of the New York deployment but indicated that further expansion plans are underway as part of its long-term strategy to scale AI infrastructure.
The move highlights the growing importance of specialised hardware and large-scale computing systems in defining the next phase of artificial intelligence development, as companies race to meet demand for faster, more efficient AI capabilities.