NxtGen Cloud Technologies has invested Rs 3,600 crore to build a sovereign AI factory in Bengaluru, a specialized high-density computing facility optimized for large-scale AI training and inference. The infrastructure features a cluster of 4,000 Nvidia Blackwell GPUs across 512 servers, with technology and hardware partnerships involving Dell, Nvidia, AMD, CommScope, and Vertiv.
The project is funded through a mix of equity from promoters and a real estate group, and debt, managed by a new entity called NxtGenAI. The facility is designed for population-scale AI inference, with a distributed storage architecture to reduce latency, and it currently has 60 use cases in production.
The company plans an additional Rs 3,600 crore investment to deploy a second 4,000-GPU cluster before the year's end.
NxtGen Cloud Technologies has invested Rs 3,600 crore to build a sovereign AI factory in collaboration with Dell Technologies, Nvidia, CommScope and Vertiv to support enterprise inference, chief executive A S Rajgopal told ET.
The investment is being managed through NxtGenAI, a newly incorporated entity set up to raise capital independently.
An AI 'factory' is specialised computing infrastructure designed to train and run AI models at scale. Unlike data centres that store or process enterprise data, AI factories are optimised for high-density, GPU-driven AI training and inferencing.
The funding comprises nearly 30% equity from an Indian real estate group and the company's promoters, with the remainder raised as debt over a 10-year period. The company did not disclose the financiers.
"When we look at population-scale inferences, you need a sizeable cluster to do it. Blackwell (GPUs) ran inference better than the Hopper platform (Nvidia's model training architecture) we had. That's the difference that (this project) brings," Rajgopal said.
The facility is designed to support population-scale AI inference using a high-density cluster of 4,000 GPUs housed in 512 servers occupying 6,000 sq ft.
Nvidia has provided BlueField-3 DPUs and Spectrum-X Ethernet networking, while the hardware includes AMD EPYC 9575F processors for compute tasks.
CommScope is responsible for cabling and Vertiv has provided the power and cooling systems. The setup will run on Dell PowerEdge R670 servers and Dell PowerScale F710 storage.
The aim is to build a more efficient, lower-latency network, Rajgopal said.
"Normally, if you have a large clustered storage, you tend to have a traffic jam from the storage itself. The storage (in this project) is spread across 60 nodes to ensure that we can deliver very high-speed interaction," he said.
The infrastructure supports Virtual Language Models (VLMs) that can run on GPUs from Nvidia, Intel and AMD, allowing the provider to offer services at different price points depending on the GPU used.
Located in the Bidadi Industrial Area in Bengaluru, the facility has developed 153 use cases, of which 60 are currently in production, Rajgopal said.
The company plans to invest another Rs 3,600 crore to deploy a second cluster of 4,000 GPUs before the end of the year.