
The demand for AI computing capacity in Asia-Pacific is growing faster than the infrastructure to support it. Southeast Asia's data centre capacity will need to triple by 2030 to meet projected demand. India's AI market is projected to reach $126 billion by 2030 and could contribute up to $1.7 trillion to GDP by 2035. Most existing facilities were built for traditional cloud workloads and are not optimized for the intensive GPU demands of AI training and inference.
Nava - formerly known as Kluisz - raised $22 million in a Series A on April 9 to address that gap. The round was led by Greenoaks, with participation from RTP Global and Unicorn India Ventures. Alongside the raise, the company rebranded to Nava and established Singapore as its regional headquarters. Total funding now stands at $31.6 million, following a $9.6 million seed round in July 2025 that was among the largest seed rounds for an AI startup at the time.
Who Built This and Why
The founding team brings operational credibility that matters in a capital-intensive infrastructure category. CEO Abhinav Sinha was Global COO and CPO at OYO, where he scaled complex operations across dozens of markets. Co-founder Vamshidhar Reddy was a McKinsey partner with experience at AMD, giving the team semiconductor and consulting depth. Abhijeet Singh, the third co-founder, was VP of Cloud at Reliance Jio - one of Asia's largest cloud deployments - and previously at AT&T.
The combination of hyperscale operational experience, semiconductor knowledge, and cloud platform execution is precisely what building vertically integrated data centre infrastructure requires. This is not a team of first-time builders learning the category from an adjacent space.
What Nava Is Building
The full-stack approach is the strategic differentiator. Rather than offering only GPU-as-a-service or only data centre colocation, Nava is building the entire chain: physical AI-optimized data centres, GPU computing infrastructure, orchestration software for distributing workloads across machines, and inference tools for production AI deployment. This vertical integration is designed to give enterprise customers better performance control, lower latency for regional workloads, and more predictable cost structures than routing through hyperscale Western cloud providers.
Capital will go toward expanding the GPU compute and data centre footprint across APAC, with hiring focused on data centre design, GPU engineering, go-to-market, and operations - primarily across India and Southeast Asia.
The Broader Context
India faces a structural shortfall in AI-ready data centre capacity as demand accelerates, with over $200 billion being invested in the country's AI infrastructure ecosystem. Large conglomerates and global tech firms are racing to build AI-ready facilities, but purpose-built AI infrastructure - designed from the ground up for GPU workloads rather than retrofitted from traditional cloud facilities - remains limited. Nava's thesis is that a full-stack, Asia-native AI cloud provider is a structurally different offering than a Western hyperscaler with regional presence, particularly for enterprises building AI products that require low-latency local compute and data sovereignty.



