India’s AI governance moment: Why good intent must now translate into deep capability
India’s AI governance guidelines have won applause for their tone and ambition. They project the country as a responsible, inclusive leader — favouring voluntary guardrails over heavy-handed regulation, and rooting innovation in digital public infrastructure rather than compliance burdens. This approach resonates strongly across the Global South. But once the applause dies down, a harder question emerges: is India doing enough to build AI, not just deploy it?
Why India’s AI approach has found global resonance
At the heart of India’s AI narrative is its digital public infrastructure (DPI). Platforms such as Aadhaar, UPI, DigiLocker, Bhashini and DEPA give developers a rare advantage — ready-made national-scale rails for identity, payments, consented data-sharing and multilingual access.
This integrated architecture allows AI applications to scale rapidly in real-world contexts, from fintech and governance to health and language services. No other country offers this level of interoperable infrastructure at population scale. As a result, India is among the best places globally to deploy AI solutions, especially those aimed at inclusion and last-mile delivery.
The missing layer: From applying AI to building it
The challenge begins deeper in the technology stack. India excels at applying AI, but lags in building foundational models. This imbalance has long-term consequences.
One core problem is legal uncertainty around training data. India’s Copyright Act has not been updated for AI. There is no explicit exemption for text-and-data mining of publicly available material. Developers cannot be sure whether large-scale model training is lawful. Faced with this ambiguity, many start-ups limit themselves to fine-tuning open models, move compute workloads offshore, or avoid foundational research altogether. The result is a quiet erosion of technological sovereignty.
Liability gaps that deter serious innovation
A second unresolved issue is liability. If an AI system causes harm, who is responsible — the model developer, the deploying enterprise, or the hosting platform? India’s current guidelines acknowledge the question but defer answers to future regulation.
That uncertainty matters most for smaller firms and research-driven start-ups. In high-stakes sectors like finance or healthcare, unclear liability is enough to deter foundational development. Predictable risk allocation is a prerequisite for serious innovation; without it, ambition remains rhetorical.
Underused research talent and constrained compute
India’s academic AI talent is strong, but poorly supported by infrastructure. Initiatives such as AIRAWAT signal intent, yet access remains opaque: documentation is limited, onboarding unclear, and approvals slow. University labs cannot spin up large training jobs without navigating bureaucratic bottlenecks.
This stands in contrast to countries that treat compute as strategic infrastructure — aggressively funding shared facilities and ensuring frictionless access for researchers. Without such a backbone, experimentation slows and talent drifts.
What other countries are doing differently
Globally, governments are not just regulating AI — they are enabling it. The UAE has backed Falcon, a nationally supported open-source language model. Singapore is pairing governance with clear rules on explainability and redress. The EU, despite its regulatory reputation, has created research carve-outs and legal certainty for developers. The US continues to operate under a broad fair-use doctrine that gives developers room to experiment.
China’s experience offers a cautionary lesson. DeepSeek’s models demonstrated impressive cost and performance gains, briefly rattling global tech markets. But privacy concerns and trust deficits limited global adoption. Capability alone is not enough; openness, alignment and governance determine scale.
India’s real risk: Dependency at the core
India’s current advantage lies in deployment — local-language applications, citizen services, and DPI-linked platforms. But the deeper one goes into model training, risk calibration and core architecture, the thinner policy support becomes.
Over time, this creates dependency. If India does not build its own models, it relies on others’ assumptions, licences and priorities. Public services and user experiences end up shaped by systems India did not design. This is not merely a technical issue; it is about control, resilience and competitiveness in a strategic technology.
What targeted policy fixes could unlock capability
India does not need sweeping new laws to change course. A few targeted interventions could have outsized impact.
First, legal clarity. Explicitly permitting the use of publicly available data for AI research and training would immediately unlock experimentation across academia and start-ups.
Second, safe harbours. Developers need proportional, predictable liability — similar to protections once extended to internet intermediaries — so misuse by third parties does not automatically translate into legal risk.
Third, usable compute. AIRAWAT must become genuinely accessible: clear norms, simple onboarding, and shared compute clusters with minimal friction.
Fourth, regulatory sandboxes. Structured AI sandboxes, co-run with sectoral regulators, would allow innovation in finance, health and governance to proceed free of paralysing legal uncertainty.
Finally, lightweight certification. Models that pass baseline tests for fairness, transparency and robustness should be able to plug into DPI use cases by default, creating incentives for responsible development.
The choice before India — and the Global South
India’s current AI governance framework deserves credit for avoiding overregulation and acknowledging complexity. But it is only the first chapter. The next must focus on creation, not just protection.
India has the ingredients — talent, data, market scale, public infrastructure and institutional maturity. What it needs now is policy that converts these into capability. The Global South is watching closely. If India can offer a governance model that is both inclusive and production-ready, it will not just use AI safely — it will help define how the next generation of AI is built.