Akriti Vasudeva: Well, hello and welcome to everyone joining us today. Thank you so much for being here at this discussion. My name is Akriti Vasudeva Kalyankar. I’m a fellow in the Stimson South Asia program. And I know many of you are joining us from a cold and icy Washington, and hopefully from a pleasant evening in Delhi. I’m really delighted to welcome you to this conversation on what is really a growth area in the U.S.-India relationship, which is artificial intelligence or AI.
And I wanted to start with reflecting on where we are on this journey. Over the years, the conversation around AI has really shifted beyond the initial discussions on risks and the technical aspects of this transformative technology to more about adoption and impact. And the question that we find ourselves asking now is, how can AI be used as a tool for inclusive economic and social development?
So, in this context, it’s crucial to consider how Washington and Delhi can contribute to this discussion. Both the United States and India are two of the most forward-looking societies when it comes to AI penetration, research, and policy discussion. And given the United States’ vast AI infrastructure and research resources and India’s massive talent pool and the scale it provides, it makes sense to think about jointly addressing the challenges and opportunities that this technology presents. So, today’s conversation will focus on the role of open-source AI development and deployment and the potential for U.S.-India collaboration in this area.
Now, in true AI cross-functional fashion, this event, too, is a collaboration between the Stimson South Asia program and the Strategic Foresight Hub.
And I want to thank my colleague and co-moderator Giulia Neaher and Isaac Halaszi for all of their help on this event.
So, to kick things off, I am really delighted to invite Sunayna Dabas of the Indian Embassy in Washington to tell us a little bit about India’s AI goals and priorities and the evolution of U.S.-India AI collaboration. Sunayna is a seasoned diplomat who serves as First Secretary for Strategic Technology & Education and has previously served in Russia as well at headquarters in Delhi.
But before I turn to Sunayna, I do want to offer a word of thanks to the Indian Ministry of Electronics and IT and the Indian Embassy in Washington for their support on this event. And Stimson is looking forward to continuing these conversations on the ground at the AI Impact Summit in Delhi next month.
With that, Sunayna, the floor is yours.
Sunayna Dabas: Thank you, Akriti. Good morning to everyone. I hope it’s a warm morning for all of you, keeping in mind the weather conditions in D.C. over the last two days. So, I would like to thank, first of all, the Stimson Center for convening this discussion today on democratizing AI, especially in the context of the upcoming AI summit that India is going to host from February 19th through 20th.
Akriti, first of all, I would like to thank you especially for this opportunity, for this event where we are actually discussing this, because I remember we have been talking about it for the past two or three months now, that we should really do something on the AI infrastructure side, something in the India-U.S. context. There are a lot of pre-summit events happening right now in D.C., multiple events. But this is one part which is really relevant, and I think there’s a lot of opportunity. Some of it is already happening, and I think there’s going to be a lot more in the coming months and years.
So, if we speak about the AI Impact Summit, I would like to tell everyone here that we are expecting participation from several global leaders, including more than 25 heads of state and government, and various technology leaders from all over the world. The kind of enthusiasm that we have seen from the U.S. side is especially, I would say, very encouraging. And you would really find the who’s who of the U.S. AI ecosystem in Delhi next month. From the government side also, I am probably not at liberty to say right now who is coming. We know who’s coming. But it’s really a very high-powered delegation that would be there.
So, you can expect the top AI voices in the U.S. government there. I mean, I’ll leave it there.
So, how we are looking at AI is that the AI revolution, we all know, will unfold over the next few years, and this probably is still an early stage. So, the India AI Impact Summit that we are hosting comes at a very defining moment, where we are already seeing transformations by AI in our daily lives. And of course, there’s immense scope for further innovation. So, we are approaching this AI moment with a lot of purpose and intentionality. If you recall the previous AI summits, the discussions in the UK, Seoul, and Paris, I think they laid really important groundwork by focusing on safety, ethics, and collaborative action. However, now we are aiming to shift the conversation from commitments to outcomes, from principles to practice, and from national strategies to measurable global progress. That is why we are calling it the Impact Summit. We see the summit being anchored in three themes: people, planet, and progress. And this highlights the view that we have towards AI adoption and governance.
So, if we really talk about the objectives of the upcoming India AI Impact Summit, I think it has been really designed with a very clear focus on outcomes. So, the first objective of the summit is impact. So that would be how AI models, applications, and the overall AI ecosystem can be used to improve efficiency, increase productivity, and create a multiplier effect for the economy.
The second objective is accessibility, particularly for India and the Global South. We have already proved our unique ability in building the UPI and DPI stack. I presume, Akriti, people here would be familiar with what I’m talking about, DPI and… Yeah? So, the world is now looking to India to see whether a similar scalable and affordable stack can also be created for AI.
And the third objective is, of course, safety. You said that the conversations have shifted away from the risk part of it, but I think some apprehensions about AI still exist. So there is a need to build appropriate guardrails, guidelines, and safety features.
So, what we are discussing today intersects directly with at least one of the objectives, the accessibility objective, and maybe indirectly with the other two, too. So, if we talk about the policy priority of India, especially in terms of democratizing access, we are looking at it in the sense that you make AI infrastructure available to everyone, and not just available, but also affordable to everyone. So, when compute, datasets, and model tooling, the three basic AI infrastructure layers, are broadly available, individuals and institutions expand what they are able to do, like designing local language tools or adapting assistive technologies.
If I talk about some examples of how we are enabling this for public good, we have developed various AI applications that are actually transforming the very nature of Indian society, because there are sectors which are very peculiar to Indian society. Agriculture, for instance, touches far more lives in India than it does in many other countries. So, we are working in agriculture, healthcare, education, manufacturing, transport, governance, and climate action. If we speak about agriculture, we have developed AI-based sowing advice that helps farmers increase their crop yields. In healthcare, we are bringing top-notch hospital capabilities to district hospitals. And there are some Indian startups developing AI screening tools that support doctors by analyzing some of the medical data they have.
Then, you know, India is a land of diversity. We have many languages, 22 officially recognized languages. So there is an Indian company called Sarvam AI. They’re building AI models in multiple Indian languages.
And these models are being trained from scratch in 22 Indian languages and also six UN languages. So, these models with India-centric foundations will support research, startups, and public sector applications. And this is actually not just text-based, it is also voice-based. So, as you know, for a farmer living in a remote district, it’s far easier to just leave a voice note or do voice-based chatting with a device.
So, access to AI infrastructure, including compute, datasets, and model ecosystems, is, I think, a critical determinant of innovation, competitiveness, and governance in the digital economy. So, for India, democratizing access means treating all these building blocks as shared resources so that innovators everywhere can participate in shaping the AI age. And we are looking at it as a matrix of three enablers, if we can call them that. First is expanding access to high-quality and representative datasets. Second is providing affordable and reliable access to computing resources. And third is integrating AI with the DPI. So, democratizing access to AI infrastructure has really become a policy priority for us.
I would like to speak a little bit about India-U.S. cooperation. I would like to say that we bring our own intrinsic strengths to the large-scale adoption of AI. The scale and the diversity that we have make us one of the world’s largest and richest resources of varied datasets. Digital Public Infrastructure built on unique identification and other population-scale platforms already provides a blueprint for frugal, inclusive, and globally deployable innovation tools.
In India, we are viewing AI as a transformative force for inclusive growth and development. Our GDP is currently $4 trillion. By 2030, we aspire to be the world’s third-largest economy with a GDP of $7 trillion. So, we see the U.S. government and U.S. tech industry ecosystems as part of India’s AI journey. You would recall that in November and December, we saw a lot of AI-related investments from the U.S. in the Indian AI ecosystem. We saw sharp acceleration in investment from the U.S. hyperscalers, including Amazon, Microsoft, and Google. And they were primarily directed towards cloud infrastructure, data centers, and AI compute. So, our strengths are complementary. The U.S. brings cutting-edge AI research, advanced labs, global cloud providers, and deep venture capital ecosystems. What we bring to the table is scale, talent, diversity, and an expansive digital and startup landscape. We are already seeing promising momentum. Industry partnerships that bring together U.S. leadership in advanced technologies and India’s strengths in scale and implementation can, I think, really generate solutions with global relevance.
So, as we move towards the Delhi summit, our message to the world is clear, that the future of AI must be inclusive, transparent, safe, and governed through collective stewardship. So, I think India AI Impact Summit will be a pivotal platform to turn the bilateral vision that we have into global collaborative action. And I really look forward to the discussions today, especially with the eminent panelists that we have here. So, thank you for this opportunity again, and thank you so much, Akriti.
That’s all from my side.
Giulia Neaher: Thank you so much, Sunayna, for your remarks today. It’s really a pleasure to have you on board. And I think you’ve given us a lot of great food for thought for today’s discussion with the panelists. I’m really looking forward to seeing how the summit’s discussions on access play out. We do a lot of work in that space here at Stimson, and it’s really evident that different areas and different economies have respectively different needs and expectations for AI. So, hopefully the summit can be a really good ground for collaboration in that space as well.
Today’s discussion is going to focus on kind of a specific piece of this access and democratization puzzle, which is open-source and open-weight AI. It’s been touted as a real potential equalizer in AI innovation and accessibility around the world. We’ll get into that a little bit more in the discussion. We’re going to weigh some of the ways that the U.S. and India may be able to collaborate in that space as well in particular.
I should introduce myself. My name is Giulia Neaher. I’m a research analyst in the Strategic Foresight Hub here at Stimson, where we run several ongoing projects related to the future of AI and responsible AI and accessibility. As Akriti mentioned, I’ll be co-moderating today’s panel with her, and we’re really looking forward to speaking with our wonderful panel guests today.
So, I’m pleased to now introduce our three panelists. We have Vivan Amin, Jibu Elias, and Anand Raghuraman. Between them, they bring a great balance of technical expertise and also policy knowledge of the AI world in both the U.S. and India. And I’ll briefly introduce each of them before we jump into the moderated discussion.
So, going alphabetically, I’ll first introduce Vivan Amin, who is a research incubations director and research technical program manager at Microsoft Research. He plays a pivotal role there in accelerating research and innovation globally, and his work spans technical and socio-technical research teams and projects, driving research to business impact. He leads cross-company Embodied AI research within MSR. We’re really excited to have his technical perspective on the panel.
Jibu Elias is an AI ethicist, researcher, and author whose work focuses on the real-world social, cultural, and political impacts of AI. He’s the co-creator and former head of research at IndiaAI, where he helped shape the country’s AI policy ecosystem. And today, he works independently through nonprofit initiatives and international fellowships, including with the Mozilla Foundation and with us here at Stimson in the Responsible AI Fellowship.
Anand Raghuraman is the director for Global Public Policy at Mastercard, where he supports the Policy Center for Digital Economy and helps drive Mastercard’s thought leadership initiatives at the intersection of trade, technology, and geopolitics. He’s also focused on emerging trends in data nationalism and digital governance in the Indo-Pacific. And he previously was a vice president at the international consulting firm The Asia Group, where he was responsible for the firm’s client servicing and strategic partnerships in South Asia.
So, as you can all tell, we have a really expansive set of perspectives on the panel today. And with that, I think we can jump into the panel discussion. Akriti, do you want to kick us off?
Akriti Vasudeva: Yes, thank you so much, Giulia. And thank you again to our panelists for joining us today. I want to start off the conversation a bit broadly, and I think Sunayna touched on this a little bit. But maybe Anand, to you, and Jibu, to you as well, let’s reflect a little bit over the past year of U.S.-India AI cooperation. So, it was almost a year ago that President Trump and Prime Minister Modi announced a U.S.-India roadmap on accelerating AI infrastructure. This is, of course, part of the TRUST initiative, which is a new partnership focused on critical and emerging technology. And in parallel, in that time, we’ve also seen American firms really increase their footprint in India. So you’ve seen investments and infrastructure and scaling initiatives from Anthropic, OpenAI, Google, and Microsoft. And I really want to understand, how has this dual government and industry push helped to actualize AI cooperation? And if there have been some holdups in this endeavor, what are they? So Anand, maybe to you first, and then Jibu.
Anand Raghuraman: Thanks, Akriti. And let me first off start by saying what a pleasure it is to be with all of you today for this really important discussion. Akriti, Giulia, thank you, and the Stimson Center, and also the embassy, for pulling this together.
So, Akriti, I want to take your question head on and kind of distill it down to its essence, which is, to me, a simple three words. Is this working? Can we look at what’s happening in the policy space and the investment that we’re seeing in the private sector and start to connect the dots into a trend in the AI space? And to me, the headline here is two words that will be familiar to many of us in the improv space, which is yes, and.
Right? So, let’s start by unpacking the yes, and then we’ll switch to the and.
So, three green shoots that I would highlight over the course of the year in the AI space. The first is insulation. Beyond the specific working groups and the policy recommendations that TRUST has helped set up, really the mere existence of these platforms has embedded AI cooperation into an institutional channel that links government, regulators, and the private sector into a protected lane of continuity. Now, that’s important because even as tariffs fluctuate, as trade battles continue to flare up, as sovereignty debates sharpen, the AI channel continues moving, right? It’s moving steadily with some level of insulation from the broader political theatrics that we see from time to time. And I think in this environment, that’s no small feat. So, insulation is green shoot number one.
The second, I would say, is signaling, right? Billions of dollars are flowing from the private sector into data centers in India and semiconductor operations, much of it led by the U.S. companies that you laid out earlier. Now, I’d attribute that flow largely to the fundamentals of India, right? It offers scale, talent, demand, some level of stability, the data, all of those elements that Sunayna, I think, highlighted quite nicely in her remarks. But I would argue also that the policy work has an important signaling effect as well, right? The TRUST initiative is sending a signal to C-suites that is affirming that the corridor between the U.S. and India is open, that it’s here to stay, that it’s durable, and that if you bet big on the U.S.-India AI story, you will see returns now and into the future, right?
So, the third green shoot that I would highlight, and this is really a sleeper achievement, is permissioning. Now, before TRUST, companies were in many ways flying in a fog, right? There was uncertainty around rules on compute access, trusted vendors, data governance, technology protection. And what we’ve seen with TRUST over the past year is a beginning of a process to clear that fog, right? Sketching out a pathway where American AI technology can scale in India while really threading the needle between Washington’s considerations on security and then also Delhi’s important desire to see India’s AI ecosystem play a more prominent role in this equation.
So, I’d say those three pieces are the green shoots that I attribute to the yes. So, now let’s turn to the and. And here, I think it’s really a set of a couple of imponderables that we have to consider that will shape kind of the trajectory of AI collaboration going forward.
The first big one to me is sovereignty. As India’s own AI ambitions grow, how comfortable will we be hosting large-scale foreign infrastructure?
And that question, in many ways, becomes sharper after what just happened last week in Davos.
Second, the export control politics, right? The AI Diffusion Rule may be rescinded, but export controls remain a feature of the landscape. So, ultimately, the question about compute access that can be deployed in India and how it scales, that’s going to be shaped in part by decisions that take place here in Washington, right?
Third imponderable, geopolitical weather. We still see trade disputes. There’s visa friction. There are defense procurement tensions, crises that can spill into technology cooperation. I’ve said TRUST provided some level of insulation from volatility, but it doesn’t eliminate it altogether.
Fourth imponderable, the physical constraints, right? This is just going back to basics on power, land, grid reliability, water for cooling. These are the bottlenecks that we’ve seen traditionally. And diplomacy can help here, and we see discrete initiatives to unlock some of those constraints, but we can’t scale AI faster than what infrastructure and electrons allow. Right?
The last one that I’d point out, again, reflecting on the events of the last couple weeks, is the rise of competing tech blocs, right? Pax Silica is one pole. We’re seeing Europe starting to build another. China, obviously, is constructing its own.
So, the success of U.S.-India cooperation, in part, will hinge on how India wants to play and calibrate its engagement with each of these different blocs going forward.
So, I’ll pause there for a moment. And again, I welcome further discussion on this point.
Akriti Vasudeva: Anand, thank you. That was really comprehensive. Jibu, what’s your take?
Jibu Elias: Yes. Thank you for this opportunity, and great to be here with this group of really brilliant minds. So, I will open from where I sit in India. I would say the cooperation is real, but uneven, right? On paper, the last year has moved things forward, as we have seen. The U.S.-India roadmap put AI infrastructure explicitly on the table. At the same time, U.S. firms have continued to deepen their footprint in India, through cloud platforms, enterprise AI adoption, service delivery, and access to models. That combination matters. It has actually moved the cooperation from an abstract strategy to more of a day-to-day reality. But the holdups are not ideological. They are actually structural. And as Anand just mentioned before me, we can’t accelerate AI beyond the infrastructure. So, the biggest constraint here is infrastructure, not intent, right? AI at scale, as you know, needs reliable power, land, network backhaul, predictable timelines. Even when capital is available, execution slows down because of infrastructure-related things, like grid stability, permitting, or data center readiness. That takes a lot of time.
And a second point of friction, I will say, is more about trust and compliance. So, we are operating in a world of, like we said, export controls, sensitive chips, and trusted supply chains. And that creates unavoidable friction, even when both sides want to cooperate and move forward.
And of course, the third part is mostly about data governance and the liability gap. AI systems, as we know, are probabilistic. And Indian enterprises and public institutions still struggle with these questions. Who owns the outcome? Who is accountable when there’s an issue with the model? How is liability assigned, right? Procurement frameworks weren’t designed for this. And largely, there is a genuine concern regarding trust amongst Indian enterprises when it comes to AI adoption.
And so, my view is that alignment exists at the strategic level, and industry cooperation is happening on a practical level. But the next phase depends on whether we invest in the boring but necessary layers, such as compute, power infrastructure, compliance, capacity, and trust mechanisms in a larger sense. So, India and the U.S. agree on where to go. But at this point, we are still building the road that will take us there.
I’ll stop here.
Giulia Neaher: Awesome. Thank you both. Vivan, I’d love to hear your perspective there as well, but I would also love to tap your expertise on the technical piece as we kind of jump into that part of the discussion as well. So, we’ve talked a little bit about how a common goal here is accessibility and there are a lot of different ways that governments are tracking that goal. I think open-source and open-weight AI has been highlighted as a potential avenue for access, especially across the global majority. And I think that it would be helpful as we jump into this discussion to get a little bit of a sense of what are the benefits of open-source and open-weight AI as opposed to closed models, and whether you have a sense of the different attitudes in the U.S. and India towards those benefits and those drawbacks as well, because there are unique risks in the space, too. So, I’ll hand it over to you.
Vivan Amin: Yeah. Thank you, Giulia. And thanks again, Akriti and Sunayna, for those insightful remarks, and to my co-panelists. Also, it’s great to see how quickly we pivoted to an online discussion given Mother Nature and the weather here in Washington, D.C. So, real applause to the team there.
Would love to share a few pointers on open source and open weights, right? The way I view it, it’s through two distinct lenses. One is the strategic lens of trust. What does trust really mean? Is it transparency or is it explainability? And then the technical lens is how AI is going to operate with physics, right? It’s the world that we live in.
So, let’s talk about trust and sovereignty. So, for a long time, the conversation has been around open source, closed source. It’s something even we as Big Tech companies struggle with, right? What is the right medium to get distribution to the masses for an outcome-based mentality there? But for governments and critical industry, whether you’re here in D.C. or in Delhi, it’s actually a sovereignty decision over trust.
So, when you rely solely on closed models, which we call black-box APIs, you’re essentially renting intelligence. You don’t really own the brain. You’re just leasing access to it. You have an API that runs. You can’t really inspect for bias, because when a model is closed, you don’t know what it’s built upon. You can’t really audit for vulnerabilities, whether on safety, privacy, transparency, et cetera. And critically, you don’t own the continuity. Like, “Hey, how long are these prediction tokens when I start talking to them? And what happens when the provider updates the model itself? Do you know what updates are being made to the weights and the biases of those models?” So, that’s the closed-model side, but closed models do offer a positive outcome as well, which is getting AI out to the masses and continuing to make sure there are safety gates, some of the safety guardrails, embedded into them.
Open-weight models sometimes flip the dynamics. They allow nations or enterprises to look under the hood, which allows us to inspect the weights, the building blocks, the bricks, the cement, the concrete that’s been laid out. They can help you fine-tune. So, hey, if something’s not working for you, how can you fine-tune to make your model more secure? You can localize your data. And security may mean different things within a country, a culture, and the industries that operate there. Security, for instance, in an industrial environment versus your home are two different things. One can weigh privacy more versus security. So, how can you turn the knobs when you have an open model that allows that accessibility? And if we want AI to be true digital public infrastructure, that level of inspectability has got to be non-negotiable, right?
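The inspectability Vivan describes can be sketched with a toy example. Assuming a checkpoint is available as plain arrays of numbers (real open-weight releases ship billions of parameters in formats such as safetensors, and the layer names below are invented for illustration), an auditor can load the weights and compute basic statistics directly, something a closed API never permits:

```python
import math
import random

# Toy stand-in for an open-weight checkpoint: layer names mapped to
# flat lists of parameters. Values are randomly generated here purely
# to have something to inspect.
random.seed(0)
checkpoint = {
    "embedding.weight": [random.gauss(0, 0.02) for _ in range(1000)],
    "attention.q_proj": [random.gauss(0, 0.02) for _ in range(1000)],
    "lm_head.weight":   [random.gauss(0, 0.02) for _ in range(1000)],
}

def audit(weights):
    """Per-layer statistics an auditor might check for anomalies,
    e.g. dead layers, extreme outliers, or unexpected scales."""
    report = {}
    for name, w in weights.items():
        mean = sum(w) / len(w)
        std = math.sqrt(sum((x - mean) ** 2 for x in w) / len(w))
        report[name] = {"params": len(w), "mean": mean, "std": std,
                        "max_abs": max(abs(x) for x in w)}
    return report

report = audit(checkpoint)
for name, stats in report.items():
    print(f"{name}: {stats['params']} params, std={stats['std']:.4f}")
```

This is only the first rung of inspectability; real audits go further into training data provenance, bias probes, and red-teaming, but none of that is possible without access to the weights in the first place.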
The second lens I’ll pivot to is the technical one. There’s a lot of talk about AGI and all that. But unless models can interpret our physical world and meaningfully lift people up, whether it’s the GDP of nations, lifting people out of poverty, et cetera, AGI is just talk within the digital world, right? So, the physical world matters a lot, because that’s the world that we live in. And in the work that we are doing at Microsoft Research, on language, socioeconomic issues, climate change, and physical AI infrastructure, we’re trying to lay out the connective tissue to empower everyone on this planet to achieve more.
We often forget that the cloud isn’t everywhere, and it isn’t always instant. In the world of embodied intelligence, robots, drones, autonomous systems, latency and connectivity can become a safety hazard. And open-weight models are also key to solving this. They allow us to take the massive power of these foundation models and distill them to run localized, at the edge, because you can fine-tune the parameters to see what matters most directly on the device operating in our real world. That can help you unlock a class of AI that’s private, fast, and resilient, right? Private meaning the data never leaves the device. Fast meaning zero latency in decision-making. That’s different when you have an agentic AI in the digital world versus an autonomous car or a robot operating, right? Decision-making is different there. And then resiliency. Even when you lose connectivity or power, what happens then?
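The shrink-to-the-edge idea Vivan raises is possible precisely because open weights can be post-processed by anyone who downloads them. A minimal sketch of one such technique, symmetric 8-bit quantization, using a handful of made-up weight values standing in for a real model:

```python
# Minimal 8-bit quantization sketch: map floats to one signed byte each,
# trading a little precision for a roughly 4x smaller, edge-friendly model.
def quantize(weights):
    """Symmetric linear quantization of a list of floats to int8 range."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized integers."""
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.08, -0.91]  # toy values, not real weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each weight now fits in one byte instead of four...
assert all(-128 <= x <= 127 for x in q)
# ...and the round-trip error is bounded by half a quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
print("max error:", max(abs(a - b) for a, b in zip(weights, restored)))
```

Production toolchains use more sophisticated variants (per-channel scales, calibration data, distillation into smaller architectures), but the core trade-off is the same: accept bounded precision loss in exchange for a model small enough to run privately, fast, on-device.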
So, for me, open source isn’t just a nice-to-have. It’s almost a requirement for both the U.S. and India, because the developer community is so strong across both of these countries. And it’s the fundamental architecture required to build AI that’s both trustworthy for the state and safe for the physical world. It does come with its own challenges around cultural nuances as well.
Giulia Neaher: Awesome. Thank you so much, Vivan. Oh, back to you, Akriti.
Akriti Vasudeva: Thank you so much, Vivan. I do want to remind our audience at this point that we will be taking questions towards the second half of this discussion. So please do enter your query into the Q&A box at the bottom of your Zoom screen, and we will bring it to our panelists.
And I’m really learning a lot from this discussion. We have looked at the policy environment so far, and we’ve talked a little bit about the technical aspects. Let’s tie this together and think about avenues for U.S.-India collaboration. So, one of the things that experts talk about often in the bilateral context when we talk about open-source AI is how India can essentially adopt U.S.-origin open-weight models and build on top of those models, right? And that’s a key area of cooperation. What are some of the challenges and opportunities in doing that from both the U.S. and Indian perspective? And I want to open this question up to all of you. Jibu, maybe we’ll start with you, then go to Vivan, and Anand, if you have thoughts as well.
Jibu Elias: Sure, Akriti. I think when it comes to adoption of U.S.-origin open-weight models, there are opportunities and challenges for both countries. And it’s one of the most practical collaboration paths we have, if we are honest about the trade-offs.
Now, in India, the opportunity is quite clear, right? Open-weight models dramatically lower the entry barrier. They allow Indian teams to focus mostly on localization, multilingual systems, and domain-specific adaptations without burning capital training frontier models from scratch. I think Sunayna earlier mentioned Sarvam and other entities working in agriculture and healthcare and so on. And Sarvam is a very good example of how they have been doing localization and building multilingual systems, right? So, open-weight models offer this middle path on sovereignty, so that models can be hosted, fine-tuned, and governed locally rather than pushing sensitive data through opaque APIs. And as Vivan pointed out, it also helps enterprises and startups run these things locally, on the edge, answering the privacy question at large as well.
But on the other hand, it also brings real challenges. Running these open-weight models still requires a lot of compute. And it requires security discipline, operational maturity, and capabilities that are right now uneven across the AI ecosystem. And importantly, the larger question about governance comes into the picture. The governance responsibility now shifts downstream. Once you adapt these models, it’s on the entrepreneurs or startups who are doing that. So, the larger questions of documentation, evaluation, and, importantly, explainability fall on them. And many Indian firms are still building these muscles, so this responsibility will fall on them at an early stage.
There’s also a strategic risk. If India only builds on top of external foundations without investing in its own evaluation and research capacity, then that dependency changes the shape of the whole cooperation itself. I think that’s one of the reasons we are seeing startups like Sarvam, Soket AI, and a few others building Indian foundation models as we speak. So, yeah, I think I will stop here.
Vivan Amin: Yeah. That’s a fantastic point, Jibu. I really like that.
So, the way I view it, it’s a perfect engine-and-chassis partnership between our two countries. The opportunity here is clear, and it’s ever-evolving in this age of AI. Today, the U.S. is building the best world-class engines, and you’ll have to forgive my car passion and my aerospace background, the best engines being these large, massive foundation models, right? Whether closed source, open source, and so on. But India has the world’s largest ecosystem of engineers who can actually build the car, right? That’s the application layer that we want to really drive forward. You’ve got the engine coming in, and you’ve got these engineers who are striving to assemble it into meaningful products and applications for humanity and to move progress forward.
So, if India becomes the application capital of the world using open models, I think both economies can win, right? Because right now, what’s happening is, models are getting built, they’re getting scaled, some of them are massive, and some of them may or may not align with socio-technical norms. But yeah, the challenge, Jibu, that you talked about, that’s real. We have those infrastructure challenges here in the U.S. as well, whether it’s sheer compute, power, water, right? You can download a model for free, but you can’t really run it on thin air, right? You need that infrastructure built. The other day, I was in a data center, and I saw the sheer volume of cooling capacity. We saw a GPU rack, and right next to it was a cooling rack three times its size. So you can just imagine the infrastructure that’s needed.
So, yeah, there is a compute divide. And I do feel that today, in 2026, for India to truly adopt and adapt these models, we need to solve that infrastructure and hardware bottleneck, which I think both countries have phenomenal intellectual capacity and power to do. We do need more local GPU capacity in India from an infrastructure standpoint, so developers aren’t just consumers of AI. They are also creators who can actually fine-tune these models to local needs, to Sunayna’s earlier point about how we get it to the farmer and deploy it as well.
Anand Raghuraman: Akriti, I’ll just add from my perspective. We’ve talked at the level of building the engines. Vivan, I love this car metaphor: okay, we’ve got the road, we’ve got the engine, and now we’ve got the application layer, as you mentioned.
I want to take it a step further and imagine a world 5-10 years down the line where the adoption of open-weight systems and applications has truly democratized their use in India. I think the U.S. and India need to be preparing for that, exploring new models of cooperation that ensure the value creation from AI is also democratized, extends beyond the corporate sector, and is broadly felt. At Mastercard, we have this thesis around inclusive growth: how does the rising tide lift all boats? And between India and the U.S., I think there’s an opportunity to look at types of partnerships and models that ensure, as the diffusion of these AI applications extends beyond the core metropolitan cities, that these communities are brought into the value-creation story as well.
So, one example of this that I often think about: Mastercard’s Center for Inclusive Growth partners with Karya, India’s first ethical AI data cooperative, which connects lower-income and rural communities, primarily women, to digital micro-tasks that generate training data in underrepresented Indian languages, right? That’s a beautiful partnership. And the result is that we get more inclusive AI models, digital livelihoods that start to build trust in the technology itself, and greater adoption among folks who are encountering AI in these applications for the first time. Right?
So, that, to me, is going to be the real test. We can talk in theory about the models, we can talk about the diffusion of the applications, but how does this all translate into genuine, inclusive growth that is felt beyond the urban centers? That, to me, is a headline project that both the U.S. and India have to take on, and it can really be a model for the world as we think beyond just the U.S.-India axis to the Global South, and also, frankly, the entire international community.
Giulia Neaher: Thank you all so much for that. I’d like to turn a little bit also to how international collaboration may then impact technology on the cutting edge. And you’ve all kind of alluded to this in the sense of… I also loved the car metaphor of the Indian engineers building the chassis of the car. But I’m wondering, Vivan, in particular, if you have any thoughts about what a world where there is strong collaboration in this space versus a world where there’s less collaboration in this space might mean for you and for other developers who are working on the technical side.
Vivan Amin: Thanks, Giulia. Yeah. What Anand said really resonated: inclusivity is a big keynote here. Just look at the diversity of languages, gestures, dialects, and cultures in India, and still we have not perfected it, right? Even as humans, there are certain words and pronunciations that need emphasis. So, how do we get a generative AI model to understand those nuances?
So, I do think that currently, we are at a massive inflection point just because the technology and its rapid adoption are moving so quickly. We’ve been living in this gen AI era, the GPT moment, et cetera, and that’s been amazing, largely through digital AI: chatbots, code generators, pixels on the screen, diffusion models that create images, et cetera.
I think as we step into this year and the next few years, the question is how we translate this from generative AI towards interactive AI, whose grassroots you’re already starting to see in this agentic world, and then into physical AI. So, not just a model that can predict tokens to talk, but one that can actually act in the physical world. We are effectively giving the brain a body to do meaningful work. And across manufacturing, logistics, healthcare, and agriculture, the physical environment is probably the most complex part, and I can’t emphasize that enough. You could just watch the traffic videos from India, right?
Like, how do you get a physical embodiment to come and experience the world, one that can also translate and become the model for the international community as well, with all these nuanced variables to train on?
So, the grand challenge we face is this: we’ve successfully trained the models on nearly the entirety of text content on the internet. They know how to write poetry in Sanskrit, just write, not express, and write code in Python. But these models still have very little socio-cultural nuance, interaction with the physical world, or geometric representation. They don’t know how to fold laundry. I mean, yes, there are a lot of videos, and our team is also trying to do that. But yes, we are trying to navigate this chaotic intersection.
And our world is messy, right? We have poor data when it comes to interaction and cultural nuances. We don’t record everything, et cetera. One of the projects we have started, called Project Gecko, along with MMCTAgent, aims to get Copilots into farmers’ hands across the Global South. It can actually help them with weather patterns and cultural nuances, from taking images of the crops to now going into the physical world: “Hey, can I actually deploy a drone to go inspect?” et cetera.
So, I do feel the U.S.-India partnership stops being just diplomacy and becomes a technical necessity. If we train in a pristine, structured environment, whether in Seattle, Silicon Valley, Bangalore, anywhere, does it account for the full environment and the variability in that environment itself?
So, that brings me to the core thesis: the robustness divide. India offers one of the most complex, dynamic, variable, unstructured environments, which is so critical to understand because we don’t have data from the unstructured world. The diversity of edge cases in traffic or infrastructure situations, or the weather, is unmatched. If we can build a collaborative system that can navigate a busy street in Bangalore, or even here in Silicon Valley, or manage a supply chain in rural India, we haven’t just built a solution for both countries; I think we’ve built a globally robust system that will actually work in any part of the planet, setting aside socio-cultural nuances. And the logic is simple at the end of the day: if it works in complex environments, it could work in structured ones. But the reverse cannot be true.
Akriti Vasudeva: Thank you so much for that, Vivan. I really appreciate the real-world examples and talking about instances of bilateral or broader collaboration.
So, we’ve talked a little bit about enablers. Now, I want to talk a little bit about constraints, or other potential factors to consider. At the beginning of the discussion, we talked about sovereignty, transparency, and explainability as factors in this collaboration. And Jibu, this is for you, but others as well, if you want to jump in: how do we think about standards and interoperability in this collaboration? How can Washington and Delhi actually work together on common AI standards for both safety and governance? And again, what are the opportunities and challenges there?
Jibu Elias: Yeah. Before I jump into the answer, I really liked the answer Vivan gave to the last question. I mean, we call it chaos, but we would call it unstructured. There’s a saying within India’s AI leadership, and it came from the honorable prime minister himself: if we can build a model in India, amid the scarcity of resources and all this unstructured environment, it works everywhere else. So, that’s the larger picture of what India aspires to be: “What if we can be the garage?” Right? Using the car analogy here as well.
Now, speaking about standards and interoperability, for me, they are the very backbone of collaboration, right? Without them, I think everything stays at the pilot and maybe MoU stage. And we have been seeing that happen a lot in the Global South, right? Hundreds of pilots stuck in a process where they’re not able to scale up.
But now, coming to India and the U.S., I don’t think we need identical AI laws or identical regulations to make this process work. I mean, India has even come out and clearly said, “We don’t want to follow the U.S. or anybody else when it comes to formulating our own AI regulation. We want to create a regulatory environment that supports innovation.” But what we need is interoperability of expectations, right?
And I can say this in three points, actually. The first is model trust and evaluation. If an Indian company, let’s say Sarvam or whoever, builds on a U.S.-origin model, there needs to be a shared understanding of how safety was evaluated, what the limitations are, and how performance is monitored, right? That doesn’t require a single regulator, but it does require compatible evaluation and documentation norms.
Now, the second point is operational governance inside the organizations we’re talking about. That’s where questions of model versioning, incident reporting, and, importantly, change management come in. I know these things don’t sound exciting, they sound dull, but they’re what make AI dependable and defensible, right? And when these practices align, cross-border deployment becomes much easier.
And finally, the regulatory part. Let me go into that as well. We don’t want the same regulation, but we need a larger sort of regulatory mapping. Not the same rules, but mutual legibility of regulation, right? An Indian deployment should be explainable to a U.S. partner, auditor, or investor without going through all the translation challenges. Practically, this means focusing less on declarations and more on joint rails: shared safety test suites for open-weight models, baseline disclosure templates for enterprise use, and, importantly, institutional cooperation between safety institutes, standards bodies, and regulators.
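To make the “baseline disclosure template” idea concrete, here is a minimal sketch in Python. Every field name below is hypothetical, illustrating the kind of information such a template might require; none of it is drawn from any actual standard or regulation:

```python
# Hypothetical sketch of a "baseline disclosure template" for a deployment
# built on an open-weight model. Field names are illustrative only, not
# taken from any real NIST, Indian, or industry standard.

REQUIRED_FIELDS = {
    "model_name",         # which open-weight model was adapted
    "base_model_origin",  # e.g. "US-origin open-weight"
    "safety_evals_run",   # which evaluation suites were applied
    "known_limitations",  # documented failure modes
    "version",            # supports model versioning / change management
    "incident_contact",   # supports incident reporting
}

def validate_disclosure(disclosure: dict) -> list[str]:
    """Return the sorted list of required fields missing from a disclosure."""
    return sorted(REQUIRED_FIELDS - disclosure.keys())

# A partially filled disclosure for a fictional fine-tuned model.
example = {
    "model_name": "example-8b-hindi-tuned",
    "base_model_origin": "US-origin open-weight",
    "safety_evals_run": ["toxicity-suite-v1"],
    "version": "1.2.0",
}

missing = validate_disclosure(example)
print(missing)  # → ['incident_contact', 'known_limitations']
```

The point of such a shared template is not the specific fields but mutual legibility: a partner, auditor, or investor on either side can check the same checklist mechanically.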
So, interoperability is how trust gets carried across borders, I would say. I think Vivan has something to add.
Vivan Amin: Yeah, thanks, Jibu. Completely agree. Interoperability of expectations is exactly the right framing, because we do get stuck, models get stuck, people get stuck when there are no interoperability standards. And it’s less about identical rules, more about shared trust anchors: common evaluation methods, transparent documentation, aligned safety baselines that make these collaborations seamless without constraining innovation.
One way we’ve been exploring to do this is to create ecosystems around rules and regulatory bodies, like, let’s say in the U.S., NIST, the Responsible AI Institute, et cetera. I do feel those ecosystems need to expand outward. Right now, they’re very sovereign, for many reasons. But I do think a strong partnership among two, three, multiple countries would be a huge, exponential movement, I’ll put it that way, if you can expand these ecosystems rather than keeping them localized, connecting the folks building the models and the chassis to the folks driving and laying out the road as well. So, yeah, I just wanted to comment on that. It’s a very powerful analysis of interoperability and transparent, responsible systems.
Giulia Neaher: Great. Thank you both for that. I’d like to turn now back to Anand. The purpose for which we’re all gathered here today is to talk about the summit, right? So I’d like to bring the conversation back towards that. And Anand, I’d love to hear how you’re tracking the summit in Delhi next month, and what you’re hoping to see from policymakers and stakeholders who are in and around this open-source agenda and the question of accessibility.
Anand Raghuraman: Absolutely. Thanks, Giulia. The summit is going to be a landmark event in the history of AI policy development. I mean, this conversation started largely in, I’d say, the Western-centric world. And this is the first time we’re seeing India, as a leader of the Global South, driving the AI conversation and really convening with a degree of legitimacy and a different perspective than what we’ve seen before.
What I would love to see from the summit, as we transition to this focus on impact, building on the previous framings of safety and action, is really a lot of the themes that have come up in this discussion. Interoperable governance, right? Do we have common approaches on the basics: data governance, accountability, and risk, that allow AI systems to work safely across borders?
The operational trust piece, I think, is also key, right? How do we start to move beyond broad principles towards concrete guardrails on certification and auditability, and really start those technical exchanges between the safety institutes in different countries and continue them on a rolling basis?
And the third piece, I’d say, is inclusive participation, the theme I hit on earlier. India, I think, has been such a champion at democratizing access to digital systems and democratizing the impact of those systems. So, what does that look like?
And I want to pick up on a theme that Vivan laid out earlier, which is that we’ve been dealing largely with a world where AI is gen AI: it’s chatbots, it’s conversational. But the big shift, to me, is going to be the transition to and advent of agentic AI, agentic commerce, right?
Because that’s really where AI stops being conversational and starts driving action in the real world, where there are real dollars, cents, and rupees attached to the transaction. And that introduces a whole other layer of complexity as well as opportunity, right?
So, the opportunity I see is tremendous efficiencies in business processes, in the volume of transactions that could take place, in the price optimization that can happen. I mean, businesses should be so excited, and consumers as well, about the experiences that the world of agentic commerce will help unlock. So, it would be great if the AI summit started thinking about how we prepare for this world and build the trust infrastructure needed to facilitate agentic transactions.
The flip side is also there, as we’re seeing agentic cybercrime start to take place. And here, I would point to last November. I think one of the watershed moments was when Anthropic pointed out that a cyber group had modified one of its models and then used it to execute an agentic cyber attack that targeted businesses and governments. For me, that was this incredible moment, because you realize now that the costs of mounting these cyber attacks, which were already fairly limited, now scale on the basis of compute, while defense continues to scale linearly on the basis of how many bodies you have to manage your systems.
Now, where does that create risk and vulnerability? The large companies, every one of the companies represented on this panel, we know how to handle this, and we are constantly investing billions of dollars into security systems. But think about the MSME, the small shopkeeper, the small business that does not have that cyber protection.
And so, we can imagine a cyber tsunami taking place. And the MSMEs, which are ultimately the most vulnerable parts of the economy and ultimately the most connected to large enterprises through supply relationships and the like, are the most exposed. So, I would love to see the AI Summit drive a conversation that centers MSMEs and cyber defense, and starts to think about how we work towards what we could call an MSME cyber shield, where every player in the ecosystem, whether governments or the private sector, can come together and contribute to that baseline level of security. That would be a win. I don’t know if we’ll necessarily get there with this summit, but I’d love to start the conversation moving in that direction.
Giulia Neaher: Awesome. Thank you so much, Anand. I think we’re ready now to start opening up for the audience Q&A. So, I’ll kick us off with one of the first questions from the audience. Then we’ll do a couple of more of audience questions before circling back to the panelists for just some final closing words to end out today’s discussion.
So, let me kick off with the first question which I saw on here, which is from Gabrielle Delsol, who asked, “The Indian government is currently exploring a proposal to establish a compulsory license for all content on which AI models were trained, assessed retroactively, and on the basis of global revenues, distributed through a collective management organization. Do you see the Indian government holding up this proposal as a model through the summit? And to what extent does this proposed approach align with the summit’s core themes?” I will hand it over to you, any of you, if you’d like to jump in, go ahead. I see Jibu, yeah.
Jibu Elias: I will take this one. Yeah, this is an important question. And I would like to separate what’s being explored from what is being positioned as a model, right? So, first, on the facts. Yes, there is an ongoing policy discussion in India, through DPIIT, around a centralized or compulsory licensing mechanism for AI training on copyrighted content, sometimes described as a blanket or statutory license, potentially assessed on, as the question said, global revenues and administered through collective bodies.
So, this is at the consultation stage right now, not an enacted law, and it’s still being actively debated by creators, publishers, tech companies, and policymakers. Now, will India hold this up as a model at the summit? My personal reading is that it’s not a finished template as of now, but an example of a larger governance question India is willing to confront directly. India is unlikely to present this as an answer that others could copy. Instead, it may frame it as one possible approach to a problem that every country is struggling with right now: how to balance large-scale AI training with creator rights, economic fairness, and innovation. So, in that sense, it does align with the summit’s core themes, but more as a conversation starter than as a settled solution.
Where it aligns strongly with the summit agenda, since the question was about the summit, is, first, on accountability and value sharing. The proposal recognizes that AI systems are built on cultural and informational labor, and that there needs to be a mechanism, at least in principle, for creators to participate in the value chain.
Second, governance over litigation. I mean, that’s another long conversation, right? Rather than relying entirely on courts and other enforcement mechanisms, because Indian courts have been clogged with decades-long court battles, the idea here is to create an alternative governance mechanism for these kinds of disputes.
And third is global relevance, right? Because AI training is cross-border, any licensing or remuneration framework inevitably raises international questions. So India understands, I think, that whatever it does here will be read globally, which is why it’s proceeding cautiously.
So, yeah, if India raises this at the summit, I don’t expect India to say, “This is a model everyone should follow.” It will rather be, “This is a problem space, and here is one of the approaches we are testing, like many other things we’ve been doing.” And frankly, that is consistent with India’s broader AI posture: principles first, experimentation second, then lock-in later, if at all.
So, right now, India has not implemented any licensing regime and is not exporting any licensing regime yet. It is doing stakeholder consultation, showing a willingness to grapple openly with one of AI’s hardest governance questions.
Akriti Vasudeva: Thanks so much, Jibu. We’ll go to our next question. We have a couple coming in and encourage others as well to please type in your queries into the Q&A box. We have a few more minutes of discussion.
I want to take the question from Madhumita Dutta because she asks about the geopolitics of AI. She asks, “How does the present geostrategic global situation affect U.S.-India collaboration on AI? Will the U.S. be able to compartmentalize AI collaboration?” I would personally also add to that: what does the turmoil in the bilateral U.S.-India context mean, especially into the new year, where we’re still waiting on the trade deal and we still have not heard about the Quad summit happening? How do some of those dynamics impact the pace and momentum of AI collaboration between the U.S. and India? Anand, do you want to take that?
Anand Raghuraman: Yeah, happy to take that. I think the question almost leads to an answer. It’s less a matter of whether the U.S. will be able to compartmentalize this. We don’t know, right? But I would say the imperative is that it does so at this moment. There’s so much turmoil going on globally right now that it’s hard to predict; you read the news every day and another part of the world seems to be unstable. We’re starting to see sovereignty considerations in places like Europe and across the world. And so, the core thesis here is whether these two countries can continue to see value in the other partner: scaling systems, bringing expertise, and continued cooperation.
Now, I’d say in earlier years, we have actually taken that compartmentalized approach. And sometimes I, among others, have been arguing that we should take a wider angle, because obviously, as we know, the trade issues are somewhat inseparable from the scaling of AI systems and commercial technologies. Unless you unpack the issues around data governance, cross-border data flows, and the revenue, trade, and investment opportunities for companies, that will have an impact on where large companies and small companies invest. You’re investing on the basis of what the return will be over a time horizon.
So, these issues are ultimately inextricable from the trade conversation, but at the current state of play, they should not distract from the technical work we’ve been talking about. There are ways we can continue those exchanges going forward, whether it’s the safety institute in India connecting with its counterparts in the U.S., or, at the level of strategy, common views on how to engage on Pax Silica, for instance, going beyond the bilateral corridor to building a trusted ecosystem as well. So, from my view, it is absolutely imperative that the two sides find a way to compartmentalize and continue to move the ball forward, even as the trade conversation and frictions continue. That’s probably the imperative for the next couple of years.
Giulia Neaher: Thank you, Anand. I’d like to turn to a question in the Q&A box that I think does not have an easy answer, about figuring out a workable framework for ethical issues and challenges. Obviously, in the discussion today, we’ve touched a lot on how different contexts, from country to country and situation to situation, can really shape what standards we want to adhere to and what ethical standards we want to achieve. But I think it would be good to hear a little more from you all, in your own words, about how ethical issues in particular are being approached in the context of the U.S.-India relationship. And I know there’s been a lot of discourse in the U.S. in particular about the language we use to discuss AI safety and AI ethics. So, I’d be interested to hear your perspectives there.
Vivan Amin: Thanks, Giulia. Yeah, happy to take a stab at this one. That’s a really important question. Ethics is not one-size-fits-all, to Jibu’s earlier point, but it can be interoperable. So, the key is: how do we move from abstract principles to enforceable practices, whether it’s shared audit trails, transparent model documentation, inclusive data governance, transparency, transferability, and explainability for a model to go and act?
When ethics becomes measurable and verifiable, it stops being a philosophical debate and starts being operational trust. And it has to continuously evolve. It can’t be, “Hey, a rule or a law has been passed and thou shall it be.” The world is ever-changing, and we are going to see these different nuances. As Anand was saying, cybersecurity is going to be a big play. So how do we continuously design systems as new technologies and infrastructures come along?
What happens to encryption when quantum computing arrives and machine learning models are built on that technology?
So, these frameworks have to be iterative, deployable, and adjusted to local norms. It can’t just be one-size-fits-all.
Jibu Elias: Yeah. I just want to add that maybe a practical way to address this is to anchor ethics in use cases, not just in ideas, because, as we said, many of those terminologies vary. For example, take privacy. How privacy is pursued in societies like Europe, the U.S., India, and China varies, not just in how it’s defined, but in how it’s exercised. A couple of years back, I was in a small Dutch town, and I could see people open up all the curtains, and I could see what was happening in the living room. We can’t imagine doing that here in India. We have curtains everywhere; we close the door. And I don’t know if any of you are familiar, but we will even have an extra door in front of that door, a grille kind of door.
And then there’s the question of individual privacy. That might be paramount in other countries, but growing up, I never knew the concept of privacy. You are living with your extended family; you’ll be sharing your room with a cousin or with your brother. Everybody knew everything that happened in your life: who your friends were, who you were talking to, what happened at school, and so on. So, that’s something we need to fundamentally understand.
And this also looks different depending on where AI is used. The ethical questions or risks of AI being used in something like credit, healthcare, education, or, importantly, policing look totally different. Like Vivan said, it’s not one-size-fits-all. A single global moral rule won’t work, so we need context-specific guardrails.
And interestingly, the whole framing of AI ethics changes accordingly, right? There is an interesting book by a Thai scholar, I don’t want to butcher his name, a huge 300-400-page work on AI ethics from a Buddhist perspective. And look at what the Japanese think about AI: I remember a few years back reading about one of their projects, and it said they want to build a society where humans, robots, and AI coexist together. The whole idea changes as we go from West to East. So, yeah, that’s one thing.
So, that’s what I’m saying: separate values from mechanism, right? Most countries can agree on high-level values. We can agree on fairness, safety, human oversight, and accountability. But the larger question is how we implement them. So we can work on a shared framework, with shared outcomes, while staying mindful of the different institutional paths to get there.
Akriti Vasudeva: Anand, did you want to add anything?
Anand Raghuraman: I’m just reflecting on the discussion, and it’s one of those pieces where, let’s take the consumer privacy question, and in general data governance, which factors in here. The ethical considerations on this will vary. In the local context, India will have its own view; the Europeans obviously have a very distinctive view; and the U.S. has its own. So, as we start to think about not just geopolitical blocs but, I would say, data custodianship philosophies, it’s not enough to just have these bilateral conversations. If you have a bilateral U.S.-India framework that solves things between the U.S. and India, you may leave the Europeans out of the equation. And then we have EU-India political alignment, with a possible FTA actually moving forward there, approaching this question from a different lens that potentially leaves the U.S. out of the equation, right?
So, for me, it goes back to what Jibu was talking about earlier on interoperability of expectations and interoperability, I would say, of ethics. You may not have locally the same precise view on this, but how do you agree to a certain set of baseline standards and a framework that says, “Okay, we can move data across borders. We can govern law enforcement access to data in ways that enable the type of outcomes we want to see.
Greater investment across borders, greater adoption of these technologies, whether on open source or kind of at the application layer.” That is what we need to work towards. And for me, at least, it’s more of a triangle as opposed to bilateral linkages at the moment.
Akriti Vasudeva: Thank you so much. And I really appreciate you all riffing off of each other and taking forward each other's thoughts. And in doing so, you have actually answered a question that was in the Q&A box about privacy concerns. So, maybe we will take one more question and then we'll go to each of you for some closing remarks. If I understand this question correctly, it's about whether collaboration in AI is zero-sum or whether it's okay for each country to have multiple partners. The context here from IronEdge is, "If India has other partners that it's reaching out to on AI, like the EU or Canada or UAE, will this impact U.S.-India collaboration or will it be a force multiplier?" Who would like to…
Jibu Elias: I will take that. Because today, India signed a Free Trade Agreement with the EU, which EU officials are calling the mother of all deals. So, it has larger implications for AI as well. But, just reading the question, I wouldn't characterize India's relationship with the U.S. as cooling in any way that undermines cooperation, especially in AI and other emerging technologies.
What we are actually seeing is, I would say, strategic diversification, not decoupling from the U.S. India's outreach to the EU, UAE, Israel, and other partners in the Global South reflects a deliberate choice to avoid overdependence on any single technology or governance model. That's rational for a country of India's scale and responsibilities, right? So, from a U.S. perspective, this doesn't have to be a zero-sum issue. India has multiple partners, and India's multi-partner strategy isn't hedging against the U.S. It's building a stronger platform for collaboration with everyone, including the U.S.
Anand Raghuraman: I would add, Akriti, from my perspective: agreed, it's not a zero-sum game here. I do think, though, that there are some competitive dynamics at play that we should be clear-eyed and realistic about, right? Even as we continue to deepen U.S.-India AI collaboration and frame it as an anchor point of the relationship, we're also seeing other bilateral corridors take this up, right? What we've seen between the U.S. and the Gulf, for instance, has been, from the perspective of D.C., quite fruitful. I mean, the amount of investment that's come forward, and the natural synergies that helps unlock, is really important. Similarly, even between the U.S. and EU, despite tensions, we continue to see that kind of technical exchange continuing.
And I bring this up only to say that for this administration, for the U.S. at this moment, there will be benchmarks for what counts as significant diplomatic cooperation. What does it mean to be doing significant AI cooperation at this time, and how do we measure success? Is the KPI, for instance, the number of dollars unlocked and invested in the U.S. or abroad in the Gulf? Is it the number of startups that are spun off? These become real questions. And from a corporate perspective, when you're thinking about which of these corridors you should be prioritizing for the long term, that's a relevant trade-off to weigh.
So, as we think about the long-term U.S.-India relationship, we really have to have a clear sense of, again, what is the end-state vision that we're both driving towards? Five to ten years down the road, what would success look like in terms of actual two-way investment? And I emphasize two-way because it's got to be reciprocal with both governments, frankly. And then, from the company standpoint, how will both sides, Indian companies investing in the U.S. as well as U.S. companies investing in India, know that the bets they're making now will pay off in 10 years and that the expected return will come forward?
So, I offer those only to say that these are live considerations right now as we're thinking about deploying capital. And I say "we" in the proverbial sense, meaning the broader industry, but it's deeply impacted by the different geopolitical considerations that are coming up.
Vivan Amin: I'll just round it out. I don't think it's going to be absolutely zero-sum. With the AI and technological advances, especially between the U.S. and India, we would love to see them inherently be a multiplicative force rather than zero-sum. The more we share in terms of knowledge, data standards, frameworks, safety practices, and environments, the more resilient the ecosystems become for innovation. So, I think that technology as a centerpiece around which these two global powers can come together and innovate on a common platform can certainly be a multiplicative force, despite the other nuances around the trade deals and other blockers that are happening today.
Giulia Neaher: Great. Thank you all so much for your responses to the questions. I think we’re going to conclude the Q&A now, and I’ll turn to each of you and ask if you have any kind of closing thoughts that you’d like to share with the audience. If there’s one thing that people take away from what you’ve said today, what would you like that to be, especially in the lead-up to the summit? And maybe we’ll just go… we’ll do it alphabetically again. So, we’ll do Vivan, Jibu, and then Anand to close us out.
Vivan Amin: Yeah. Thanks, Giulia. If I could leave the audience with one thought, it would be: listen, the technology has accelerated pretty rapidly in the past four years, and we're living in an era where it's accelerating even faster. We're moving from the digital world and entering the physical world, where the rubber meets the road.
So, for this U.S.-India partnership and the summit that's coming up, I think one of the biggest opportunities is how we collaborate as two countries, whether it's the engine analogy or the aircraft analogy of building the engine with massive models. India brings the scale, the complexity, and the ultimate testing playground and deployment ground as well.
So, my hope is, under the new TRUST framework, we stop looking at our differences in regulation and start focusing on our shared innovation and engineering goals. Because if we can build an AI system that's safe, efficient, and robust enough to work in India, we haven't just solved a local problem; we have built a system that can work and scale globally as well. So, I would love for the summit to address building the physical infrastructure for the future together. These two countries can be the anchor points to scale a model.
Jibu Elias: Yeah. I will pick up from what Vivan just mentioned, right? The real challenge today is not AI itself. It's not mostly about speed and scale, or even capability. It's all about trust. As Vivan mentioned, the acceleration has been phenomenal. Technology is moving faster than governance, institutions, and shared norms can catch up, right? That gap is where most of the risk sits, but it's also where the largest opportunity lies.
So, for India and for partners like the U.S. and others, the question is no longer whether we can build advanced AI. Of course, we can. The question is whether we can build AI systems that are trusted across borders, across sectors, and across societies, despite the ethical challenges and other things. That means moving beyond pilots, press releases, and MoUs to the hard work: things like standards, interoperability, human oversight, accountability. These are not the glamorous parts, but they are what turns AI from a short-term advantage into long-term infrastructure.
And so, collaboration matters not because it makes AI faster, but because it makes AI safer, more resilient, and more legitimate. In the fragmented world we live in, shared trust is one of the most valuable things we can still scale together.
Anand Raghuraman: And from my end, Giulia, again building off of Vivan and Jibu, one takeaway for me is that the KPI is: how does AI deployment, and everything we've been discussing today, impact and ultimately uplift the lives of micro and small enterprises? Full stop. That, I think, will be the real proof point for whether the development of this technology is having an uplifting impact on employment, on the rise of the Global South, on women entrepreneurs all across India and, frankly, the world. On that, I think, the private sector and governments should have a single-minded focus, to say, "Okay, it's all well and good to talk about the models." What we're seeing at the level of enterprise collaboration will continue, obviously, but there is a need for real brainpower to be put on how we democratize not just access to the solutions, but the impact, so that we get the inclusive growth that we're moving towards. So, inclusive growth and MSMEs, that's my key message for this group.
Giulia Neaher: Awesome. Thank you all for joining us today. And thanks to Sunayna as well for her remarks at the beginning. It's been a real pleasure to have all of you on the panel and to hear your insights in the lead-up to the summit. I hope this discussion proves insightful for those attending the summit and that we can bring these findings forward. I'm hopeful it will encourage stakeholders to continue the conversation and explore ways to make open-source and open-weight AI work for all of us. So, thank you, everyone, for your time and attention, and we hope to see you at future Stimson events. Thanks, everybody. Have a good day.