Who Will Write the Rules When AI Rules Us?
Artificial Intelligence is no longer a novelty that needs an introduction. In this article, K.G. Sharma argues that India has already witnessed both faces of AI: its benevolent role in agriculture and healthcare, and its darker side in social media, where it has been used to spread misinformation with alarming ease. Sharma suggests that the time has come for India to take the lead in shaping ethical frameworks—rules that ensure AI becomes a tool for empowerment rather than a weapon of exploitation and control.
When Algorithms Rule: Can India Write the Ethics of AI?
K. G. Sharma
“Whoever controls the media, controls the mind.” That warning, often attributed to George Orwell, feels eerily prophetic in the age of artificial intelligence (AI). No longer confined to laboratories or science fiction, AI now shapes what we read, curates our social media feeds, and can even mimic the faces and voices of public figures with unsettling precision. The technology is at once liberating and coercive, democratic and authoritarian, empowering and exploitative. Whether it becomes a servant of freedom or an instrument of domination depends less on its code than on the rules, values, and institutions that govern its use.
The War Over Truth
AI’s most immediate battleground is not robotics or automation, but the war over truth. During India’s 2019 general election, social media platforms were flooded with misleading clips, doctored images, and mass-forwarded WhatsApp messages designed to polarise voters. Most were crude edits rather than full-fledged deepfakes, yet their sheer volume demonstrated how technology can turbocharge disinformation.
The United States witnessed a similar episode: a doctored video of House Speaker Nancy Pelosi, slowed to make her appear intoxicated, garnered millions of views before fact-checkers intervened. In Myanmar, Facebook’s algorithms amplified hate speech against the Rohingya—a failure the United Nations said contributed to horrific violence.
The consequences are corrosive. Authoritarian regimes deploy AI-driven troll farms to drown out dissent. Corporations use synthetic reviews and manipulated videos to burnish reputations. Political operatives exploit bot armies and doctored media to create echo chambers. Citizens are left not merely divided by ideology, but disoriented by the very nature of truth. Democracy is weakened not because people disagree, but because they can no longer agree on what is real.
People’s Tool
Yet it would be a mistake to dismiss AI only as a weapon of manipulation. Precisely because it is a general-purpose technology, it also holds immense promise for the common person. For farmers in Maharashtra, CropIn and Plantix help detect crop disease early and predict weather, saving livelihoods. For patients in Kerala, eSanjeevani, India’s national telemedicine service, connects rural households with doctors hundreds of kilometres away.
AI already shields bank customers from phishing and scams. Teachers in remote villages use adaptive tutoring platforms to personalise lessons. Visually impaired citizens rely on apps that read out text or interpret images, while journalists deploy deepfake-detection tools to verify suspicious media.
The same algorithms that generate propaganda can generate transparency reports. The same models that enable deepfakes can also expose them. Whether AI is a tool of surveillance or solidarity depends not on its technical potential but on the governance frameworks we build around it.
Why Global Rules Matter
Because digital platforms are borderless, no country can fight AI-enabled manipulation in isolation. A deepfake forged in one corner of the world can tarnish reputations or sway elections continents away. China exports facial recognition systems to governments in Africa and Latin America. Russia refines disinformation campaigns across Europe and Asia, increasingly aided by generative AI. American tech giants, meanwhile, deploy micro-targeting tools globally with little oversight.
Left unchecked, this risks creating a “marketplace of manipulation,” where truth itself becomes a commodity.
To counter this, global rules must:
• Guarantee transparency in high-stakes AI decisions,
• Protect privacy and uphold human rights, and
• Mandate watermarking of synthetic media.
Equity is equally crucial: low- and middle-income countries often lack the capacity to detect disinformation or defend against AI-driven fraud. Without inclusive norms, the digital divide will widen, leaving the most vulnerable populations exposed.
ITU and Good AI
The International Telecommunication Union (ITU) has emerged as a key platform for global AI governance. ITU convenes governments, researchers, industry and civil society to discuss safe, inclusive and human-centric AI applications—from healthcare to disaster response. Its technical standards and ethical dialogues matter because they set the baseline for interoperability and accountability across borders.
Parallel to this, campaigns such as the “Good AI” movement push for transparent, accountable and ethical AI. By insisting on fairness in algorithms, visibility of training data, and guardrails against misuse, they represent the voice of civil society and academia—a vital counterweight to state and corporate power. Together, ITU and Good AI show that shaping AI’s future is not just a task for governments or tech giants, but a collective responsibility requiring diverse stakeholders.
India at the Crossroads
This is where India’s role becomes decisive. As a founding member of the Global Partnership on AI (GPAI) and an active participant in UNESCO’s AI ethics framework, India already has a seat at the global table. Domestically, it has lived both sides of the AI revolution: manipulated media shaping political discourse, and AI-powered tools transforming agriculture, healthcare and education. That dual reality gives India credibility to argue for balance.
At home, India must strengthen data protection, regulate facial recognition, and ensure independent oversight of AI in welfare and policing. Abroad, it should champion strict limits on biometric surveillance, push for global watermarking standards, and lead initiatives to make deepfake-detection tools freely available to journalists and civil society.
Unlike Beijing’s surveillance-first model or Silicon Valley’s laissez-faire capitalism, India can argue for a middle path: one that aligns innovation with rights, growth with safeguards, and technology with humanity. And by engaging fully with platforms like ITU’s AI for Good and the Good AI campaign, India can amplify the call for responsible, inclusive governance.
Between Power and People
The stakes could not be higher. Left unchecked, AI risks creating a new digital feudalism where truth and privacy belong only to the powerful few. But governed wisely, it could become the most democratic tool humanity has ever built—empowering the farmer, patient, student and citizen as much as the government or corporation.
“The ultimate measure of a society is not how it treats its most powerful, but how it safeguards its most vulnerable.” That reminder, paraphrasing Martin Luther King Jr., now applies not just to states but to the global community.
India, with its scale, diversity and democratic ethos, has the chance—and perhaps the obligation—to lead in writing rules that make AI an instrument of empowerment rather than oppression. The question is whether we will seize the opportunity before the algorithms, and those who control them, seize us.
************

Krishan Gopal Sharma; kgsharma1@gmail.com; Freelance journalist, retired from Indian Information Services. Former senior editor with DD News, AIR News, and PIB. Consultant with UNICEF Nigeria. Covered BRICS, ASEAN, Metropolis summits and contributed to national and international media.
(Views are personal)