TIACoN 2025: AI Content and Reality
- Tarunima Prabhakar

- Nov 21, 2025
- 3 min read
The Trusted Information Alliance (TIA) hosted its inaugural conference, TIACoN 2025, in New Delhi on 6 November 2025. Over 160 participants came together to learn about medical misinformation, the tactics of fraudsters in the digital age, how to tackle the harms of misleading AI-generated content, and what can be regulated, for better or for worse.
‘AI Content and Reality’, a panel discussion on the harms caused by misleading AI-generated content, was moderated by Tarunima Prabhakar, co-founder of Tattle Civic Technologies. The panelists examined existing safeguards and the systems that need to be put in place to protect the public from harm.
The panel featured eminent speakers:
- Nikhil Naren, Assistant Director, Cyril Shroff Centre for AI, Law and Regulation, and Assistant Professor, O.P. Jindal Global University
- Shakoor Rather, Co-founder, Science Matters
- Geetha Raju, Senior Policy Analyst working on deepfake detection
- Sachin Dhawan, The Dialogue
Here are the key takeaways from our panel discussion:
A significant challenge lies in the absence of tools capable of reliably distinguishing synthetic text from human-written text. This raises the question: does the act of labeling content diminish its persuasiveness?
There are inadequacies in current AI literacy frameworks, necessitating a shift from functional literacy to responsibility-oriented literacy. Key considerations include:
Demographic Inequity: Within the broader context of AI literacy, the role of, and support for, staff is often overlooked.
Proactive Measures: A transition from reactive responses to harmful synthetic media towards proactive strategies is crucial for preventing misinformation, as exemplified by the EU Digital Services Act.
Core Principles: Sustainability and climate change must be central tenets of any AI literacy framework.
Unequal Value Chain: An examination of who primarily benefits from the creation of AI value chains is warranted.
Synthetic media also has evident potential for beneficial uses, including:
Content Creation and Education: Enhancing the capabilities of content creators and enriching educational materials.
India’s Legal Landscape:
Existing Framework: The legal system primarily addresses users and platforms responsible for user-generated content.
AI's Role: The emergence of AI introduces new complexities for which the law currently lacks clear rules and guidelines.
Regulatory Focus: The primary regulatory focus remains on platforms and intermediaries.
Legal Literacy: There is a critical need for enhanced legal literacy regarding individual actions, proactive measures by platforms, and interventions by state-instituted legal committees to resolve these issues.
Terminology: The concept of Synthetically Generated Information (SGI), used in the recent amendment to India’s Intermediary Guidelines (the IT Rules), is relevant in this context.
Labeling:
Labeling serves two purposes:
Social: Informing users of the content’s source, or of its status as AI-generated.
Technical: Machine-readable labels, which may or may not also be human-readable. These mechanisms are susceptible to circumvention (a toy illustration follows below).
Issues with Labeling: Manual, human-applied labeling methods are inherently fragile; technical methods are more resilient, but still not foolproof.
Persistent Relevance: Despite these challenges, the concept of labeling continues to hold currency.
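To make the technical half of this distinction concrete, here is a minimal sketch, in Python, of how a machine-readable label might bind a claim of origin to a specific piece of content. Everything in it is a simplifying assumption: the toy HMAC scheme, the key, and the generator name stand in for real provenance standards such as C2PA content credentials, which use public-key signatures and far richer metadata. The sketch also shows the panel’s caveat in action: the label verifies mechanically at scale, but an adversary can circumvent it by editing the content or simply stripping the label.

```python
# Illustrative sketch only: a toy "machine-readable label" bound to content.
# Real provenance standards (e.g., C2PA) use public-key signatures and richer
# metadata; this HMAC toy just shows why such labels are machine-verifiable
# yet still removable by a determined adversary.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-labeling-authority"  # hypothetical key

def make_label(content: bytes, generator: str) -> dict:
    """Produce a label: content hash + claimed origin + signature."""
    record = {"sha256": hashlib.sha256(content).hexdigest(),
              "generator": generator}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, label: dict) -> bool:
    """A platform can check the label mechanically, without human judgment."""
    record = {"sha256": label["sha256"], "generator": label["generator"]}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, label["signature"])
            and hashlib.sha256(content).hexdigest() == label["sha256"])

article = b"This paragraph was produced by a text generator."
label = make_label(article, generator="some-model-v1")  # hypothetical name
print(verify_label(article, label))                 # True: label checks out
print(verify_label(article + b" edited", label))    # False: content changed
# Circumvention is trivial here: discard `label` and redistribute the bytes
# unlabeled, which is why labeling alone is not foolproof.
```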
Conclusion
The discussion underscored that the rise of synthetic media is exposing deep gaps in our technical, legal, and societal readiness. Existing AI literacy frameworks remain too functional and reactive, falling short of preparing citizens, institutions, and platforms for the responsibilities that accompany pervasive AI-generated content. At the same time, regulatory and legal systems, still structured around user-generated content, lack the tools and precedents needed to address harms emerging from AI-driven information flows. While labeling remains a widely discussed mitigation strategy, current techniques are easily circumvented or insufficient on their own. Yet, synthetic media also presents opportunities for creativity, education, and innovation when used responsibly. The session made it clear: meaningful progress will require strengthening literacy, rethinking legal guardrails, and building technical systems that can coexist with both the risks and benefits of AI-mediated information.
Way Forward
Invest in resilient technical solutions for identifying synthetic media
Future systems must move towards durable methods such as cryptographic labeling and robust watermarking, backed by platform due diligence and complemented, not replaced, by informed human judgment.
Develop legal and regulatory structures specifically for AI-generated content
Clear rules for redress, platform obligations, and state oversight mechanisms must evolve beyond the current framework focused on user-generated content (UGC) to address the realities of Synthetically Generated Information (SGI).
Shift AI literacy from functional skills to responsibility-oriented frameworks
This includes preparing all demographic groups, embedding sustainability principles, and building proactive strategies rather than reacting to harmful media only after it spreads.