The Controversy: When a Trial Court Cited Cases That Never Existed
On 2 March 2026, a Bench of the Supreme Court comprising Justice Pamidighantam Sri Narasimha and Justice Alok Aradhe delivered a stark warning that has sent ripples through India’s legal fraternity. The Court criticised a trial court for relying on judgments that were later found to have been entirely fabricated by artificial intelligence tools — describing the conduct as potential misconduct that strikes at the “integrity of the adjudicatory process.”
The case has brought into sharp focus a growing and deeply troubling trend: the use of generative AI tools like ChatGPT, Gemini, and other large language models for legal research — without verifying whether the case law they produce actually exists.
What Happened?
While the Supreme Court’s order did not set out the full facts, its key findings were unambiguous:
- A trial court had issued an order that relied on judgments later discovered to be completely fabricated by an AI tool.
- The cited case names, citations, and even the legal propositions attributed to those cases were non-existent — hallucinations generated by an AI model.
- The Supreme Court held that reliance on such fabricated authorities goes beyond a “mere legal error” and may constitute judicial misconduct.
- The Court emphasised that judicial decisions must be grounded in authentic and verifiable legal precedents.
Why AI “Hallucinates” Case Law
To understand the gravity of this problem, one must understand how generative AI works. Large language models (LLMs) are trained to predict the most statistically probable next word in a sequence. They do not “know” law — they pattern-match. When asked to cite a judgment on a particular point of law, an AI model may:
- Invent a plausible-sounding case name (e.g., “Ramesh Kumar v. State of Maharashtra (2019) 4 SCC 312”) that has never been decided.
- Fabricate entire holdings, ratios, and obiter dicta that sound legally convincing but are entirely fictional.
- Combine real elements — actual court names, actual judges, real legal principles — into a citation that looks authentic but is wholly manufactured.
This phenomenon, known as “hallucination” in AI terminology, is particularly dangerous in the legal field because the output is sophisticated enough to deceive even experienced practitioners who do not cross-verify every citation.
The Ethical Dimension: Bar Council Rules and Advocates’ Duties
The Supreme Court’s observations carry significant implications under the professional ethics framework governing Indian advocates:
Rule 4, Chapter II, Part VI of the Bar Council of India Rules requires every advocate to act with utmost honesty and integrity before the court. Citing non-existent authorities — whether knowingly or through negligent reliance on unverified AI output — violates this duty.
Section 35 of the Advocates Act, 1961 empowers State Bar Councils to take disciplinary action against advocates guilty of professional misconduct. If an advocate submits AI-generated fake citations without verification, it could invite proceedings for misleading the court.
For judicial officers, the implications are equally serious. The Bangalore Principles of Judicial Conduct and the Restatement of Values of Judicial Life adopted by the Supreme Court require judges to exercise due diligence in legal reasoning. Reliance on unverified AI-generated case law undermines both competence and integrity.
The Global Context: India Is Not Alone
This is not the first time a court has confronted AI-fabricated citations. In June 2023, a New York federal court sanctioned attorney Steven Schwartz for submitting a brief containing six entirely fictitious case citations generated by ChatGPT. The judge described the conduct as an “unprecedented circumstance.” Courts in Canada, the United Kingdom, and Australia have since issued practice directions on AI use in legal submissions.
The Indian Supreme Court’s intervention is significant because it goes further, characterising the conduct not merely as professional negligence but as potential misconduct that compromises the adjudicatory process itself.
Practical Guidelines: How Advocates Should Use AI Responsibly
AI tools can be powerful aids for legal research, drafting, and analysis. But they must be used as assistants, not authorities. Here are our recommendations:
- Never cite an AI-generated case without verification. Cross-check every citation against official databases — SCC Online, Manupatra, Indian Kanoon, or the Supreme Court and High Court websites.
- Use AI for direction, not destination. AI can help identify relevant legal principles, suggest search terms, or outline arguments — but the final citation must come from a verified primary source.
- Maintain a verification log. Document how each cited authority was located and verified. This protects you in case of any challenge.
- Disclose AI usage where required. As courts begin framing guidelines, proactive disclosure of AI assistance in legal research may become mandatory. Getting ahead of this trend demonstrates professional integrity.
- Use specialised legal AI tools. General-purpose chatbots like ChatGPT are not designed for legal citation. Prefer tools specifically built for Indian legal research that source from verified databases.
- Train your team. Junior associates and paralegals who are most likely to use AI tools must be trained on verification protocols.
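For teams that maintain a verification log, part of the process can be automated: extracting every candidate citation from a draft so that none is missed before manual checking. The following is a minimal Python sketch under stated assumptions (it recognises only SCC-style reporter citations such as "(2019) 4 SCC 312"; real drafts also contain AIR, SCR, and neutral citations, which would need further patterns, and the actual verification against SCC Online, Manupatra, or court websites must still be done by hand, as those services are not assumed here to expose a lookup API):

```python
import re

# Illustrative pattern for SCC-style citations, e.g. "(2019) 4 SCC 312".
# This is an assumption for the sketch; production use would need
# additional patterns for AIR, SCR, neutral citations, and so on.
CITATION_RE = re.compile(r"\(\d{4}\)\s+\d+\s+SCC\s+\d+")

def extract_citations(text: str) -> list[str]:
    """Pull candidate citations out of a draft for manual verification."""
    return CITATION_RE.findall(text)

def verification_log(text: str) -> list[str]:
    """One log row per citation; 'verified' stays NO until a human
    confirms the authority exists in an official database."""
    return [f"{c} | checked against: ___ | verified: NO"
            for c in extract_citations(text)]

draft = (
    "As purportedly held in (2019) 4 SCC 312, and reaffirmed "
    "in (2021) 7 SCC 45, the principle is said to be settled."
)

for row in verification_log(draft):
    print(row)
```

The point of the sketch is the workflow, not the regex: every citation becomes an explicit, unfinished checklist entry, so a brief cannot be filed with a citation that no one has looked up.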
What This Means Going Forward
The Supreme Court’s observations, though made in a specific case, are likely to catalyse several developments:
- The Bar Council of India may issue formal guidelines on AI use in legal practice — similar to guidance already issued by the American Bar Association and the Solicitors Regulation Authority in England and Wales.
- High Courts may begin requiring disclosure of AI tools used in drafting submissions.
- The judiciary’s own AI integration initiatives — such as SUPACE (Supreme Court Portal for Assistance in Court Efficiency) — will likely incorporate safeguards against hallucinated citations.
- Law firms and legal departments will need to establish internal AI use policies with mandatory verification protocols.
Conclusion
Artificial intelligence is transforming legal practice worldwide, and India is no exception. But the Supreme Court’s 2 March 2026 intervention is a critical reminder: technology is a tool, not a substitute for professional judgment. The duty to verify, the duty to be accurate, and the duty to maintain the integrity of judicial proceedings remain non-negotiable obligations — obligations that no algorithm can discharge on our behalf.
At Juris Altus LLP, we embrace legal technology while maintaining rigorous verification standards in every brief, petition, and opinion we produce. If you need assistance with litigation strategy, legal research, or understanding how AI impacts your practice, contact our team for a consultation.
This article is for informational purposes only and does not constitute legal advice. For specific guidance on AI use in legal practice, please consult a qualified advocate.
Author: Juris Altus LLP — Litigation & Legal Technology Practice
Published: March 2026