AI Usage in Court of Law: Lessons from the Delhi Builder Case
Cyber Drome · Oct 5

Summary: An advocate for a Delhi builder cited a fabricated case generated by ChatGPT as supporting authority in court. The false citation was exposed, highlighting the risks of uncritically relying on generative AI for legal research. Below is a practical guide for Indian advocates on where AI helps, where it harms, and how to use it safely for electronic evidence and submissions.
The incident, in brief
An advocate relied on a case citation and record generated by ChatGPT; the cited judgment did not exist. The fabrication was detected, undermining the advocate’s credibility and raising ethical and evidentiary concerns about AI-generated legal content.
Why this matters for advocates
Credibility risk: Courts treat fabricated authority seriously; relying on AI without verification can lead to sanctions.
Evidentiary risk: AI can invent judgments, dates, or citations that aren’t in any law reporter or database.
Ethical risk: The duty of competence and candour to the court requires verifying every authority cited.
Where AI is useful (recommended)
Summarising judgments — Use AI to produce quick summaries of real judgments you have already located and verified.
Drafting templates and boilerplate — For pleadings, notices, or checklists that you will edit and verify.
Research direction — Use AI to suggest search terms, likely relevant statutes, or case names — then confirm with authoritative databases (SCC, Manupatra, Indian Kanoon, Judgments.gov.in).
Workflow automation — Transcription, note-taking, and extraction of key facts from verified documents.
Client communication drafts — Prepare plain-language explanations, subject to lawyer review.
Where NOT to use AI (strict no)
Generating or relying on case citations, verbatim extracts, or legal authorities without independent verification.
Producing evidence, affidavits, or court records and presenting them as original sources.
Submitting AI-generated content to court as factual or authoritative without human corroboration.
Using AI to fabricate timelines, metadata, or chain-of-custody records.
Practical checklist for safe AI use (for advocates)
Verify every citation: Cross-check case names, citations, and quotes in at least one authoritative database.
Confirm primary sources: Always obtain and attach primary judgments or legislation PDFs before filing.
Label AI assistance: Internally note sections drafted or researched with AI; where court rules require disclosure, do not misrepresent AI output as human-only work.
Metadata diligence: When dealing with digital evidence, preserve original files and forensic reports; do not rely on AI reconstructions of deleted or altered data.
Maintain chain of custody: Document each step with timestamps and verified exports from devices or cloud services (a hash-logging sketch follows this checklist).
Use disclaimers in drafts: Treat AI outputs as preliminary drafts — “for internal use; verify before filing.”
Continuing legal education: Train teams on AI limits and verification protocols.
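For offices that want to make the metadata and chain-of-custody items above routine, a short script can hash every evidence file and append a timestamped record to an internal log. The sketch below is a minimal illustration only, assuming Python; the custody_log.jsonl file name and the log_evidence helper are hypothetical conventions, not a prescribed forensic standard.

```python
# chain_of_custody.py — minimal illustrative sketch (not a forensic standard).
# The file name "custody_log.jsonl" and the record fields are assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("custody_log.jsonl")  # append-only log, one JSON record per line

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_evidence(path: str, action: str, handler: str) -> dict:
    """Record one custody event: who handled which file, when, and its hash."""
    p = Path(path)
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "file": str(p),
        "sha256": sha256_of(p),
        "action": action,    # e.g. "received", "copied", "filed"
        "handler": handler,  # person responsible for this step
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: log receipt of a judgment PDF before it is attached to a filing.
# log_evidence("evidence/judgment_2023.pdf", "received", "A. Advocate")
```

Because any alteration to a file changes its SHA-256 hash, such a log lets you later demonstrate that the document filed in court is bit-for-bit identical to the one originally received.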
How to handle an AI-generated error if discovered
Immediately verify the claim.
Retract or correct filings as soon as possible and notify the court if necessary.
Preserve all AI prompts/outputs and communications for transparency (see the logging sketch after this list).
Review internal processes and retrain staff to prevent recurrence.
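One low-effort way to implement the third step is an append-only audit file that captures each prompt and the tool's raw output before any human editing. Again a minimal sketch in Python; the record_interaction helper and its fields are illustrative assumptions, not a mandated format.

```python
# ai_audit_log.py — minimal illustrative sketch; fields are assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_FILE = Path("ai_audit_log.jsonl")  # append-only, one JSON record per line

def record_interaction(tool: str, prompt: str, output: str, matter: str) -> None:
    """Append one AI interaction (prompt plus verbatim output) to the audit file."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "tool": tool,      # e.g. "ChatGPT"
        "matter": matter,  # internal file or matter reference
        "prompt": prompt,
        "output": output,  # stored verbatim, before any human editing
    }
    with AUDIT_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Example: preserve a research prompt so it can be produced if questioned.
# record_interaction("ChatGPT", "List SC cases on electronic evidence",
#                    "<raw model output>", "matter ref 42/2025")
```

Keeping the raw output separate from the edited work product makes it possible to show exactly what the tool produced and what the advocate changed.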



