
AI Usage in Court of Law: Lessons from the Delhi Builder Case

  • Cyber Drome
  • Oct 5
  • 2 min read

Summary: An advocate for a Delhi builder cited a fabricated case generated by ChatGPT as supporting authority in court. The false citation was exposed, highlighting the risks of relying uncritically on generative AI for legal research. Below is a practical guide for Indian advocates on where AI helps, where it harms, and how to use it safely for electronic evidence and submissions.




The incident, in brief

An advocate relied on a case citation and record generated by ChatGPT; the cited judgment did not exist. The fabrication was detected, undermining the advocate’s credibility and raising ethical and evidentiary concerns about AI-generated legal content.



Why this matters for advocates

  • Credibility risk: Courts treat fabricated authority seriously; relying on AI without verification can lead to sanctions.

  • Evidentiary risk: AI can invent judgments, dates, or citations that aren’t in any law reporter or database.

  • Ethical risk: The duties of competence and candour to the court require verification of every authority cited.


Where AI is useful (recommended)

  1. Summarising judgments — Use AI to produce quick summaries of real, verified cases you have already checked.

  2. Drafting templates and boilerplate — For pleadings, notices, or checklists that you will edit and verify.

  3. Research direction — Use AI to suggest search terms, likely relevant statutes, or case names, then confirm each one in authoritative databases (SCC, Manupatra, Indian Kanoon, Judgments.gov.in); a small extraction sketch follows this list.

  4. Workflow automation — Transcription, note-taking, and extraction of key facts from verified documents.

  5. Client communication drafts — Prepare plain-language explanations, subject to lawyer review.
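
A minimal Python sketch of the "confirm afterwards" step in point 3: it pulls citation-like strings out of an AI-generated draft so each one can be looked up by hand in SCC, Manupatra, or Indian Kanoon. The regex patterns and the sample draft are illustrative assumptions; Indian reporters use far more citation formats than the three shown.

    # Extract citation-like strings from an AI draft into a manual
    # verification worklist. Patterns are illustrative, not exhaustive.
    import re

    CITATION_PATTERNS = [
        r"\(\d{4}\)\s+\d+\s+SCC\s+\d+",       # e.g. (2019) 3 SCC 123
        r"AIR\s+\d{4}\s+SC\s+\d+",            # e.g. AIR 1973 SC 1461
        r"\d{4}\s+SCC\s+OnLine\s+\w+\s+\d+",  # e.g. 2021 SCC OnLine Del 100
    ]

    def citations_to_verify(draft: str) -> list[str]:
        """Return every citation-like string in the draft, deduplicated."""
        found: list[str] = []
        for pattern in CITATION_PATTERNS:
            for match in re.findall(pattern, draft):
                if match not in found:
                    found.append(match)
        return found

    # Hypothetical draft text for illustration only.
    draft = "As held in (2019) 3 SCC 123 and AIR 1973 SC 1461, the builder..."
    for citation in citations_to_verify(draft):
        print("VERIFY MANUALLY:", citation)

The script only builds the worklist; it cannot tell a real citation from a fabricated one, so every string it prints still has to be confirmed in an authoritative database.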


Where NOT to use AI (strictly avoid)

  • Generating or relying on case citations, verbatim extracts, or legal authorities without independent verification.

  • Producing evidence, affidavits, or court records claimed as original sources.

  • Submitting AI-generated content to court as factual or authoritative without human corroboration.

  • Using AI to fabricate timelines, metadata, or chain-of-custody records.


Practical checklist for safe AI use (for advocates)

  1. Verify every citation: Cross-check case names, citations, and quotes in at least one authoritative database.

  2. Confirm primary sources: Always obtain and attach primary judgments or legislation PDFs before filing.

  3. Label AI assistance: Internally note which sections were drafted or researched with AI, and disclose that assistance where court rules require it; never misrepresent AI output as human-only work.

  4. Metadata diligence: When dealing with digital evidence, preserve original files and forensic reports; do not rely on AI reconstructions of deleted or altered data.

  5. Maintain chain of custody: Document each step with timestamps and verified exports from devices or cloud services; a hashing sketch follows this checklist.

  6. Use disclaimers in drafts: Treat AI outputs as preliminary drafts — “for internal use; verify before filing.”

  7. Continuing legal education: Train teams on AI limits and verification protocols.
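
For points 4 and 5, a minimal Python sketch of one way to record file hashes and timestamps for digital evidence, so later copies can be checked against the originals. The file names and the JSON-lines log format are illustrative assumptions; this does not replace a forensic image or an examiner's report.

    # Record SHA-256 hash, size, and a UTC timestamp for each evidence file
    # in an append-only JSON-lines log.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 65536) -> str:
        """Hash a file in chunks so large files do not exhaust memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def log_evidence(paths: list[Path], log_file: Path) -> None:
        """Append one JSON record per file: name, size, hash, timestamp."""
        with log_file.open("a", encoding="utf-8") as log:
            for path in paths:
                record = {
                    "file": str(path),
                    "size_bytes": path.stat().st_size,
                    "sha256": sha256_of(path),
                    "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
                }
                log.write(json.dumps(record) + "\n")

    # Hypothetical file names for illustration only.
    log_evidence([Path("whatsapp_export.pdf"), Path("email_thread.eml")],
                 Path("custody_log.jsonl"))

Re-running sha256_of on a copy and comparing it with the stored hash shows whether the file changed after it was logged.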


How to handle an AI-generated error if discovered

  1. Immediately verify the claim.

  2. Retract or correct filings as soon as possible and notify the court if necessary.

  3. Preserve all AI prompts/outputs and communications for transparency; a logging sketch follows this list.

  4. Review internal processes and retrain staff to prevent recurrence.
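
For step 3, a minimal Python sketch of an append-only log for AI prompts and outputs. Each entry's hash covers the previous entry's hash, so later tampering with the record is detectable. The log file name and field names are assumptions for illustration.

    # Append prompt/output pairs to a JSON-lines log with a hash chain:
    # each entry's hash covers the previous entry's hash.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    LOG = Path("ai_research_log.jsonl")  # hypothetical log file

    def append_entry(prompt: str, output: str) -> None:
        """Write one tamper-evident log entry."""
        prev_hash = ""
        if LOG.exists():
            lines = LOG.read_text(encoding="utf-8").splitlines()
            if lines:
                prev_hash = json.loads(lines[-1])["entry_hash"]
        body = {
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "output": output,
            "prev_hash": prev_hash,
        }
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        with LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(body) + "\n")

    append_entry("Summarise the attached verified judgment.",
                 "Draft summary returned by the tool...")

If a question later arises about what the tool was asked and what it produced, the log can be disclosed as a contemporaneous record.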

CyberLegals © All rights reserved.