Context

With the growing adoption of Artificial Intelligence in courts, exemplified by the Kerala High Court’s recent policy on the use of AI tools in judicial processes, it is essential to establish clear frameworks that ensure its safe, ethical, and responsible application.

Introduction

The Kerala High Court’s initiative represents a progressive milestone in integrating AI into judicial processes. By balancing technological efficiency with strict safeguards, the policy sets a model for other courts to follow. As India grapples with a massive pendency of cases, responsible AI adoption can become a valuable ally, provided it remains firmly anchored to the principles of justice and fairness.

Operational Risks of AI in Courts

  • Translation & Transcription:
    • Errors such as “leave granted” being mistranslated as “chhutti sweekaar” (holiday approved), confusing the legal sense of “leave” with a vacation.
    • In Noel Anthony Clarke vs Guardian News & Media Ltd. (2025), “Noel” was transcribed as “no”.
    • OpenAI’s Whisper has been found to hallucinate entire phrases during pauses in speech.
    • Safeguards: Manual vetting by experts; limit AI to low-risk tasks.
  • Legal Research:
    • Algorithmic bias may render relevant precedents invisible in research results.
    • Studies show that LLMs fabricate case law and cite non-existent references.
    • Safeguards: Guidelines for AI research; AI literacy training for lawyers/judges.
  • Adjudication:
    • Risk of reducing nuanced judicial reasoning to rule-based outputs.
    • Safeguards: Keep AI assistive, not decisive; preserve human discretion.

Institutional and Structural Challenges

  • Pilot Programs:
    • Ongoing pilots (oral argument transcription, witness depositions) lack timelines, benchmarks, or data safeguards.
    • Safeguards: Define success metrics, ensure data protection, upgrade infrastructure.
  • Procurement & Risk Management:
    • Court tenders indicate that AI tools are being procured without ethical or legal frameworks in place.
    • Safeguards: Standardized procurement norms on explainability, data use, and risk checks; pre-procurement assessments.
  • Human Oversight & Hallucinations:
    • LLM hallucinations are an inherent feature of the technology, not accidental bugs.
    • Safeguards: Mandatory human oversight in high-risk applications; periodic audits of AI outputs.

Capacity, Rights, and Transparency

  • Capacity Building:
    • Judges, staff, and lawyers often lack AI readiness; courts remain largely paper-based.
    • Safeguards: AI literacy training via judicial academies; partnerships with AI governance experts.
  • Litigant Rights & Transparency:
    • There is no clear disclosure to litigants when AI aids in research or judgment drafting.
    • Safeguards: Inform litigants of AI use; allow them to opt out of AI-driven processes.

On the eCourts project

  • Monitoring Compliance: Frameworks are needed to help courts track vendor performance and compliance, tasks that often go beyond the expertise of judges and registry staff.
  • Vision Document Phase III: The Phase III vision document of the eCourts Project, prepared by the Supreme Court e-Committee, highlights the importance of creating technology offices to guide courts in selecting, assessing, and supervising complex digital solutions.
  • Bridging Expertise Gaps: Establishing such institutional scaffolding will ensure that courts receive specialist advice on infrastructure, software, and AI deployment.
  • Dedicated Specialists: By involving experts, courts can adopt AI tools with greater clarity, oversight, and long-term planning, reducing risks linked to limited technical capacity.

Conclusion

As courts gradually move towards the adoption of artificial intelligence (AI), it is crucial to remember that its primary role is to advance the cause of justice. In today’s fast-changing technological environment, the introduction of AI in judicial processes must be guided by clear and transparent frameworks. Such guidelines are necessary to ensure that the pursuit of efficiency and speed does not overshadow the deliberate reasoning, empathy, and human judgment that lie at the core of the adjudicatory process.

