AI in Global Majority Judicial Systems

While the judicial system benefits from AI implementation in case management and processing, the technology poses a threat to ethics and trust

As artificial intelligence continues to grow in popularity and application globally, various fields have shown both apprehension and fascination with the new technology. The judiciary seems to recognize the great potential, as well as the risks, of embracing artificial intelligence. On one hand, the judiciary can benefit greatly from AI by improving administrative tasks and automating case management. On the other, using AI in judicial systems risks building distrust, undermining accountability, and eroding privacy. While the double-edged nature of AI is not a new concept, the high stakes of judicial decision-making warrant caution and careful consideration of the risks that this new and emerging technology poses.

Editor’s Note: Since 2023, Microsoft’s Office of Responsible AI has partnered with the Strategic Foresight Hub at the Stimson Center to convene a diverse group of experts from the Global South to evaluate the impacts of AI in emerging markets. Guided by the question of how AI-related risks and benefits might manifest in various social, cultural, economic, and environmental contexts, program participants identify technological and regulatory solutions that can help mitigate risks and maximize opportunities across the globe. Fellows also have the opportunity to publish at Stimson; in the RAI Case Studies, Fellows share insights about responsible AI governance from within their own thematic and geographic areas of expertise. Ibrahim Sabra is an alumnus of the Responsible AI Fellowship.

By Giulia Neaher, Managing Editor for RAI Case Studies

Introduction 

The assertion that artificial intelligence (AI) will increasingly replace humans across various sectors continues to gain traction, reflecting both fascination and apprehension about the future of work. This sentiment is particularly relevant in the context of the judiciary, where the adoption of AI tools, especially generative AI, is becoming a powerful global phenomenon. 

A recent UNESCO survey highlights this trend, revealing both varying AI awareness and significant adoption gaps among judges. While 92% report some AI understanding, expertise varies widely: 31% consider themselves highly knowledgeable, 41% say they have moderate knowledge, and 20% possess only slight familiarity, with 7% admitting they have no knowledge at all. Regarding practical use, 44% have used AI tools for work, with 41% specifically using LLMs like ChatGPT. However, slightly more than half have not used AI tools professionally, and 59% avoid LLMs entirely. Common AI applications include information searching (43%), document drafting (28%), brainstorming (14%), concept explanation (5%), data analysis (4%), and information verification (3%). It is important to note, however, that most users exercise caution. While 55% incorporate AI output into their work, they typically review and edit it, with only 6% using content without verification. Despite growing interest, organizational support remains inadequate, with only 9% of respondents reporting their organizations have issued AI usage guidelines or provide training resources. 

Building on these findings, this case study examines the challenges and opportunities associated with the integration of AI in the judiciary. It aims to provide a broader understanding of AI’s role in judicial systems, contributing to the ongoing discourse on the responsible and effective use of AI in the judiciary within the unique socio-political and legal contexts of the Global Majority. 

AI Systems Categories in the Judiciary 

The integration of AI into judicial systems can be broadly categorized into three primary groups: AI clerical assistive systems, AI-based recommendation systems, and AI semi-decision-making systems. Ranging from the most basic to the most advanced applications, these categories reflect the degree of complexity and autonomy of AI systems.

AI Clerical Assistive Systems 

These systems are designed to enhance court efficiency by automating repetitive tasks to reduce working hours per case and ensure accurate and consistent documentation. For example, they support administrative and procedural tasks, such as document classification, case management, and automatic transcription, thereby streamlining court operations. In Egypt, AI transcription tools have been developed as part of judicial digitalization efforts to automate court transcripts, significantly reducing the manual effort required and improving session documentation efficiency. 

Similarly, Türkiye has implemented several AI initiatives under its eJustice system (UYAP). These include speech-to-text systems, legal text summarization, and document classification for enforcement processes. The country has also developed the Belonging Decision Estimation and Inconsistency Detection in Bill of Indictment AI systems, which aim to expedite decision allocation and validate indictment data, respectively.  

Other countries have also adopted similar technologies. For example, Azerbaijan’s e-Enforcement Information System facilitates enforcement case management, while India’s SUVAS program translates judicial decisions into regional languages. Singapore has also deployed the Intelligent Court Transcription System (iCTS) in its courts. Morocco and Tanzania employ transcription and translation tools to improve multilingual accessibility in courtrooms. 

As part of China’s Smart Courts project, various AI systems such as optical character recognition, automatic speech recognition, and natural language processing have been deployed in courts to create automated transcriptions. These innovations have arguably reduced average trial times by 30% and significantly decreased the manual workload of court clerks, but critics still raise concerns around the use of AI by law enforcement in high-stakes contexts. 

AI Recommendation Systems 

To offer insights that assist judicial decision-making, these systems analyze historical data, legal codes, and case precedents, while performing tasks such as legal research, analysis, and drafting. In the United Arab Emirates, for example, Aisha, an AI-powered virtual employee, aids judges by analyzing past cases and providing jurisprudential insights. In Singapore, a Small Claims Tribunal AI system helps litigants by guiding them through claims processes, identifying required materials, filing their claims accurately, and providing insight into potential claim outcomes and amounts. 

Brazil has implemented SIGMA, an AI system designed to assist judges in the drafting process of judicial decisions. The program analyzes stored texts and compares them with procedural documents to identify relevant information. It suggests models and templates for drafting reports, decisions, and judgments, automating the preparation of judicial documents. By referencing past cases and legal precedents, SIGMA is expected to ensure consistency in judicial decisions.  

In June 2024, the Shenzhen Intermediate People’s Court became the first to systematically integrate a large language model into judicial reasoning, collaborating with ModelBest to address its high caseload. The system, trained on two trillion Chinese characters of legal texts, assists judges in civil and commercial cases across family law, property disputes, contracts, torts, corporate law, and labor disputes, with criminal and administrative functionalities expected by 2025. Shenzhen’s Intelligent Adjudication System focuses on core judicial reasoning through three key functions: summarizing case facts and disputed issues from complaints and evidence; generating tailored hearing prompts to guide judges through proceedings; and assisting in crafting judgments by generating written reasoning based on judges’ preliminary decisions, which judges then review and refine. 

AI Semi-Decision-Making Systems 

As of now, there are no fully autonomous AI decision-making systems that render final court decisions in the Global Majority. However, several AI systems may significantly contribute to decision-making processes.  

In Egypt, an AI tool under development estimates spousal support in alimony cases, ensuring harmonized and equitable outcomes by analyzing annotated datasets of historical court cases. Furthermore, in China, Shanghai’s 206 AI System and Hangzhou Internet Court’s Xiao Zhi AI system analyze past cases and legal codes, assisting with evaluating evidence, validating sentencing requirements, and identifying sentencing severity. 

Several Latin American countries have also deployed AI systems that increasingly influence judicial decision-making. Argentina’s Prometea automates deadline management, analyzes paperwork, predicts outcomes using precedents, and drafts legal opinions through guided prompts, evolving from an assistive tool into a critical contributor to judicial decisions. Brazil employs two notable systems: Victor, used by the Federal Supreme Court to identify cases meeting public interest requirements for review, which arguably decides case admissibility by automating traditionally human-conducted screening; and Sócrates, deployed by the Superior Court of Justice to analyze appeals by examining procedural documents and suggesting actions such as accepting or rejecting them. 

Colombia’s PretorIA supports the Constitutional Court by analyzing actions concerning fundamental rights. The system processes cases by classifying them against 33 predefined criteria, then identifies which cases meet those criteria, aiding judges in case selection. These systems collectively demonstrate AI’s progression from assistance to active participation in judicial processes. 
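The triage workflow described above — screening each case against a fixed set of selection criteria and flagging matches for human review — can be illustrated with a minimal sketch. This is a hypothetical illustration only: the criteria names, tags, and logic below are invented, and PretorIA’s actual implementation is not public in this detail.

```python
# Hypothetical sketch of criteria-based case triage, loosely modeled on the
# workflow described for systems like PretorIA: each case is checked against
# predefined selection criteria, and matching cases are flagged for judicial
# review. All criteria names and tags here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    tags: set = field(default_factory=set)  # labels extracted from filings

# A small stand-in for a predefined criteria list: each maps a criterion
# name to the tag that would satisfy it.
CRITERIA = {
    "health_rights": "right_to_health",
    "vulnerable_group": "vulnerable_claimant",
    "novel_question": "unsettled_law",
}

def flag_for_review(cases):
    """Return (case_id, matched criteria) for cases meeting any criterion."""
    flagged = []
    for case in cases:
        matched = [name for name, tag in CRITERIA.items() if tag in case.tags]
        if matched:
            flagged.append((case.case_id, matched))
    return flagged

docket = [
    Case("T-101", {"right_to_health"}),
    Case("T-102", {"contract_dispute"}),
    Case("T-103", {"vulnerable_claimant", "unsettled_law"}),
]

print(flag_for_review(docket))
# → [('T-101', ['health_rights']), ('T-103', ['vulnerable_group', 'novel_question'])]
```

Note that in this design the system only filters and annotates; the decision to select a flagged case remains with the judges, mirroring the human-in-the-loop arrangement the article describes.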

Challenges 

AI in the judiciary is promoted as reducing human biases through more rational, consistent rulings. Proponents argue AI could help judges identify and mitigate cognitive biases arising from internal factors (mood, political views, socio-economic background) or external factors (weather, sports outcomes, a defendant’s appearance). However, AI systems introduce biases of their own, which can be even more harmful if left unchecked.  

Sir Robert Buckland identifies four main bias sources: data bias, where AI reflects training data biases, often disadvantaging marginalized communities; coding bias, as translating laws into software requires programmer assumptions leading to divergent interpretations and inconsistent enforcement; intentional misuse in weak judicial systems where AI could be exploited politically under false impartiality claims; and human-AI interaction bias, where judges may over-rely on or dismiss AI recommendations. 

In addition to these, the following points are some of the key challenges that the use of AI in the judiciary could amplify: 

Privacy Erosion 

Sensitive information, including financial records, medical histories, private communications, and evidence, is often shared during legal proceedings. If AI systems are used to process or analyze this data, there is a risk that the information could be exposed to unauthorized parties, either through data breaches or misuse by the AI itself. This is particularly concerning when these models are trained on legal documents or case records, since there is a risk that private information could inadvertently be memorized and later reproduced by the AI, violating privacy and confidentiality. 

Legal Stasis 

The concept of Law Fluidity underscores a critical challenge in integrating AI into the judicial system: the potential loss of the dynamic, evolving nature of the law. In common law systems, the law is not static; it evolves through judicial decisions that interpret, refine, and sometimes even reshape legal principles. This process relies heavily on the human element: judges who balance the need for legal stability with the necessity of adapting to new societal, technological, and ethical realities. AI, by contrast, operates on predefined algorithms and data, lacking the capacity to engage in the nuanced, context-sensitive reasoning that underpins the development of legal principles. The absence of human judgment in an AI-driven courtroom could lead to a stagnation of the law, as AI systems would likely prioritize consistency over adaptability. While consistency is important in the legal system, excessive rigidity can prevent the law from evolving to address novel situations and societal changes, or from correcting past injustices. In short, AI adjudication could “promote legal stasis and impede the natural fluidity of the law,” thereby hindering its improvement. 

Autonomy Reduction 

The law entrusts decision-making authority to human judges, not AI. Legal decisions, while grounded in evidence and rules, also require uniquely human qualities such as empathy, intuition, and moral reasoning. These elements enable justice to be administered with nuance, particularly in cases that demand moral discretion. Integrating AI into judicial processes risks diminishing these human traits, eroding judges’ autonomy and independent judgment over time. While AI can provide valuable insights and support informed decision-making, it may inadvertently discourage judges from drawing on their own experience, moral judgment, and values. Despite AI’s ability to process vast amounts of data, it cannot fully comprehend human experiences or moral complexities, leading to over-standardization. Widespread AI adoption risks promoting a one-size-fits-all approach that undermines contextual justice and limits a judge’s ability to tailor decisions to individual cases. Moreover, judges may feel pressured by AI recommendations, leading to superficial reviews and overreliance on algorithmic outputs. Such a dynamic could reduce judges to mere supervisors of AI tools, effectively shifting judicial authority to the private entities that design and control these technologies.  

Obscure Justice 

Employing opaque AI tools in the judiciary risks creating an accountability deficit, leading to obscure justice, which would undermine transparency and fairness in the judicial system, as exemplified by the COMPAS case. The use of proprietary algorithms, like COMPAS, raises significant concerns, particularly when their methods and training data remain undisclosed despite public outcry over bias and due process violations. Such opacity can stem from corporate or state secrecy, wherein critical details about how the AI operates are hidden. It can also arise from the technical complexity of AI systems, which may lead judges to misinterpret outputs, for instance by confusing correlations with causation due to a lack of technical literacy. Finally, intrinsic opacity results from the fundamental differences between human and algorithmic cognition, making AI’s decision-making processes difficult to explain or interpret, often rendering them a black box. 

Public Distrust  

Trust is crucial to the judicial system: people comply with laws they perceive as legitimate and fair, and the use of opaque AI tools risks eroding that trust. Research highlights a human-AI fairness gap, with human judges often seen as fairer than AI, potentially leading the broader public to reject AI judgments. Expanding AI’s role in courts could therefore trigger negative public perceptions, increasing appeals to human judges and undermining the efficiency gains AI promises. It is thus vital to assess whether AI courts can foster public participation, ensure individuals feel heard, and uphold the legitimacy of both the courts and the broader governmental systems in which they operate. 

Opportunities 

While challenges are important to consider, the integration of AI into the judiciary can also present transformative opportunities to enhance efficiency, accuracy, and accessibility across the legal system, offering tangible cost and time savings.  

Administrative Task Automation and Improved Case Management 

Automating repetitive tasks might reduce administrative expenses and expedite case resolution, contributing to more effective resource allocation. By promoting consistency in routine case outcomes, AI could minimize disparities caused by human error. Furthermore, AI could improve access to justice by addressing case backlogs and expediting routine matters. Simplified filing processes through AI-powered portals might empower self-represented litigants to navigate legal procedures, potentially making justice more accessible. 

Similarly, AI-powered translation tools could enhance access to justice in the judiciary by facilitating accurate, real-time communication across language barriers, enabling individuals to fully participate in legal proceedings. These tools could translate spoken and written communications, such as testimony and case documents, ensuring equitable access to information for all parties. While potential errors, such as misheard phrases or inaccuracies, mirror those of human interpreters, AI systems can reduce such risks through standardization and consistency, with human oversight. 

Streamlined Case Processing and Enhanced Support for Complex Cases 

AI can streamline case processing across multiple judicial areas. In routine administrative, civil, and small claims cases, digital submission systems with AI integration might process filing data directly, thus reducing manual entry, minimizing errors, and accelerating case management. For family and employment matters, AI can assess proposed agreements for legal compliance, ensuring prompt and accurate judgments. AI-powered filing portals could also help litigants prepare properly formatted submissions, reducing delays from incomplete filings.  

In complex cases, AI-powered knowledge systems could further provide judges with rapid access to relevant precedents, statutes, and case law while organizing large digital files for easier analysis. These tools might help analyze extensive case data, such as financial records or digital communications, to uncover patterns or anomalies that inform decision-making. AI could also offer real-time insights during proceedings or simulate legal scenarios, aiding judges in their deliberations. This could allow judges to focus on substantive legal issues with greater clarity and efficiency, improving overall decision quality.   

Cross-Jurisdictional Learning and Collaborative Innovation 

The scalability and customizability of AI technologies suggest that they could adapt to various case types, legal systems, and jurisdictions. AI could also facilitate global knowledge sharing, allowing jurisdictions to exchange best practices, harmonize legal standards, and streamline cross-border disputes. Such developments might foster international collaboration and contribute to a more cohesive judicial framework.   

AI could also play a role in judicial training and development by creating realistic simulations and training programs for judges, lawyers, and court staff. These programs might help legal professionals stay updated on evolving legal standards and improve their decision-making skills. Additionally, AI systems could continuously learn from new cases and legal developments, ensuring they remain relevant and up to date.   

Conclusion  

While the potential applications of AI in the judiciary are vast, the current lack of robust guardrails necessitates caution and an unwavering commitment to judicial independence. Judges, as ultimate decision-makers, must retain personal responsibility for all material produced in their name. Being human, judges may initially distrust AI but could eventually over-rely on it, potentially overlooking errors, inaccuracies, or injustices. This underscores the importance of using AI sparingly and only for non-decisive aspects, to mitigate risks of inaccuracy and overreliance.  

Judges should not employ AI for legal research or analysis without rigorous verification due to the dangers of misinformation and AI hallucination, nor should automated services handle private or confidential data. Judicial reasoning must remain rooted in independent legal research and analysis, and judges must reach their conclusions before incorporating AI into the drafting process. Human oversight is critical at every stage of AI integration. To ensure transparency and build public trust, external audits of AI systems within the judiciary should complement internal reviews. Such measures would safeguard the judiciary’s integrity, independence, and credibility while responsibly leveraging AI’s benefits.  

Simply put, the use of a faulty AI application to transcribe oral hearings is far less consequential than relying on a flawed AI system to determine the length of a criminal sentence. The stakes in judicial decision-making are incomparably higher, demanding the utmost caution and precision.
