In 2024, a Stanford University study revealed a startling statistic: popular AI chatbots hallucinated between 58% and 82% of the time when responding to legal queries. Yet, despite this alarming error rate, lawyers around the world continue to rely on artificial intelligence to draft motions, conduct research, and even prepare pleadings. The consequences of such reliance were laid bare in June 2023, when New York attorneys Steven Schwartz and Peter LoDuca were sanctioned by a federal court for submitting a legal brief written by ChatGPT, one that confidently cited non-existent judicial authorities and fabricated case law. Judge P. Kevin Castel found that the lawyers had “abandoned their responsibilities,” fining them and ordering them to notify every judge falsely cited in their AI-generated filing.
This incident is not an isolated embarrassment; it is a cautionary tale. From healthcare algorithms that systematically disadvantaged Black patients in U.S. hospitals, to AI-powered financial systems denying credit without explanation, artificial intelligence has already proven capable of scaling both efficiency and error at unprecedented speed. Whether in our phones, offices, or homes, AI silently assists our daily lives, yet when it fails, the law is often unprepared.
This brings us to a critical and timely question: Who is legally responsible when AI gets it wrong? When an autonomous system misdiagnoses a patient, rejects a loan application, or generates false legal authority, does liability rest with the developer, the data provider, the deploying institution, or the end user who relied on the output?
In this article, we unpack the emerging doctrine of AI liability through the lens of Nigeria’s evolving legal and technological landscape. We will explore the existing frameworks, the gaps in regulation and the potential parties who may bear responsibility, and, most importantly, offer practical guidance for innovators, legal practitioners and policymakers navigating a world where algorithms now make decisions once reserved for humans.
The Rise of AI in Nigeria and the Need for Liability Clarity
With a population of over 200 million, most of it young, Nigeria’s tech ecosystem is thriving. We are seeing AI applications in banking (think fraud detection by apps like Opay), healthcare (AI-powered telemedicine platforms) and even governance (predictive analytics for traffic management in megacities). However, errors in AI systems, often called “algorithmic errors”, can lead to financial losses, injuries or even fatalities. Liability refers to the legal responsibility for these harms and ensures that victims can seek redress.
In Nigeria, where trust in technology is high but legal protections lag, understanding liability is crucial. Without it, businesses risk lawsuits and consumers face uncompensated harms. As of 2025, Nigeria lacks a dedicated AI law, unlike the European Union’s AI Act, which categorises AI risks and imposes strict liabilities. However, Nigeria’s signing of the Bletchley Declaration on AI Safety in 2023, alongside 27 other countries, committing signatories to address AI risks and develop risk-based policies, adds to the pressure for reform.
Nigeria’s Current Legal Framework for AI Liability
While no specific statute governs AI in Nigeria, several laws apply indirectly. The Nigeria Data Protection Act (NDPA) 2023 is pivotal, as AI often relies on vast datasets. Under the NDPA, enforced by the Nigeria Data Protection Commission (NDPC), organisations must ensure data processing is lawful, transparent and secure. If an AI system errs due to biased or mishandled data, controllers could face liability for breaches, with fines up to 2% of annual turnover. The NDPA’s General Application and Implementation Directive (GAID) of March 2025 further guides compliance, emphasising accountability in automated decisions.
Beyond data protection, general principles from tort, contract and consumer protection laws fill the void:
- Tort Law (Negligence and Nuisance): Rooted in common law, as seen in cases like Donoghue v. Stevenson (1932), which influences Nigerian jurisprudence, liability arises where someone owes a duty of care, breaches it, and causes harm. For AI, a developer who fails to test an algorithm adequately, leading to errors, could be found negligent. Nigerian courts have applied this in product liability cases, such as defective goods causing injury.
- Contract Law: If AI is part of a service agreement, breaches can lead to claims. For instance, a Nigerian bank using AI for loan approvals must ensure the system does not breach contractual terms on fair treatment. The Evidence Act 2011 recognises electronic evidence, which aids proof in AI-related disputes.
- Product Liability: Under the Federal Competition and Consumer Protection Act (FCCPA) 2018, which replaced the Consumer Protection Council Act, and standards from the Standards Organisation of Nigeria (SON), AI-embedded products (like smart devices) are treated as goods. If defective, manufacturers are strictly liable, meaning victims need not prove negligence, only that the defect caused harm. This echoes global trends but is under-tested in AI contexts here.
- Intellectual Property and Other Laws: The Copyright Act 2022 and Patents and Designs Act touch on AI-generated works, but ownership remains unclear, potentially complicating liability if AI “creates” infringing content.
Who Bears Responsibility? Key Parties in AI Liability
Determining fault for AI errors involves multiple actors. Here is a breakdown relevant to Nigerian scenarios:
- Developers and Manufacturers: Often the primary targets. If an AI algorithm is poorly designed (e.g., trained on data biased against some of Nigeria’s diverse ethnic groups), developers could be liable in negligence. Globally, incidents such as Uber’s 2018 self-driving car fatality have tested corporate accountability; in Nigeria, similar logic would apply via tort law.
- Deployers and Operators: Businesses integrating AI, such as a Lagos fintech using chatbots for customer service. If the AI errs (e.g., giving wrong financial advice), the company may be held vicariously liable for its “agent.”
- Users and End-Consumers: In some cases, misuse by users (e.g., overriding AI safety features in a drone) shifts blame. However, where the AI is marketed as foolproof, strict liability may still protect consumers.
Challenges in Enforcing AI Liability in Nigeria
Several hurdles exist:
- Attribution of Fault: AI’s opacity makes proving causation difficult. Who “caused” an error: the code, the data, or external factors?
- Jurisdictional Issues: With global AI firms like Google operating here, enforcing judgments across borders is tricky, though treaties like the Hague Convention help.
- Regulatory Gaps: No mandatory AI audits or risk assessments, unlike the EU. This leaves victims relying on slow court processes.
Internationally, the EU AI Act (adopted in 2024) imposes requirements on high-risk AI systems, including transparency and human oversight, with fines of up to €35 million or 7% of global turnover. The US focuses on sector-specific rules, while Africa’s landscape varies; South Africa and Egypt lead with national AI strategies. Nigeria could adopt a hybrid approach: a national AI policy incorporating global best practices.
In conclusion, artificial intelligence is no longer a distant concept; it represents both the present and the future of the global community, one in which Nigeria is certainly not left behind. It is already shaping how Nigerians work, trade and interact. Yet, as we embrace the benefits of AI, we must also prepare for its legal and ethical implications. Ensuring accountability when algorithms err is essential to maintaining public trust and safeguarding rights in our digital age.
Recommendations for Nigerians:
- For Businesses: Conduct AI impact assessments, include indemnity clauses in contracts, and comply with the NDPA. Train staff on AI risks.
- For Individuals: Read the terms of service for AI tools, report errors to regulators like the Nigeria Data Protection Commission, and seek legal advice promptly.
- For Policymakers: Enact an AI Act focusing on liability, drawing from the EU AI Act while adapting it to the Nigerian context and keeping compliance affordable for startups.
Contributors

Abiola Mohammed
Associate II