Nigeria is on the verge of formulating its inaugural national AI policy, a strategic initiative poised to drive the country’s aspiration for transformative growth through AI technology. This blueprint underscores Nigeria’s commitment to leveraging AI for sustainable development while harnessing its potential to fuel innovation, bolster national productivity, and enhance human welfare. A pioneer of AI advancement on the African continent, Nigeria has established the National Centre for Artificial Intelligence and Robotics (NCAIR) and other government institutions dedicated to fostering a knowledge-based economy. These entities play a pivotal role in nurturing AI research and development within the nation.
The Nigerian landscape also boasts a vibrant pan-African AI ecosystem, teeming with private entities, businesses, and startups at the forefront of showcasing, implementing, and evolving AI systems within the country. Analogous to the European Union’s General Data Protection Regulation (GDPR), the Nigeria Data Protection Regulation (NDPR) furnishes a robust legal framework for the orchestration and exchange of electronic data. The NDPR not only aligns with international standards safeguarding individual data privacy but also protects the integrity of transactions involving personal data exchange. Its overarching objectives encompass the prevention of personal data manipulation, the fortification of Nigerian businesses in global trade through equitable data protection mechanisms, and the alignment of the Nigerian data protection authority with global best practices.
A cornerstone of Nigerian cyber law, the Cybercrimes (Prohibition, Prevention, Etc.) Act, 2015 wields significant influence. Enacted to counteract cyber threats, the act establishes a comprehensive legal, institutional, and regulatory framework. It criminalizes cybercrime and provides for the detection, prosecution, and punishment of offenders. Furthermore, the legislation champions cybersecurity, shielding computer systems, networks, electronic communications, data, and software. It stands resolute in upholding intellectual property rights, privacy, and the preservation of critical national information infrastructure. In its entirety, the act erects a comprehensive scaffolding for addressing cybercrimes within Nigeria, from prevention to resolution.
In a parallel development, the year 2020 witnessed the National Information Technology Development Agency (NITDA) roll out the Guidelines for the Management of Personal Data by Public Institutions in Nigeria. These guidelines encompass stipulations with direct and indirect implications for diverse facets of AI utilization in Nigeria. This signals Nigeria’s emergence into an epoch of “AI normative emergence,” poised to usher in a wave of legislation, regulations, and directives governing AI implementation and usage. Anticipating these transformative shifts, Paradigm Initiative is actively engaging with key stakeholders in Nigeria, advocating for an AI strategy that upholds human rights and ethical considerations.
Presently, Nigeria lacks a formalized national AI policy. Yet, the National Information Technology Development Agency (NITDA), in tandem with NCAIR and other vested stakeholders, is at the forefront of driving progress in this realm. Notably, numerous government ministries, departments, and organizations in Nigeria are actively engaged with AI and other emerging technologies. These include the Federal Ministry of Communications and Digital Economy, the Federal Ministry of Science and Technology, the National Board for Technology Incubation, the National Office for Technology Acquisition and Promotion, and the Nigerian Communications Commission.
While Nigerian law extends its reach into AI deployments, the efficacy of enforcement against the risks posed by AI systems remains a pertinent query. A nuanced exploration seeks to ascertain whether existing legal instruments suffice or if tailored legislative adaptations are requisite to effectively address emerging AI challenges.
Considerations and Recommendations for Artificial Intelligence (AI) Regulation in Nigeria
The emergence of AI poses significant challenges to the rule of law, fundamental rights protection, and the integrity of Nigeria’s judicial system. The implications become even more profound when considering the potential incorporation of AI-based decision-making tools within the realms of justice and law enforcement. Preserving the core principles of the rule of law, anchored in fundamental rights, stands paramount and cannot be compromised for the sake of expediency or cost-effectiveness within the legal framework and its beneficiaries. Effectively navigating this transformative landscape demands the establishment of precise guidelines and regulations, coupled with a well-defined role for AI systems within Nigeria’s judicial sectors.
The elevation of transparency, equity, accountability, and ethical tenets should take precedence. Integrating AI systems into Nigeria’s democratic fabric necessitates a departure from blind reliance solely on the expertise of computer scientists and engineers. Collaboration between domain experts, public servants, and individuals engaged in sectors pertinent to the rule of law is imperative to build trust. Such collaboration requires a meticulous assessment of the distinct skill sets and responsibilities of the diverse professionals and entities involved. Mere adherence by AI service providers to certifications, approvals, and trust marks indicative of ethical compliance is insufficient to ensure transparency and accountability.
The growing application of AI in criminal proceedings introduces new complexities for defense lawyers, including the need to comprehend and interpret data pivotal to legal cases. Clients involved in such proceedings should rightfully anticipate their defense counsel’s ability to identify and elucidate any substantial and recurring biases inherent in AI-driven analyses. The Nigerian government and policymakers must be able to place justified confidence in AI technologies performing as intended. The overarching objective is to leverage AI’s advantages in enhancing public access to judicial systems while concurrently mitigating the attendant risks and downsides.
For practitioners aiming to leverage AI-powered tools in legal service delivery, a comprehensive understanding of the inner workings, limitations, as well as potential risks and benefits of these applications should precede their adoption. As such, training should prioritize the ethical and human rights dimensions, facilitating users’ comprehension of the technical milieu they are poised to operate within.
The key takeaway from this article underscores the imperative for the Nigerian government and law enforcement to closely monitor AI’s impact on the nation’s legal, economic, social, and judicial structures. Nigeria has demonstrated its readiness to establish a framework encompassing the study, creation, implementation, coordination, and oversight of AI systems as catalysts for transformative agendas spanning employment generation, economic expansion, and governmental transparency.
In shaping a comprehensive AI policy, Nigerian authorities and pertinent stakeholders must contemplate the most effective approach to safeguard citizens’ human rights while fostering an AI-powered economy aligned with best practices. These best practices encompass algorithmic accountability, data protection, transparency in machine learning-based decision-making, and beyond. Pioneering an effective AI strategy entails harnessing advancements in AI and allied technologies to address pressing challenges such as food security and healthcare accessibility. Given Nigeria’s burgeoning youth population, a well-crafted strategy becomes indispensable to mitigate potential job displacement and guide the youth toward meaningful engagement with the growing AI economy.
Moreover, the unforeseen utilization of AI and digital technologies has rendered Nigerians’ personal data susceptible to misuse or illicit exploitation. Pertinent concerns like algorithmic bias, privacy erosion, opacity in AI deployment, and the challenge of cultivating trust and comprehension among Nigerians regarding AI must all be accorded due consideration by the government.
Given the gravity of the situation, the development of an AI policy rooted in respect for human rights is pivotal, upholding Nigeria’s democratic ideals and constitutional requisites, while simultaneously addressing the socioeconomic needs and aspirations of its populace. Enacting an AI strategy that places paramount importance on ethical data acquisition and utilization from the outset can catalyze genuine transformation, fostering a robust AI ecosystem in Nigeria characterized by the promotion and safeguarding of human rights.
As the Nigerian National Information Technology Development Agency (NITDA) diligently works toward crafting a National Artificial Intelligence Policy (NAIP), the delineation of its scope assumes pivotal importance within the forthcoming regulatory framework governing AI. In light of escalating cyberattack risks, potential loss of control, challenges in anomaly detection and monitoring, and the expansive attack surface in the digital domain, a prudent recommendation is that the principal objective of the NAIP center on safeguarding the nation against AI-driven attacks. This involves both mitigating the risk of attacks on AI systems and minimizing the impact of successful breaches. Achieving this goal requires encouraging stakeholders to adopt a comprehensive set of best practices for fortifying systems against AI-driven attacks. These practices include weighing risks and attack surfaces during AI system deployment, implementing reforms that raise the cost and complexity of executing attacks, and formulating response plans to contain the damage from successful attacks.
Additionally, it is advisable to mandate that certain public and private sector entities adhere to the National Artificial Intelligence Policy (NAIP). In the public sector, compliance should be mandatory for all instances of AI application by the government and be made a prerequisite for private companies engaged in selling AI systems to the government. For the private sector, high-risk AI applications should necessitate compliance, while low-risk applications should not face such a requirement, ensuring innovation remains unhindered in this rapidly evolving field. This approach is poised to bolster communal, military, and economic security in the face of AI-driven attacks. However, comprehending the intricacies of this issue marks the initial stride for policymakers and stakeholders alike on the path to achieving such security, and that is precisely where our focus shall be directed.
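The tiered compliance scheme just described can be expressed as a simple classification routine. The sketch below is purely illustrative: the high-risk domain list, tier rules, and function names are assumptions for exposition, not provisions of any existing or draft NAIP.

```python
# Illustrative sketch of the tiered NAIP compliance logic described above.
# The domain list and tier rules are hypothetical assumptions.

HIGH_RISK_DOMAINS = {"law enforcement", "justice", "healthcare", "biometrics"}

def naip_compliance_required(sector: str, domain: str,
                             sells_to_government: bool = False) -> bool:
    """Apply the proposed tiering: all public-sector uses and government
    vendors must comply; private uses comply only when high-risk."""
    if sector == "public":
        return True                      # every government AI application
    if sells_to_government:
        return True                      # prerequisite for selling AI to government
    return domain in HIGH_RISK_DOMAINS   # private sector: high-risk uses only
```

Under this sketch, a private recommendation engine faces no compliance burden, preserving the low-risk innovation space the text describes, while a private vendor of the very same system to a ministry would be pulled into scope.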
Legal Framework and Policy Development for Artificial Intelligence (AI) in Nigeria
The utilization of artificial intelligence (AI) must align with national principles concerning human rights, democracy, and the rule of law. This necessitates the establishment of a robust legal framework. This framework should adopt a risk-based strategy to cultivate a regulatory atmosphere that encourages positive AI innovation. Simultaneously, it should address the potential threats highlighted and rectify the substantive and procedural legal gaps identified. The goal is to ensure its pertinence and effectiveness in comparison to existing protocols.
To achieve this objective, foundational principles governing artificial intelligence (AI) should be defined. From these principles, specific rights applicable to individuals can be delineated. These rights could encompass existing ones, newly tailored rights addressing AI-related challenges and opportunities, or supplementary clarifications of preexisting rights. Correspondingly, guidelines for AI system creators and implementers should be established to ensure compliance with these specifications.
The development of forthcoming legal measures may introduce fresh rights and responsibilities if deemed essential, advantageous, and proportionate to the ultimate aim: safeguarding against potential adverse repercussions of AI system development and application on human rights, democracy, and the rule of law. This must be carried out while weighing the equilibrium of the varied legitimate interests in play. Moreover, where permissible and in accordance with legal provisions, provisions for exceptions to both new and established rights should exist to safeguard public safety, national security, or other valid public concerns.
The core principles deemed critical in the AI systems context are elaborated in the ensuing paragraphs, along with the corresponding rights and obligations tied to them. These foundational elements could potentially form part of a future legally binding national AI policy for Nigeria. While these principles, rights, and mandates are delineated in a manner that possesses cross-cutting applicability, they can be harmonized with a sector-specific approach. This approach could entail contextual prerequisites in the form of non-binding regulatory tools such as sectoral guidelines or assessment checklists.
Upholding the Value of Human Dignity
At the core of all human rights lies the principle of human dignity. This principle acknowledges the intrinsic worth of every individual, purely by virtue of their humanity. Human dignity stands as an inherent and inalienable entitlement. Consequently, even when circumstances warrant the limitation of a human right – such as when balancing competing rights and interests – the preservation of human dignity remains paramount. This underscores that the creation, advancement, and utilization of AI systems must consistently uphold the dignity of those who engage with them or are impacted by their actions. People should be treated as ethical entities, not objects to be categorized, evaluated, foreseen, or controlled.
AI applications hold the potential to champion human dignity and empower individuals, yet they can also inadvertently erode that very dignity. To safeguard human dignity, individuals must be informed when they are interacting with an AI system and must not be misled in this regard. Additionally, individuals should possess the autonomy to choose whether or not to engage with such systems, and they should not be subjected to decisions that are substantially influenced or made by an AI system, especially if these decisions violate the tenets of human dignity. Furthermore, due to the potential implications for human dignity, certain tasks might necessitate human involvement rather than machine intervention. As a general principle, the design and deployment of AI systems should safeguard and nurture both the physical and mental well-being of humans.
- Nigerian statutes and policies must ensure that exclusively humans undertake tasks that would transgress the boundaries of human dignity if executed by machines.
- In instances of uncertainty, Nigerian laws and policies should mandate that those deploying AI systems explicitly inform individuals that they are engaging with an AI system rather than an actual human being.
Protections of Human Rights, Democracy, and the Rule of Law through AI Safeguards
The incorporation of artificial intelligence (AI) systems within security and protection frameworks holds the potential for minimizing risks to individuals, the environment, and interconnected systems. However, the deployment of AI systems also harbors the possibility of misuse, resulting in adverse impacts on individuals, societies, and the environment. A central tenet is the prevention of harm, with a particular focus on preserving human rights, upholding democratic principles, and maintaining the rule of law. The preservation of physical and mental well-being remains paramount, especially for those who are more susceptible. Notably, vigilance is required in scenarios where AI applications might exacerbate existing disparities due to disparities in power or information – examples encompass relationships between employers and employees, businesses and consumers, and governments and citizens.
Importantly, beyond the individual impact of AI systems, harm prevention necessitates accounting for the natural ecosystem and its diverse life forms, as these underpin human sustenance. Furthermore, considerations extend to ensuring the security and reliability of AI systems, encompassing safeguards against technological vulnerabilities and potential threats like adversarial attacks or malicious manipulation.
In light of these considerations, Nigerian regulatory bodies hold the responsibility of instituting robust measures to mitigate and forestall harm stemming from the development and deployment of AI. This harm may manifest physically, psychologically, economically, environmentally, socially, or within legal frameworks. This cautious approach finds particular relevance within the context of public procurement procedures and the establishment of automated public procurement systems. The formulation of Nigerian policy should adopt a risk-centered methodology, with provisions for a circumspect stance and even potential prohibitions when warranted, such as in situations marked by elevated risk coupled with considerable uncertainty. Moreover, Nigerian policy should explore leveraging AI-based mechanisms to counteract and avert harm arising from human actions.
- Nigerian regulators must enforce that creators and implementers of AI systems integrate necessary measures to preclude physical or mental harm to individuals, society, and the environment. Strategies could involve making potentially hazardous AI systems operate on an opt-in basis, or where this is not feasible, imparting clear instructions on discontinuing usage and presenting alternative AI-free options.
- It falls on Nigerian regulators to ensure the existence of robust safety, security, and resilience standards embedded within AI systems, which developers and deployers must adhere to. These standards encompass facets like defense against attacks, precision, dependability, and the maintenance of data integrity. Rigorous testing and validation of AI systems prior to deployment and throughout their lifecycle are imperative to curtail such occurrences.
- Regulatory authorities in Nigeria must ensure that AI systems are conceived and employed sustainably, harmonizing with prevailing environmental protection legislation.
- Where pertinent, Nigerian regulators can foster the deployment of AI systems as a means to counteract and alleviate harm originating from human actions and technological systems, concurrently upholding human rights, democratic principles, and the rule of law. These efforts may also encompass endorsing AI solutions that safeguard human dignity and contribute to environmental solutions.
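A pre-deployment gate of the kind the safety and resilience bullets above call for could look like the following sketch. The metric names and minimum thresholds are invented for illustration; in practice they would come from the standards regulators define for each risk class.

```python
# Hypothetical pre-deployment gate reflecting the safety, security, and
# resilience standards above. Metric names and thresholds are assumptions.

def deployment_gate(measured: dict, minimums: dict) -> tuple:
    """Compare measured system metrics against minimum standards.
    Returns (approved, failed_checks)."""
    failed = [name for name, floor in minimums.items()
              if measured.get(name, 0.0) < floor]
    return (not failed, failed)

# Example standard a regulator might set for a given risk class.
EXAMPLE_MINIMUMS = {
    "accuracy": 0.90,         # precision and dependability
    "robustness": 0.80,       # resistance to adversarial perturbation
    "data_integrity": 0.99,   # integrity of training and input data
}
```

Re-running the same gate against freshly measured metrics at each audit interval is one way to realize the "throughout their lifecycle" requirement rather than treating validation as a one-off event.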
Safeguarding Human Freedom and Autonomy
The fundamental principles of human freedom and autonomy are integral components embedded within the framework of human rights enshrined in Nigerian legislation. Within the realm of artificial intelligence (AI), these principles are interwoven with the capacity of individuals to exercise independent judgment when it comes to utilizing AI systems and comprehending the consequences for themselves and others. This encompasses the critical decision-making process of determining when, how, and whether to harness AI technologies.
The deployment of AI has the potential to undermine human freedom and autonomy through a diverse range of avenues, including the facilitation of AI-propelled mass surveillance or targeted manipulation, regardless of whether initiated by public entities or private enterprises. Instances may involve the utilization of remote biometric recognition or online tracking.
As a foundational principle, AI systems ought to be employed to amplify and complement human abilities rather than to dominate, coerce, mislead, control, or condition them. Mechanisms of human oversight need to be woven into the fabric of AI implementation to ensure the availability of human intervention when the core tenets of human rights, democratic principles, and the rule of law are at risk. Despite this, the existing legal framework does not currently provide adequate mechanisms for effective human oversight. The extent and frequency of oversight must be tailored to the distinctive context of each AI application while preserving the autonomy of these human interventions. It is essential to guarantee that instances requiring human intervention are overseen by individuals vested with the autonomous authority to override system decisions, free from automation bias and from undue constraints on review time.
Key Factors to Consider:
- Any AI-facilitated manipulation, personalized profiling, or predictions involving the processing of personal data must strictly adhere to the regulations delineated in the Data Protection Act.
- Nigerian regulatory bodies should mandate AI developers and implementers to institute protocols for human oversight that safeguard human autonomy in a manner that is specifically calibrated to address the distinct risks prevalent in the context of AI system development and application. It is imperative to ensure a substantial level of human engagement in the operation of AI systems, contingent upon a contextual assessment of risks that encompasses the impact of the system on human rights, democratic values, and the rule of law. Competent individuals should possess the capability to deactivate or adjust the functionalities of AI systems as deemed necessary and feasible, grounded in a comprehensive risk evaluation. Those responsible for designing and managing AI systems must possess the requisite skills or qualifications to ensure adequate supervision that upholds human rights, democratic ideals, and the rule of law. In order to preserve the mental and physical well-being of individuals, practitioners of AI should steer clear of adopting models rooted in the “attention economy,” as these models could potentially curtail personal freedoms.
- Nigerian regulatory authorities should mandate that individuals engaged in AI development and utilization effectively communicate the rights of individuals in a timely and accurate manner.
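One way to read the oversight protocol in the bullets above is as a human-in-the-loop gate: decisions whose contextual risk crosses a threshold are held until a person with override authority rules on them, and that ruling always prevails. The structure, threshold, and field names below are illustrative assumptions.

```python
# Minimal human-in-the-loop sketch of the oversight protocol above.
# The risk threshold and record fields are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

RISK_THRESHOLD = 0.5   # above this, a competent human must confirm or override

@dataclass
class Decision:
    ai_outcome: str
    risk_score: float                   # contextual risk assessment, 0..1
    human_outcome: Optional[str] = None # set by the authorized reviewer

def needs_human_review(d: Decision) -> bool:
    return d.risk_score >= RISK_THRESHOLD

def final_outcome(d: Decision) -> str:
    """A recorded human determination always prevails over the AI output;
    a high-risk decision with no human ruling is not released at all."""
    if needs_human_review(d) and d.human_outcome is None:
        raise RuntimeError("high-risk decision awaiting human review")
    return d.human_outcome or d.ai_outcome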
Adherence to Non-discrimination, Gender Equality, Fairness, and Diversity in AI Systems
The utilization of AI systems has the potential to negatively impact fundamental principles such as non-discrimination, gender equality, and fairness. Extensive research has demonstrated that these systems can inadvertently perpetuate and amplify biased, discriminatory, and harmful stereotypes, thereby affecting not only individuals subject to such technology but also the broader fabric of society. Embracing AI systems that harbor inherent biases could exacerbate existing inequalities, jeopardizing the bedrock of social unity and equity crucial for the functioning of a democratic society.
Although international legal instruments already recognize and safeguard the right to non-discrimination and equality, it’s essential to contextualize these principles within the distinctive challenges posed by AI. The emerging concept of proxy discrimination in the realm of machine learning presents interpretative challenges in distinguishing direct from indirect discrimination and evaluating the adequacy of this distinction as currently defined. Similarly, the conventional norms for justifiability in cases of discrimination may necessitate re-evaluation in the context of machine learning.
Given the potential risks of perpetuating gender-based discrimination, stereotypes, and sexism through AI systems, particular attention must be directed toward their impact on gender equality. Cautious considerations are also warranted to prevent the amplification of prejudices against marginalized and vulnerable individuals, spanning racial, ethnic, and cultural backgrounds, as well as the perpetuation of racism. Addressing the glaring lack of diversity within the AI industry itself is a pressing concern; fostering diverse representation in decision-making processes related to AI systems, particularly in sensitive sectors, could serve to prevent and mitigate adverse human rights consequences, particularly concerning equality and non-discrimination. It’s equally crucial to confront the prospect of AI systems engendering intersectional discrimination and facilitating biased treatment or false associations.
Key Points to Consider:
- Regulatory bodies in Nigeria should ensure that AI systems deployed under their purview do not inadvertently foster illegal discrimination, harmful stereotypes (including those centered on gender), and wider societal inequalities. Thus, exercising utmost caution when endorsing or employing AI systems in sensitive public policy domains like law enforcement, justice, asylum, and migration is imperative. Rigorous testing and validation of AI systems should be conducted before implementation, and these processes should persist throughout the system’s lifecycle, facilitated by regular audits and reviews.
- Regulators should establish comprehensive rules to counteract potential discriminatory impacts arising from AI systems, whether employed in public or commercial sectors. These stipulations must safeguard individuals from the adverse consequences of such systems, proportionate to the associated risks. These regulations should span the entire lifecycle of an AI system, encompassing tasks such as bridging gender data gaps; ensuring data sets’ representativeness, quality, and accuracy; optimizing algorithm design and system use; and rigorous testing and evaluation to identify and mitigate discrimination risks. Ensuring the transparency and auditability of AI systems is crucial to detecting biases throughout their lifespan.
- Regulatory bodies should actively advocate for diversity and gender balance within the AI workforce, encouraging consistent input from a diverse array of stakeholders. Enhancing awareness about the risks of discrimination, encompassing novel forms of bias within the context of AI, is paramount.
- Regulatory bodies should endorse the implementation of AI systems where they have the potential to counteract biases intrinsic to both human and machine decision-making processes. This approach can facilitate equitable and unbiased outcomes.
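One concrete audit the bullets above call for is a demographic parity check: comparing favourable-outcome rates across groups over the system's lifecycle. The sketch below illustrates the arithmetic; the tolerance value is an arbitrary illustration, not a legal or statistical standard, and parity is only one of several bias metrics an auditor might apply.

```python
# Illustrative lifecycle bias audit: demographic parity across groups.
# Outcomes are encoded 1 = favourable, 0 = unfavourable.
# The tolerance value is an assumption, not a legal standard.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(outcomes_by_group: dict) -> float:
    """Largest difference in favourable-outcome rate between any two groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def passes_parity_audit(outcomes_by_group: dict, tolerance: float = 0.1) -> bool:
    return demographic_parity_gap(outcomes_by_group) <= tolerance
```

Logging this gap at regular intervals, rather than only at deployment, is what turns a one-off fairness test into the continuous audit the text recommends.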
Promoting Transparency, Explainability, and Clarity in AI Systems
When there is a lack of disclosure regarding the utilization of an AI system within a product or service, and the criteria upon which it operates, assessing the potential infringement of human rights, democratic principles, and the rule of law becomes a complex or unattainable task. Moreover, without this essential information, the validity of decisions influenced or made by an AI system cannot be effectively challenged, nor can the system be enhanced or rectified in cases of harm. Thus, transparency becomes a pivotal factor in accommodating other principles and rights, such as the right to seek a viable remedy when violations occur, encompassing the right to scrutinize and rectify decisions influenced by AI. Consequently, the tenets of transparency and comprehensibility hold paramount importance in the realm of AI, particularly when a system holds the potential to undermine human rights, democracy, or the rule of law. Despite this, existing legal mechanisms inadequately safeguard these principles.
Transparency encompasses rendering AI operations traceable, possibly through documentation or recording, along with furnishing pertinent insights into the system’s capabilities, constraints, and objectives. These insights must be tailored to suit the context and target audience. Regulatory bodies and law enforcement agencies should devise methodologies enabling impartial and effective audits of AI systems, permitting a substantive evaluation of their impact. Individuals directly affected by decisions predominantly influenced by or executed by an AI system must receive timely alerts and access to the aforementioned information.
Furthermore, individuals should be well-informed about the processes driving decisions that impact them. While providing a rationale for every specific outcome generated by a system might not always be feasible, ensuring the system’s suitability under such circumstances is crucial. Balancing commercial proprietary information and intellectual property rights against other legitimate interests becomes imperative. Instances of potential non-compliance with prevailing regulations should empower public authorities to audit AI systems. The technical requisites of transparency and comprehensibility must not unduly hinder market prospects, particularly in scenarios where threats to human rights, democracy, and the rule of law are less evident. A risk-based approach should be adopted, striking an equitable equilibrium to avert or minimize the risk of consolidating major market players or stifling innovative and socially beneficial research and product development.
- Nigerian regulatory bodies ought to mandate AI system developers and implementers to establish effective communication channels. Users should be aware of their entitlement to human assistance whenever they engage with an AI system that could impact their rights, especially concerning public services. Knowledge about obtaining such assistance should be readily accessible.
- In instances where the deployment of AI systems jeopardizes human rights, democracy, or the rule of law, law enforcement agencies and legislators should impose obligations regarding traceability and information sharing on AI developers and implementers. Relevant parties with legitimate interests, such as customers, citizens, oversight agencies, or others, should have uncomplicated access to contextually pertinent information about AI systems. This information should be intelligible and easily accessible, encompassing elements like the types of decisions or circumstances susceptible to automated processing, pivotal decision-making criteria, data utilization details, and a comprehensive depiction of the data collection process. An overview of the potential legal or other repercussions of the system should be accessible for assessment or review by expert independent bodies.
- In the case of children or other vulnerable populations engaging with AI systems, a heightened level of caution is essential. Nigerian legislators and law enforcement should enforce documentation prerequisites for AI developers and implementers. AI systems with the potential to adversely impact human rights, democracy, or the rule of law should be traceable and amenable to auditing. The datasets and methodologies leading to the AI system’s judgments, encompassing data collection, labeling procedures, and utilized algorithms, should be meticulously documented, allowing for retrospective auditability of the system. Documentation approaches that are both qualitative and effective must be instituted.
- Nigerian regulatory authorities should make all pertinent information about AI systems employed in providing public services (including operational mechanisms, optimization approaches, underlying logic, and data utilization methodologies) publicly accessible and easily discoverable, while safeguarding legitimate interests like public safety and intellectual property rights and ensuring full observance of human rights.
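The traceability and documentation obligations above imply that each AI-assisted decision should leave an auditable record of its inputs, criteria, and model version. The sketch below shows one hypothetical shape for such a record, with a content hash for tamper-evidence; the field names are assumptions, not a prescribed schema.

```python
# Hypothetical traceability record for an AI-assisted decision, supporting
# the retrospective auditability described above. Field names are assumptions.

import datetime
import hashlib
import json

def audit_record(system_id: str, model_version: str,
                 inputs: dict, decision: str, criteria: list) -> dict:
    record = {
        "system_id": system_id,
        "model_version": model_version,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,                # data utilization details
        "decision": decision,
        "decision_criteria": criteria,   # pivotal decision-making criteria
    }
    # Tamper-evidence: hash of the canonical record content.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """An auditor recomputes the hash to confirm the record is unaltered."""
    content = {k: v for k, v in record.items() if k != "sha256"}
    payload = json.dumps(content, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["sha256"]
```

A per-record hash gives tamper-evidence, not tamper-proofing; chaining hashes or writing them to an append-only store would strengthen the guarantee if an oversight body required it.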
Upholding Data Protection and the Right to Privacy
The right to privacy stands as an extension of the broader right to private and familial life, garnering specialized safeguarding within the realm of personal data. This right is also integral for the realization of other fundamental human rights. Consequently, the creation, evolution, training, testing, utilization, and evaluation of AI systems reliant on personal data processing must effectively preserve an individual’s entitlement to privacy and family life. This encompasses the “right to informational self-determination” concerning their data. Individuals should possess both access to and authority over their personal data. While consent isn’t the sole legal foundation for personal data processing, it bears paramount significance in this scenario. However, for consent to hold validity, it must be informed, specific, freely given, and unambiguous. The presence of power dynamics or informational imbalances could compromise the voluntary nature of consent, underlining its limitations in certain contexts and underscoring the demand for more robust legal underpinnings for processing in specific cases.
Nigerian regulatory bodies are duty-bound to effectively enforce the prevailing data protection policy, ensuring the safeguarding of individuals concerning automated processing of personal data, in addition to adhering to other binding international data protection and privacy protocols. Not all AI systems involve the processing of personal data. Even in instances where AI systems are not explicitly designed for handling personal data and instead operate on anonymized, non-identifiable, or non-personal data, the distinction between personal and non-personal data is growing increasingly indistinct. Therefore, it becomes imperative to delve deeper into the interplay between personal and non-personal data to rectify any potential legal gaps in protection. Machine learning algorithms possess the capability to deduce personal information about individuals, including sensitive details, from anonymized or anonymous data, or even from data pertaining to other individuals. This underscores the necessity for heightened precautions in shielding individuals from inferred personal data.
Finally, regardless of the potential benefits derived from deploying a particular AI system, any encroachment upon the right to privacy, particularly by governmental entities, must be in accordance with the law, especially in situations where conflicting fundamental rights are at stake. This is a prerequisite in a democratic society.
- Nigerian regulatory authorities should ensure the preservation of the right to privacy and data protection across the entire lifecycle of AI systems implemented either by them or by commercial entities. Fair and transparent handling of personal data must be maintained at all stages, including during data set usage. This encompasses principles of fairness, transparency, proportionality, the legality of processing, data accuracy, protection against solely automated decisions, other data subject rights, data security, accountability, impact assessments, and privacy by design.
- Specific measures should be taken by Nigerian regulators to shield individuals from AI-driven mass surveillance, including remote biometric recognition technology or other AI-powered tracking mechanisms, as these practices run counter to human rights, democratic values, and rule of law standards. In such scenarios, the government could contemplate introducing supplementary regulations governing the restricted and controlled deployment of such technology, and potentially even imposing bans or moratoriums to ensure the protection of human rights.
- While procuring or implementing AI systems, Nigerian regulators should evaluate and mitigate potential negative impacts on the rights to privacy, data protection, and the broader domain of private and familial life. This entails assessing the extent of intrusion posed by the system in relation to its intended objectives and gauging the necessity of the system for achieving those goals.
- Nigerian regulatory bodies should explore the development and application of AI solutions that leverage (personal) data for the advancement and preservation of human rights, such as the right to life (evident in AI-driven evidence-based medicine). In doing so, they must ensure that the automated processing of personal data adheres to the stipulations of the Data Protection Act.
- Given the pivotal role of data in AI, Nigerian regulators should establish appropriate safeguards for cross-border data transfers to ensure compliance with data protection regulations.
Establishing Accountability and Responsibility
It is imperative for both individuals and entities, whether operating in the public or private sector, involved in the conception, creation, deployment, or assessment of AI systems, to acknowledge their responsibility for these systems. They should be prepared to face consequences when breaches of legal norms or instances of unjust harm to end-users or others occur based on these principles. This underscores the necessity of ensuring that AI systems adhere to Nigerian standards for human rights, democratic values, and the rule of law throughout their lifecycle—from conception and construction to deployment and utilization.
To accomplish this objective, Nigerian regulatory bodies must take proactive measures. These measures might involve instituting civil or criminal liabilities in cases where the design, development, or utilization of AI applications infringe upon human rights or undermine democratic processes and the rule of law. Crucially, vigilance is required to identify, investigate, document, and mitigate potential adverse impacts of AI systems. Additionally, safeguards should be in place to protect individuals reporting such impacts, such as whistleblowers. Effective mechanisms for public oversight and control, built on a risk-based approach, are essential to ensure that AI developers and implementers comply with pertinent legal stipulations. These mechanisms should empower state authorities to intervene if compliance is lacking.
Conversely, it is incumbent upon Nigerian regulators to ensure that individuals or entities harmed by AI systems have accessible and efficient avenues to seek redress against the creators or users of these systems. These remedies should be clearly communicated, with particular attention paid to marginalized or vulnerable populations. Adequate remedies should encompass restitution for suffered damages and may involve procedures within civil, administrative, or, when warranted, criminal frameworks. Given the diverse applications of AI, tailored solutions are necessary for each context. This includes addressing illegal conduct and committing to non-repetition, as well as rectifying incurred harm and adhering to fundamental tenets of anti-discrimination legislation concerning evidence-sharing and burden-shifting.
- Nigerian regulatory bodies must guarantee the availability of effective remedies within their respective national jurisdictions, encompassing both civil and criminal dimensions. They should ensure that those adversely affected by AI applications can easily restore their infringed rights.
- In this context, exploring the implementation of class action mechanisms in cases where AI systems cause harm could be considered. Additionally, adherence to fundamental anti-discrimination principles, including equitable allocation of the burden of proof, should be ensured.
- Nigerian legislators should establish avenues for the public to monitor AI systems that might transgress human rights, democratic principles, or the rule of law.
- Regulators should mandate AI system developers and implementers to recognize, document, and report potential negative impacts of these systems on human rights, democracy, and the rule of law. Furthermore, these stakeholders must adopt appropriate measures to hold accountable any party responsible for harm caused.
- Nigerian regulatory authorities should institute measures facilitating the audit of private AI systems by public entities, thus ensuring conformity with legal requirements and imposing liability on private actors.
Protecting Democracy against AI Threats
To uphold the integrity of democratic decision-making processes, the preservation of pluralistic values, unfettered access to information, and the autonomy integral to democratic ideals, it is imperative to establish effective, transparent, and universally accessible oversight mechanisms. These mechanisms are particularly crucial in shielding economic and social rights that may be jeopardized by the emergence of AI technologies.
Whenever feasible and appropriate, Nigerian regulatory bodies should ensure a participatory approach that engages diverse stakeholders including civil society, private sector entities, academia, and media, in the deliberations concerning the implementation of AI systems in the public sector. A focal point of attention should be the inclusion of marginalized and vulnerable demographics. Such inclusivity is pivotal in fostering trust in AI technology and its responsible integration.
The utilization of AI systems also carries the potential to disrupt electoral processes by reinforcing the dissemination of false information and coordinated deceptive behaviors, thereby challenging the principles of equitable elections and the freedom of voters to form opinions. Maintaining adherence to Nigerian electoral norms is of paramount importance.
While AI technologies can enhance the efficiency of public institutions, there is a risk of compromising transparency, human agency, and oversight. Furthermore, public entities often rely on private entities to acquire and deploy AI systems, amplifying the challenge of ensuring accountability, independent supervision, and public scrutiny, particularly in the presence of opaque AI systems. Therefore, an effective governance framework for AI should facilitate responsible development and deployment in accordance with the law, while also permitting appropriate remedies and state intervention when necessary.
Embedding principles such as fairness, accountability, transparency, and equality within public procurement procedures pertaining to AI is indispensable. Strengthening these through legislative safeguards serves a dual purpose: ensuring that only mechanisms aligned with human rights, democracy, and the rule of law are employed by governments, and creating economic incentives for private sector actors to design and implement systems that adhere to these principles. Given the higher standard of transparency and accountability expected from AI systems in public services, procurement from third parties failing to meet legal information obligations or unwilling to waive information restrictions hindering human rights impact assessments should be avoided.
- Nigerian regulatory bodies must take measures to prevent the illicit use of AI systems in electoral processes, targeted political manipulation lacking transparency and accountability, and any endeavors contravening laws protecting human rights, democracy, and the rule of law.
- Strategies and measures should be devised to combat misinformation and identify online hate speech, thereby ensuring equitable information dissemination.
- Rigorous oversight mechanisms should be applied to the public procurement of AI systems, encompassing legally binding requirements that uphold the principles of transparency, fairness, accountability, and responsibility.
- Adequate monitoring of AI systems employed in the public sector is necessary, with recourse to ombudspersons and the courts. Public sector entities using AI systems should be subject to supervision, intervention, and coordination to ensure alignment with human rights, democracy, and the rule of law. Expert input from various fields should guide government usage of AI systems to illuminate potential effects on administrative governance and citizen-state relations.
- Vital information concerning AI systems utilized in delivering public services should be easily accessible to the public, while legitimate interests are safeguarded. This encompasses operational details, optimization methods, underlying logic, and data types employed.
- Efforts should be made to enhance digital literacy across all sectors of the population. Adjustments in educational programs should foster a culture of responsible creativity grounded in human rights, democracy, and the rule of law.
- Encouraging the adoption of AI solutions and tools that empower citizens’ informational autonomy, enhance political engagement, combat corruption, and augment democratic institutions can cultivate trust and improve public service provision. Throughout this endeavor, the principles of human rights, democracy, and the rule of law must remain sacrosanct.
Defending the Rule of Law from AI Threats
It remains evident that the integration of AI systems within judicial frameworks holds the potential to enhance the efficiency of legal systems, but it also introduces substantial risks to the rule of law. Drawing insights from the European Ethical Charter concerning the utilization of AI within the judicial domain and its surrounding context, it is imperative that when AI tools are employed to settle disputes, assist in judicial decision-making, or offer public guidance, they must not compromise the essential guarantees of the right to access a judge and the right to a fair trial. This necessitates the preservation of the principles of equal legal standing and the adversarial process. Furthermore, the integration of AI systems should not undermine the court’s ability to dispense justice impartially and independently.
To achieve these objectives, the regulatory framework in Nigeria should duly consider the significance of upholding the quality and security of court rulings and data, as well as ensuring transparency, impartiality, and fairness in data processing methodologies. Additionally, provisions for the accessibility and comprehensibility of data processing procedures, alongside the feasibility of external audits, should be incorporated. Consequently, Nigerian regulators must closely scrutinize the deployment of AI systems within the judicial system, ensuring their alignment with the aforementioned standards.
In situations where legal disputes emerge in the context of AI system utilization within the private or public sectors, individuals initiating claims pertaining to infringements or damages resulting from AI system usage must be granted access to pertinent information held by the defendant or any relevant third party. This access to pertinent information is particularly pivotal when AI systems are employed to support judicial decision-making, as it establishes a foundation for parity between the involved parties. This encompasses data derived from training and testing, insights into AI system utilization, transparent and understandable elucidation of how AI systems generate recommendations, decisions, or forecasts, as well as particulars regarding the comprehension and application of the system’s outputs.
In this context, a judicious equilibrium must be struck between the varied legitimate interests of the involved parties. This may encompass considerations of national security in instances where AI systems are publicly deployed, while still upholding intellectual property and other rights, all while steadfastly safeguarding human rights. Moreover, individuals seeking recourse for alleged human rights violations stemming from AI system applications should not encounter undue hurdles in substantiating their claims.
- Nigerian regulators have the responsibility to ensure that the integration of artificial intelligence (AI) systems within the realms of justice and law enforcement adheres unwaveringly to the foundational principles of the right to a fair trial. To achieve this, they should take into account the need to preserve the quality and security of court judgments and data, alongside fostering transparency, impartiality, and fairness in data processing methodologies. Safeguards should also be instituted to ensure that data processing procedures remain accessible and comprehensible, and that external audits remain feasible.
- Nigerian regulators must ensure that individuals whose rights are transgressed by the conception or application of AI systems in contexts where the rule of law prevails have accessible and effective avenues for the restoration of their rights.
- Nigerian regulators should provide pertinent information to citizens concerning the utilization of AI systems within the public sector, particularly when such employment bears significant consequences for individual lives. This information assumes paramount importance in instances where AI systems are incorporated into justice and law enforcement, encompassing both the modus operandi of AI systems within these processes and the means through which individuals can challenge decisions influenced by or originating from AI systems.
- Nigerian regulators are tasked with the duty of ensuring that the integration of AI systems neither impedes judges’ capacity to issue verdicts nor compromises the independence of the judiciary. Additionally, it is crucial that all judicial determinations remain subject to human review.
Roles of Public and Private Actors in Regulation of Artificial Intelligence (AI) in Nigeria
The utilization of AI systems by both public and private entities can significantly impact human rights, democracy, and the rule of law. In Nigeria, it is imperative for regulators to not only safeguard human rights within the public sphere but also assume the responsibility of ensuring that private actors adhere to human rights standards. This obligation is reinforced by international guidelines, such as the UN Guiding Principles on Business and Human Rights, which emphasize the necessity for private companies to uphold human rights.
The preceding section underscored the responsibilities of regulators in upholding Nigerian human rights, democracy, and rule of law standards within the context of AI systems. National authorities are tasked with conducting thorough evidence-based assessments of domestic legislation to ascertain its alignment with human rights principles and its capability to safeguard these rights. Remedial actions should be taken to address any identified legal gaps. Furthermore, the establishment of regulatory frameworks and the provision of appropriate legal recourse mechanisms are crucial in cases where the development and utilization of AI lead to legal violations. To facilitate this, national oversight bodies should possess the capacity to audit and assess the functioning of AI systems, whether public or commercial, especially when indicators of non-compliance are evident. These oversight efforts complement existing legal frameworks such as data protection laws, incorporating elements like the accountability principle, impact assessments, and engagement with supervisory agencies, thereby enhancing transparency. It is worth considering preserving a degree of confidentiality in situations involving privacy or intellectual property concerns.
A notable point is that a considerable number of public entities acquire AI systems from private actors, relying on these private entities to provide essential data for AI deployment and access to the underlying infrastructure. Hence, private actors assume a pivotal role in this landscape and are accountable for designing and employing systems that adhere to principles, rights, and societal demands. Given that the interests of commercial private actors and those of individuals and society might not always align, it could be fitting to establish a legal framework that compels private entities to adhere to specific rights and obligations in the realm of AI. This becomes particularly relevant when the risk of conflicting interests arises, ensuring avenues for seeking redress if commitments are not upheld.
In the pursuit of safeguarding the enumerated principles, rights, and obligations concerning AI, Nigerian regulators are advised to adopt a risk-based strategy, augmented by a precautionary approach when necessary. This strategy acknowledges that not all AI systems pose equal risks, and regulatory interventions should reflect this diversity. It entails a systematic evaluation of the risks AI systems pose to human rights, democracy, and the rule of law, followed by tailored countermeasures.
To implement a risk-based strategy and determine appropriate regulatory actions to mitigate risks, regulators can draw guidance from a range of parameters commonly utilized in risk-impact assessments. These factors encompass the potential scope of adverse impacts on human rights, democracy, and the rule of law; the likelihood of such impacts occurring; the extent and pervasiveness of the impacts; their geographical and temporal reach; and the potential reversibility of adverse effects. Moreover, specific AI-related factors influencing risk levels can be taken into account, including the level of automation within the program, the underlying AI technology, the availability of testing tools, and the degree of transparency.
Safety and Liability Concerns in the Realm of Artificial Intelligence
The emergence and utilization of artificial intelligence (AI) systems present novel safety and liability challenges. There exists a divergence of opinions regarding the applicability of existing liability frameworks versus the creation of specialized regimes tailored for the AI context. However, it is noteworthy that the extensive deployment of AI systems can introduce complexities in construing and executing prevailing liability statutes. To illustrate, the current legislation concerning Product Liability in Nigeria seemingly pertains to AI systems categorized as tangible products (hardware), excluding software, and solely encompasses AI systems positioned as products, not services. In light of this, it might be advisable to clarify that standalone software can be categorized as a product within the ambit of the prevailing product liability law.
The opacity inherent to certain AI systems, coupled with the information asymmetry between AI developers and individuals potentially adversely affected by these systems, can impede the latter’s ability to meet the requisite burden of proof for substantiating damage claims in specific cases. However, by and large, the established allocation of the burden of proof can offer pertinent and rational resolutions concerning AI systems.
In the event that Nigerian regulators decide to address liability concerns in an upcoming legal framework, the following facets warrant consideration:
- Establishment of a Balanced Liability Regime: Striking a fair and equitable liability framework is crucial for both consumers and manufacturers, fostering legal certainty.
- Equitable Protection for Harmed Parties: Ensuring equivalent safeguarding for individuals harmed by AI systems as for those harmed by conventional technologies holds paramount importance.
- Comprehensive Lifespan Liability: Liability for undue harm should be applicable across the entire lifespan of the AI system.
- Contextual Distinctions: Distinct allocation of liability in business-to-consumer and business-to-business scenarios may necessitate contractual agreements for liability among business entities, rather than enforcing a specific liability regime.
- Addressing Trans-Border Responsibility: The issue of trans-border liability comes into play, particularly when a company employing an AI system is registered in one state, the system’s developer in another, and a user experiencing harm resides in a third state.
- Integration of Ethical Codes of Conduct: In specific sectors, industry-specific ethical codes of conduct might complement liability regulations, enhancing public trust in sensitive AI domains.
- Private Actor Due Diligence: The extent to which private entities ensure and invest in due diligence mechanisms can serve as a pertinent factor in assessing private actors’ liability and the burden of proof.
In conclusion, the landscape of Artificial Intelligence (AI) regulation in Nigeria is a tapestry of strategic initiatives, legal frameworks, and ethical considerations aimed at harnessing the transformative power of AI for sustainable growth while safeguarding human rights, democracy, and the rule of law. Nigeria’s commitment to advancing AI technology is evident through the establishment of institutions like the National Centre for AI and Robotics (NCAIR), the Nigeria Data Protection Regulation (NDPR), and the Cybercrimes (Prohibition and Prevention) Act of 2015. The country’s burgeoning AI ecosystem, bolstered by private entities and startups, further exemplifies its readiness to embrace AI-driven innovation.
The imperative for AI regulation extends into the realms of justice and law enforcement, demanding transparency, equity, and accountability. Collaborative efforts involving domain experts, public servants, and stakeholders are essential to navigate the complexities introduced by AI in legal proceedings. This article emphasizes the need for comprehensive training, understanding, and ethical considerations in adopting AI-powered tools within the legal sector.
As Nigeria works toward formulating its National Artificial Intelligence Policy (NAIP), the overarching aim should be safeguarding the nation against AI-driven attacks, and promoting compliance across public and private sectors. The development of a robust legal framework, aligned with national principles, human rights, and democracy, is pivotal to guiding the responsible and ethical integration of AI technology. This framework should balance innovation with potential risks, ensuring a harmonious coexistence between AI advancements and societal values.
Ultimately, Nigeria stands on the brink of an era where AI’s potential is embraced, harnessed, and governed by a comprehensive framework that upholds fundamental rights, encourages innovation, and positions the nation as a vanguard in responsible AI implementation. The journey involves not only policy development but also collaboration, education, and continuous evaluation to shape an AI landscape that empowers the nation while safeguarding its citizens.
Artificial Intelligence (AI) & Cybersecurity Thought-Leader