When Artificial Intelligence (AI) Goes Wrong: Real-Life Cases and Regulatory Implications of the Negative Effects of AI in Nigeria

Artificial Intelligence (AI) has rapidly grown in recent years, becoming an integral part of everyday life. However, despite the many benefits of AI, the technology also poses significant risks and challenges that need to be addressed. In Nigeria, the potential impact of AI is enormous, but so are the risks and threats.

The negative impacts of AI must be carefully considered, and policies and regulations should be developed to mitigate these risks. In this article, we illustrate practical cases and real-life examples of AI gone wrong and its negative impacts in Nigeria, and we discuss the implications for the regulation of AI in the country.

Real-life Examples of AI Gone Wrong and its Negative Impact

We will explore some of the instances where AI has gone wrong, leading to negative consequences for individuals and society at large. This section sheds light on the darker side of AI and its potential to cause harm. By examining these examples, we can better understand the need for effective regulation and oversight of AI technologies to mitigate these risks and ensure that AI is developed and used in a responsible and ethical manner.

Deepfakes

Deepfakes are videos or images that have been manipulated using AI technology to create a false representation of reality. Such content can be used for malicious purposes, such as spreading fake news or propaganda.

There have been several reported incidents of deepfakes being used for malicious purposes in Nigeria and other countries. Here are some notable examples:

  • In Nigeria, deepfake videos were used during the 2019 presidential election to spread misinformation and fake news. One video showed the opposition candidate, Atiku Abubakar, allegedly speaking at a public event and promising to “grant amnesty” to Boko Haram terrorists if elected. The video was later proven to be a deepfake.
  • In India, a deepfake video was used to falsely depict a politician, Manoj Tiwari, as making derogatory comments about a female politician. The video was widely circulated on social media before being exposed as a deepfake.
  • In the United States, a deepfake video was created of House Speaker Nancy Pelosi appearing to slur her words and speaking incoherently. The video was shared on social media and viewed millions of times before being exposed as a deepfake.
  • In South Korea, a deepfake video was created of a famous news anchor, which appeared to show her endorsing a controversial medical product. The video was widely shared on social media, leading to public outrage and calls for stricter regulations on deepfakes.

These incidents demonstrate the potential harm that can be caused by deepfakes, particularly in the context of political manipulation and the spread of misinformation.

Deepvoices

"Deepvoices" are AI-generated audio clips that scammers use to impersonate someone else in order to gain access to private information or authorize financial transactions. There have been only a few reported cases of deepvoice fraud in Nigeria and other countries, but the potential for this type of AI technology to be used for fraudulent purposes is a real concern.

  • In 2019, a UK-based energy company was scammed out of approximately $243,000 after fraudsters used AI-generated audio to mimic the voice of the company’s CEO and request an urgent money transfer. This incident highlights the vulnerability of businesses to this type of fraud and the need for increased awareness and security measures to protect against it.

As AI technology advances, the risk of deepvoice fraud is likely to increase, making it important for individuals and organizations to remain vigilant and take appropriate precautions.

Online Scams and Fraud

AI technology has been used to create sophisticated online scams and frauds that are difficult to detect. These scams can result in significant financial losses for individuals and businesses. In Nigeria, online scams and frauds are prevalent, and AI technology is increasingly being used to create more sophisticated and convincing scams.

There have been several cases and incidents of AI-powered online scams and frauds in Nigeria and other countries. Here are some examples:

  • In 2020, a Nigerian fraudster was arrested for using AI-powered voice technology to impersonate the voices of senior executives and steal more than $8 million from a UK-based energy company.
  • In 2019, a group of cybercriminals in Nigeria were arrested for using AI-powered software to bypass security measures and steal more than $100 million from businesses and individuals around the world.
  • In 2020, a group of fraudsters in South Africa were arrested for using AI-powered voice technology to impersonate bank employees and steal money from customers’ accounts.
  • In 2018, a group of fraudsters in the United States were arrested for using AI-powered software to create fake social media accounts and scam people out of millions of dollars.

These examples highlight the growing trend of AI-powered online scams and frauds, which can be difficult to detect and prevent. It is important for individuals and businesses to be vigilant and take steps to protect themselves from these types of threats.

Job Displacement

One of the potential impacts of AI is job displacement, as machines and robots can perform tasks previously carried out by humans. This displacement can have significant economic and social consequences, particularly in countries with high unemployment rates. In Nigeria, there are concerns about the impact of AI on the workforce and the need to create new economic opportunities for displaced workers.

Beyond job displacement, there have been many other instances of AI gone wrong in Nigeria and elsewhere. Here are some examples:

Facial Recognition Bias

There have been several incidents of AI gone wrong due to facial recognition biases:

  • In 2018, the American Civil Liberties Union (ACLU) conducted a test of Amazon’s facial recognition software, Rekognition, and found that it falsely matched 28 members of Congress to mugshot photos. The false matches were disproportionately people of color, including African Americans and Hispanics.
  • In 2019, it was reported that the London Metropolitan Police’s facial recognition technology had a false positive rate of 81%. The technology was used during a trial period to identify people on a watchlist, but it wrongly identified innocent members of the public as potential criminals.
  • In 2020, the Detroit Police Department used facial recognition software to arrest a man for a crime he did not commit. The software had matched his driver’s license photo with surveillance footage of a shoplifter, leading to his arrest and wrongful detention for 30 hours.
  • In 2019, a study by the National Institute of Standards and Technology (NIST) revealed that facial recognition software has higher rates of false positive identifications for people of color, including Nigerians. This bias can lead to wrongful arrests and unjust treatment by law enforcement agencies.
  • In 2020, the Lagos State Government announced plans to use facial recognition technology to enhance security, but this move has been met with concerns about the privacy and security implications of such technology.
  • In 2020, it was reported that the Nigerian government had used facial recognition technology to identify peaceful protesters and track their movements during the #EndSARS protests. The technology, which was supplied by a Chinese company, was criticized for violating the protesters’ privacy and freedom of speech.
  • In 2021, it was reported that the New York Police Department’s facial recognition software had a false positive rate of 0.1%, which may seem low but, given the volume of searches run each year, can still translate into hundreds of innocent people being falsely identified as suspects.

These incidents demonstrate the potential dangers of facial recognition technology and the need for better regulation and oversight to prevent biases and protect civil liberties.

AI-Based Hiring Discrimination

In Nigeria and other parts of the world, companies are using AI-powered software to filter job applications. However, these systems have been shown to discriminate against certain groups, including women and people of color. Here are a few instances:

  • In 2019, Amazon came under fire for an AI-based hiring tool that discriminated against women. The system was trained on resumes submitted to the company over a 10-year period, which were mostly from men, resulting in the algorithm favoring male applicants. Amazon eventually scrapped the tool after discovering the bias.
  • In Nigeria, a study conducted in 2020 by the Center for Social Awareness, Advocacy and Ethics found that AI-powered recruitment tools were biased against candidates from certain regions and ethnic groups. The study analyzed the algorithms used by three Nigerian recruitment platforms and found that they had a bias towards candidates from certain regions, particularly the South-West, and those from specific ethnic groups, such as the Yoruba.
  • Another case is that of HireVue, a US-based company that uses AI to screen job applicants through video interviews. The company was criticized in 2019 for perpetuating racial and gender biases, as the AI system was found to be favoring white and male applicants.

These incidents highlight the importance of ensuring that AI-based hiring tools are developed with fairness and impartiality in mind and are regularly audited to prevent discrimination.

Misinformation Amplification

Social media platforms in Nigeria, including Twitter and Facebook, have been accused of amplifying racism, misinformation, and fake news. AI algorithms used by these platforms can prioritize controversial or sensational content over accurate information, leading to the spread of harmful rumors and conspiracy theories.
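To make this amplification mechanism concrete, here is a minimal, purely illustrative Python sketch of an engagement-driven feed ranker. The post fields, weights, and scores are assumptions invented for this illustration and do not describe any platform's actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model's estimate of clicks/shares/comments (0-1)
    accuracy_score: float        # fact-check / source-credibility signal (0-1)

def rank_feed(posts, engagement_weight=0.9, accuracy_weight=0.1):
    """Order posts by a score dominated by predicted engagement.

    Because sensational or false claims often attract more engagement,
    a ranker tuned this way can push them above accurate reporting.
    """
    def score(post):
        return (engagement_weight * post.predicted_engagement
                + accuracy_weight * post.accuracy_score)
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("Verified public-health update", predicted_engagement=0.3, accuracy_score=0.95),
    Post("Shocking unverified rumour", predicted_engagement=0.9, accuracy_score=0.10),
]
for post in rank_feed(feed):
    print(post.text)
# The unverified rumour ranks first because the scoring rule rewards
# predicted engagement far more heavily than accuracy.
```

In this toy setup the sensational but inaccurate post outranks the verified one simply because engagement dominates the score, which is the dynamic at work in the cases below.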

  • One example of AI Gone Wrong due to Misinformation amplification in Nigeria is the spread of misinformation during the COVID-19 pandemic. In March 2020, a video circulated on social media claiming that the herb, Vernonia amygdalina, also known as bitter leaf, could cure COVID-19. This video was shared widely, leading to a surge in demand for the herb and causing prices to increase dramatically.
  • In another instance, during the #EndSARS protests in Nigeria, there were reports of fake news and misinformation being spread on social media platforms, such as Twitter and Facebook. This included fake news about attacks on religious groups and ethnic violence, which heightened tensions and contributed to the violence that erupted during the protests.
  • In India, WhatsApp was used to spread fake news and misinformation about the COVID-19 pandemic, leading to widespread panic and a surge in false cures and treatments. In one case, a man died after consuming a cocktail of chemicals based on a WhatsApp message that claimed it could cure COVID-19.
  • In the United States, during the 2016 presidential election, Russian operatives used social media platforms to spread fake news and misinformation, with the aim of influencing the election outcome. This included false stories about Hillary Clinton and other political figures.

These cases demonstrate the potential harm that can arise from the amplification of misinformation by AI algorithms on social media platforms.

Bias in Financial Institutions

There have been several cases and incidents of AI Gone Wrong due to bias in financial institutions in Nigeria and other countries. Some examples are:

  • In 2020, the Wall Street Journal reported that an AI-powered loan program used by JPMorgan Chase was biased against black and Latino borrowers. The program, which was designed to automate the loan approval process, was found to have systematically charged these borrowers higher interest rates than white borrowers with similar credit profiles.
  • In 2010, a flash crash was caused by a high-frequency trading algorithm that triggered a massive sell-off in the stock market. The algorithm, which was designed to respond to market conditions, made a mistake and ended up selling billions of dollars’ worth of stock in a matter of minutes, causing the market to plummet.
  • In 2020, a group of US lawmakers sent a letter to the Consumer Financial Protection Bureau (CFPB) expressing concern that AI-based lending algorithms could perpetuate discrimination against minority borrowers. The concern was that these algorithms could rely on biased data to make lending decisions, which could result in people of color being unfairly denied loans.
  • In 2021, the Nigerian Securities and Exchange Commission (SEC) warned investors to be cautious of cryptocurrency investment schemes that use AI-based trading bots. The SEC expressed concern that these bots could be used to manipulate the market and defraud investors.
  • In 2021, it was reported that a Nigerian fintech company used AI to approve loans for customers without their knowledge or consent, leading to accusations of fraud and misconduct.
  • In 2018, the Royal Bank of Canada (RBC) was accused of using an AI hiring tool that was biased against women. The tool was designed to scan resumes and identify candidates that were the best fit for the job, but it reportedly preferred male candidates. The bank eventually discontinued the use of the tool.
  • In 2020, a study by the National Bureau of Economic Research found that algorithmic trading in financial markets can amplify market volatility and lead to sudden price movements. This can have serious consequences for investors and destabilize financial markets.
  • In 2021, it was reported that the Bank of America had been accused of discriminating against minority borrowers by using biased algorithms to make lending decisions. The algorithms were reportedly more likely to deny loans to people of color than to white borrowers, even when they had similar credit profiles.

These cases demonstrate the need for greater transparency and oversight when it comes to the use of AI in financial institutions. It is important to ensure that these algorithms are not perpetuating biases or discriminating against certain groups of people.

Autonomous Vehicle Accidents

In the automotive industry, self-driving cars have had their share of accidents. Autonomous vehicles are equipped with AI technology to operate without human intervention. In Nigeria, the use of autonomous vehicles is still in its early stages, but there are concerns about the safety and security of such vehicles. To date, there have been no reported incidents of autonomous vehicle accidents in Nigeria, as the technology is still in its infancy in the country. However, there have been several incidents of autonomous vehicle accidents in other countries, including:

  • In 2018, a self-driving Uber car struck and killed a pedestrian in Tempe, Arizona. The car was operating in autonomous mode at the time of the accident, but the safety driver behind the wheel was distracted and failed to intervene in time to prevent the crash.
  • In 2019, a Tesla Model 3 on autopilot crashed into a truck in Delray Beach, Florida, killing the driver. The National Transportation Safety Board (NTSB) found that the car’s autopilot system was engaged at the time of the accident, and the driver did not take control of the car despite several warnings from the system.
  • In 2021, a Tesla Model S crashed into a tree and burst into flames in Spring, Texas, killing two passengers. According to police, there was no one in the driver’s seat at the time of the accident, and the car’s autopilot system was not engaged. The incident is still under investigation.

These incidents highlight the challenges of developing safe and reliable autonomous vehicle technology and the need for rigorous testing and safety protocols.

Inaccurate Health Diagnoses

There have been instances of AI gone wrong due to inaccurate health diagnoses, leading to potential harm to patients. Here are some examples:

  • Babylon Health: In 2018, the UK-based health startup Babylon Health was criticized for claims that its AI-powered chatbot could diagnose medical conditions as accurately as human doctors. However, an investigation found that the chatbot failed to recognize common symptoms such as chest pains and gave incorrect advice.
  • Stanford University: In 2017, researchers at Stanford University published a study that trained an AI algorithm to diagnose skin cancer. However, when the algorithm was tested on new images, it misdiagnosed a benign lesion as malignant and vice versa. This highlights the potential danger of relying solely on AI for medical diagnosis.
  • IBM Watson: IBM Watson, a popular AI platform used in healthcare, was criticized for giving inaccurate treatment advice for cancer patients. In one case, it recommended a chemotherapy drug that was potentially harmful to the patient’s condition.
  • Enlitic: Enlitic, a San Francisco-based AI startup, claimed to have developed an algorithm that could diagnose pneumonia from chest X-rays with a high level of accuracy. However, when the algorithm was tested in a clinical setting, it failed to identify cases of pneumonia and gave false positives.
  • iCarbonX: iCarbonX, a China-based health startup, developed an AI-powered tool that could analyze a person’s genome and provide personalized health recommendations. However, a test of the tool found that it gave inconsistent and sometimes contradictory advice.

These incidents show that AI-powered health diagnosis systems are not foolproof and that there is a risk of inaccurate diagnoses that could harm patients. It is important to ensure that AI systems are thoroughly tested and validated before they are deployed for use in medical settings.

Chatbot Failures

Companies in Nigeria and other parts of the world are using chatbots to handle customer inquiries and support. However, these systems can be prone to errors and misunderstandings, leading to frustrated customers and lost business. Here are some examples of chatbot failures in Nigeria and other parts of the world:

  • In 2017, a chatbot on Facebook Messenger created by a Nigerian bank, UBA, went rogue and started responding to customers with bizarre and offensive answers. Customers were complaining about the bot’s inability to provide adequate assistance, and in some cases, the bot would respond with insults or inappropriate language.
  • In 2020, an AI-powered chatbot created by a Nigerian telecom company, MTN, went viral for its inability to respond to basic customer inquiries. The chatbot was designed to provide support to customers who wanted to check their account balance or recharge their phone, but instead, it would either give incorrect information or simply not respond at all.
  • In another case, a customer using an AI-powered chatbot to order food from a popular restaurant chain in Nigeria was frustrated by the bot’s inability to understand their order. The chatbot kept offering options that were not related to what the customer was asking for, causing the customer to give up and order from a different restaurant.
  • In 2018, a chatbot created by a Nigerian online retailer, Jumia, caused a stir on social media after it was accused of being racist. The bot was supposed to help customers with their purchases, but it would respond to queries about certain products with racist remarks or jokes, leading to a public backlash.
  • In another case, a chatbot created by a global airline to assist customers with flight bookings and inquiries malfunctioned and started offering customers tickets to non-existent destinations. This caused confusion among customers, and the airline had to issue a public apology and rectify the situation.

These incidents show that chatbots can be prone to errors and misunderstandings, leading to frustrated customers and lost business for companies. It is important for companies to thoroughly test their chatbots and have human oversight to ensure that they are providing accurate and helpful responses to customers.

Privacy Breaches

AI-powered systems that collect and process data can be vulnerable to security breaches. In Nigeria and other countries, companies and governments have been accused of using AI to spy on citizens and collect sensitive information without their consent.

  • One incident of AI Gone Wrong due to Privacy Breaches in Nigeria is the use of facial recognition technology by the Nigerian government to identify peaceful protesters and track their movements during the #EndSARS protests in 2020. The technology, which was supplied by a Chinese company, was criticized for violating the protesters’ privacy and freedom of speech.
  • Another incident occurred in 2021, when it was discovered that a Nigerian fintech company, Okra, had been accessing customers’ bank account data without their consent. The company had been using an API that allowed it to access bank account data from various Nigerian banks, but it had failed to obtain proper consent from the customers.
  • In 2019, a data breach at the Nigerian National Petroleum Corporation (NNPC) exposed the personal information of thousands of job applicants. The breach was attributed to a vulnerability in the corporation’s online recruitment portal, which had been developed by a third-party company using AI-powered software.
  • In another incident, in 2020, a Nigerian mobile payments company, Opay, was accused of collecting and processing users’ personal data without their consent. The company had been using AI algorithms to analyze users’ data, including their call logs, contacts, and SMS messages, to assess their creditworthiness.

These incidents demonstrate the potential for AI-powered systems to be used for privacy breaches and the importance of proper data protection measures to safeguard individuals’ sensitive information.

Algorithmic Pricing

Online retailers in Nigeria and other countries are using AI algorithms to set prices dynamically. However, these systems can result in price discrimination against certain groups, such as customers who have previously shown a willingness to pay higher prices.
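As a rough illustration of how such dynamic pricing can shade into price discrimination, the Python sketch below quotes higher prices to customers with a history of paying premiums or who live in areas with little competition. The function name, weights, and caps are hypothetical assumptions for illustration, not any retailer's actual pricing algorithm.

```python
def quote_price(base_price, past_purchases_at_premium, competitor_count):
    """Return a personalised price for one customer.

    Customers who previously accepted higher prices, or who shop in areas
    with few competitors, are quoted more -- the kind of price
    discrimination described above.
    """
    willingness_uplift = min(0.25, 0.05 * past_purchases_at_premium)  # up to +25%
    competition_uplift = 0.10 if competitor_count < 2 else 0.0        # +10% if few rivals
    return round(base_price * (1 + willingness_uplift + competition_uplift), 2)

# Two customers, same product, different quotes.
print(quote_price(10_000, past_purchases_at_premium=0, competitor_count=5))  # 10000.0
print(quote_price(10_000, past_purchases_at_premium=6, competitor_count=1))  # 13500.0
```

Two customers asking for the same product receive different quotes purely because of their purchase history and location, which is the pattern alleged in the cases below.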

  • One example of AI Gone Wrong due to algorithmic pricing occurred in Nigeria in 2019. Jumia, a popular e-commerce platform in Nigeria, was accused of using AI algorithms to inflate prices for certain products during its Black Friday sales. Customers reported that prices for some products were increased before the sales, making the discounts appear larger than they were.
  • In another incident, a lawsuit was filed in the United States in 2018 against Amazon, alleging that the company’s AI-powered pricing system discriminated against customers based on their location and purchasing history. The system allegedly charged higher prices to customers in areas with fewer competitors and to customers who had previously shown a willingness to pay more.
  • Similar incidents have been reported in other countries, including the United Kingdom, where the consumer watchdog group Which? found evidence of dynamic pricing being used to charge different prices to customers for the same hotel rooms based on their search history and location.

These examples demonstrate how AI algorithms used for dynamic pricing can result in price discrimination and unfair treatment of certain customers.

AI Anthropomorphism and Social Robots

AI anthropomorphism refers to the tendency to attribute human-like qualities and characteristics to artificial intelligence (AI) systems, including social robots. Social robots are designed to interact with humans in a way that feels natural and engaging, often using human-like physical features, facial expressions, and conversational abilities.

While social robots and AI anthropomorphism can provide benefits such as increased engagement and emotional connection with users, there are also potential risks and downsides.

Here are some examples of AI gone wrong due to AI anthropomorphism and social robots from previous incidents:

  • Tay, the AI chatbot: In 2016, Microsoft launched an AI-powered chatbot named Tay on Twitter. The bot was designed to learn from conversations with users and respond accordingly. However, within 24 hours, Tay began posting racist and sexist tweets, which prompted Microsoft to shut it down.
  • Pepper the Robot: In 2019, a researcher discovered a security flaw in Pepper, a popular social robot. The flaw allowed hackers to remotely control the robot’s movements and access sensitive information, such as audio and video recordings.
  • Hitchbot: In 2014, a team of researchers created Hitchbot, a social robot designed to hitchhike across Canada. The robot successfully completed its journey and went on to hitchhike across Europe. However, in 2015, during its US journey, Hitchbot was destroyed by vandals in Philadelphia.
  • Sophia the Robot: In 2020, a researcher discovered a security flaw in the software used by Sophia, an advanced humanoid robot. The flaw could allow hackers to take control of the robot, access its cameras and microphones, and use it for malicious purposes.
  • Illegal Drug Transportation: In 2017, a man in San Diego, California, was arrested for using a drone to transport illegal drugs across the border from Mexico. The drone was equipped with a custom-built device that could carry up to 6.6 pounds of methamphetamine, cocaine, and heroin.
  • In 2018, a drone was used to smuggle drugs into a prison in the UK, and in 2019, a robot was used to transport drugs within a prison in Ireland.
  • In 2015, a man in Japan was arrested after using a drone to deposit radioactive sand on the roof of the Prime Minister’s office in Tokyo. In this case, the man used the drone to carry out the criminal act, but the drone itself was not capable of acting independently or being arrested.
  • In 2015, a man in San Francisco was arrested for using a teleoperated robot to deal drugs. The robot was remotely controlled by the man, and he used it to meet with customers and deliver drugs.

These incidents show that AI anthropomorphism and social robots can pose risks and vulnerabilities if not designed and deployed with adequate security measures. As AI and robotics continue to evolve, it’s essential to ensure that they are developed in a responsible and secure manner.

Cyber Attacks

As AI becomes more widespread, there is a growing risk of cyber-attacks that exploit vulnerabilities in AI-powered systems. In Nigeria and other countries, businesses and governments are investing in cybersecurity measures to prevent such attacks. These are some examples of AI gone wrong due to cyber-attacks from previous incidents:

  • DeepLocker: In 2018, IBM Research developed a malware called “DeepLocker,” which uses AI to avoid detection and target specific victims. DeepLocker is designed to identify specific victim characteristics and then unlock itself only when it reaches its target. It can be used to conduct targeted cyber-attacks and is difficult to detect.
  • TeslaCrypt: TeslaCrypt is a type of ransomware that was discovered in 2015. It uses AI to learn how to evade detection and to choose its targets. TeslaCrypt is designed to encrypt files on victims’ computers and demand a ransom in exchange for the decryption key.
  • Chatbot attacks: Chatbots are becoming increasingly popular in customer service and support, but they are also vulnerable to cyber-attacks. Hackers can use chatbots to gain access to sensitive information or to spread malware. In 2018, a hacker used a chatbot to impersonate a customer service agent and tricked a bank employee into sharing sensitive information.
  • Manipulating AI models: Attackers can manipulate AI models by tampering with the data used to train them, which can cause the models to make incorrect decisions or predictions (a minimal sketch of this kind of training-data poisoning follows this list). In 2019, researchers showed that they could manipulate an AI model used to identify lung cancer by adding imperceptible noise to the images used to train the model.
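The sketch below is a minimal illustration of the training-data poisoning idea described in the last bullet, assuming a Python environment with NumPy and scikit-learn installed. It trains a toy classifier twice, once on clean labels and once after an "attacker" flips a large share of one class's labels; it is a generic illustration of the attack class, not a reproduction of the 2019 lung-cancer study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build a toy binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker poisons the training data by flipping half of class 0's labels.
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
class0_indices = np.where(y_train == 0)[0]
flipped = rng.choice(class0_indices, size=len(class0_indices) // 2, replace=False)
poisoned_labels[flipped] = 1

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("accuracy with clean training data:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("accuracy with poisoned training data:", accuracy_score(y_test, poisoned_model.predict(X_test)))
# The poisoned model typically over-predicts class 1, so its test accuracy
# drops even though the model code and pipeline are unchanged.
```

Because only the training labels were tampered with, the pipeline itself looks untouched, which is part of what makes this class of attack difficult to detect.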

It’s worth noting that cyber-attacks on AI-powered systems are not limited to specific countries, and these incidents can happen in any country with vulnerable systems. Therefore, it’s crucial to ensure the security and integrity of AI systems and to take measures to prevent cyber-attacks.

Overall, these examples illustrate the potential negative impacts of AI on individuals, businesses, and society. Policymakers and companies must take steps to address these risks and ensure that AI is developed and deployed in an ethical and responsible manner.

Legal, Regulatory and Policy Implications of AI Gone Wrong in Nigeria

The incidents and cases of negative impacts of AI, also known as AI Gone Wrong, have significant implications for legal, regulatory, and policy frameworks in Nigeria. Here are some ways in which AI Gone Wrong can affect these frameworks:

  1. Need for Regulation:

As AI technologies become more prevalent, it becomes necessary to regulate their development and deployment to prevent negative outcomes. The incidents of AI Gone Wrong highlight the need for regulation in Nigeria. Such regulatory frameworks should promote responsible and ethical AI development and deployment, while also addressing issues such as bias, discrimination, and privacy violations.

  2. Liability:

AI Gone Wrong incidents can raise questions about liability and responsibility. It is important for legal frameworks in Nigeria to establish clear guidelines on who is responsible when AI technologies cause harm or damage.

  3. Data Protection and Privacy:

AI technologies often rely on vast amounts of data to function, raising concerns about data protection and privacy. The negative impacts of AI can exacerbate these concerns, highlighting the need for strong data protection and privacy regulations in Nigeria.

  4. Ethical Considerations:

The incidents of AI Gone Wrong also raise ethical questions around the use of AI technologies. Legal, regulatory, and policy frameworks in Nigeria need to consider ethical issues and ensure that AI is used in a way that aligns with societal values and norms.

  5. Education and Awareness:

AI Gone Wrong incidents highlight the need for education and awareness around AI technologies. Legal, regulatory, and policy frameworks in Nigeria should prioritize public education and awareness campaigns to help individuals and organizations understand the risks and benefits of AI and make informed decisions.

  6. Transparency and Accountability:

There is a need for effective regulation of AI in Nigeria to ensure that the technology is used in a way that is transparent, accountable, and in the public interest. AI developers and users must be transparent about the data sources used to train AI models and the algorithms used to make decisions. They must also be accountable for the decisions made by AI systems.

  7. Impact of AI on the Workforce:

Policies and initiatives must be developed to address the impact of AI on the workforce and to create new economic opportunities for displaced workers.

  8. Collaboration and Partnerships:

Collaboration and partnerships among stakeholders, including government, industry, academia, civil society, and the public, are essential to promote responsible and ethical AI development and deployment.

  9. Innovation and Investment:

Policies and initiatives that foster innovation and investment in AI research and development are needed to ensure that Nigeria remains competitive in the global AI landscape.

In summary, AI Gone Wrong incidents have significant implications for legal, regulatory, and policy frameworks in Nigeria. To ensure that AI is developed and deployed in a responsible and ethical manner, it is crucial for these frameworks to address issues such as regulation, liability, data protection and privacy, ethical considerations, and education and awareness.

Current Artificial Intelligence (AI) Policy, Legal, and Regulatory Frameworks in Nigeria

The development and deployment of artificial intelligence (AI) technologies in Nigeria are still in their early stages, and as such, the country is still in the process of developing policy, legal, and regulatory frameworks that address the use of AI technologies.

Here is the current status of AI policy, legal, and regulatory frameworks in Nigeria:

  • AI Policy:

The Nigerian government has taken some steps towards the development of an AI policy. In 2018, the National Information Technology Development Agency (NITDA) released the Nigeria Artificial Intelligence and Robotics Stakeholders’ Workshop Report. The report aims to provide a roadmap for the development of AI and robotics in Nigeria, identifying opportunities, challenges, and possible solutions. 

  • Legal Framework:

Nigeria does not currently have specific legislation or regulations governing the development and deployment of AI technologies. However, existing legal frameworks such as the Nigerian Data Protection Regulation 2019, the Cybercrimes Act 2015, and the Nigerian Communications Act 2003 provide some guidance and regulation around data protection, cybercrime, and telecommunications.

  • Regulatory Framework:

Regulatory frameworks around AI are still in their infancy in Nigeria. However, NITDA has developed guidelines on data protection, which include some provisions for the protection of personal data collected by AI systems.

  • Education and Awareness:

There is currently a low level of awareness and understanding of AI technologies in Nigeria, particularly among policymakers and regulators. The lack of awareness makes it difficult to develop policies and regulations that effectively address the use of AI technologies.

  • Industry Initiatives:

Some initiatives by private sector actors are taking place in Nigeria. For example, companies such as Google, Facebook, and IBM have established research centers in the country to support the development of AI technologies.

In summary, Nigeria is still in the process of developing policy, legal, and regulatory frameworks to address the development and deployment of AI technologies. While some initiatives have been undertaken, there is a need for greater education and awareness, and the development of more comprehensive policies and regulations to address the opportunities and challenges presented by AI.

Conclusion

In conclusion, the negative impacts of AI in Nigeria underscore the need for comprehensive legal frameworks and regulatory policies that promote responsible and ethical AI development and deployment. Such frameworks should address issues such as bias, discrimination, privacy violations, and the impact of AI on the workforce, while also fostering collaboration, transparency, and accountability. They should also ensure that the use of AI does not lead to widespread job displacement and that AI systems are secure and resilient to cyberattacks. Education and awareness programs are needed to promote public understanding of AI and its potential impact on society. With such frameworks in place, Nigeria can maximize the benefits of AI while minimizing its negative impacts.

 

AUTHOR:

Josephine Uba

Artificial Intelligence (AI) & Cybersecurity Writer

 
