The potential for AI technologies to yield unintended and unfavorable consequences has sparked concern among policymakers and civil liberties groups. Regulations and guidelines surrounding the development and deployment of AI are becoming increasingly common. To create AI systems that are transparent, unbiased, and fair, a number of ethical considerations must be evaluated.
First, AI ethics requires a holistic approach: it must take into consideration not only the technology itself but also the data that trains and shapes it. AI reflects the data that powers it. If the underlying data is biased, the resulting systems will likely reflect that bias as well – which is especially problematic as AI is applied to an increasingly diverse set of use cases, including healthcare, criminal justice, and education.
The E.U. is leading the way in developing standardized and enforceable guidelines for AI ethics. An expert panel commissioned by the E.U. published a set of guidelines for trustworthy AI. Furthermore, Europe’s GDPR contains an article (Article 22) that affirms an individual’s right not to be subject to a decision based solely on automated processing. Researchers have also noted that this article of the GDPR may pave the way for legal regulation requiring technology companies to reveal source code and algorithms.
Additionally, the U.K. House of Lords Select Committee on Artificial Intelligence has suggested an AI code that covers the following five principles:
- AI technologies should be fair and should be developed for the benefit of humanity.
- Every citizen should have the right to be educated at a level that enables them to thrive emotionally, mentally, and economically alongside AI technologies in the future of work.
- Restrictions should be imposed on AI technologies that attempt to diminish the privacy or data rights of individuals.
- AI should be deployed with consent being a key consideration – individuals must have the opportunity to offer informed consent prior to their data being captured or utilized.
- Bans should be placed on AI systems that have the potential to deceive, destroy, or hurt.
While enforceable ethical guidelines surrounding AI are still nascent, the call for regulation and legal frameworks to ensure fair and trustworthy AI is growing louder – and ever more necessary as AI is applied to systems that impact people’s lives in meaningful ways.
The full article, “The Ethical and Legal Challenges of AI,” can be read on the Association for Intelligent Information Management (AIIM) website. The article is part one of a three-part series, “Ethical Use of Data for Training Machine Learning Technology,” by Andrew Pery, digital transformation expert and consultant for ABBYY.