Machine learning algorithms are everywhere. It is not just Facebook and Google. Companies are using them to provide personalized education services and advanced business intelligence services, to fight cancer and to detect counterfeit goods. From farming to pharmaceuticals. From AI-controlled autonomous vehicles to clinical decision support software.
The technology will make us collectively wealthier and more capable of providing for human welfare, human rights, human justice and the fostering of the virtues we need to live well in communities. We should welcome it and do all that we can to promote it.
As with any new technology, there are ethical challenges. Will the new technologies be fair and transparent? Will the benefits be distributed to all? Will they reinforce existing inequalities?
Organizations that develop and use AI systems need ethical principles to guide them through the challenges that are already upon us and those that lie ahead.
Last year, my trade association, the Software & Information Industry Association, released an issue brief on Ethical Principles for AI and Data Analytics that addresses these challenges. It draws on the classical ethical traditions of rights, welfare, and virtue to enable organizations to examine their data practices carefully.
Companies need to recover their ability to think in ethical terms in business and in particular in their institutional decisions regarding the collection and use of information. These principles are a practical, actionable guide.
SIIA is not the only entity seeking to bring ethical considerations into the world of AI and data analysis. The computer science group Fairness, Accountability and Transparency in Machine Learning (FAT/ML) has drafted its own principles. Another group of computer scientists meeting at Asilomar drafted broader principles. IEEE has proposed principles relating to ethical values in design. ACM recently released a set of principles designed to ensure fairness in the use of AI algorithms. And the Information Accountability Foundation has formulated a very useful set of principles in its report on Artificial Intelligence, Ethics and Enhanced Data Stewardship.
These efforts on AI ethics are also intergovernmental in character.
Some of the different ethical approaches to AI were aired at a session of the OECD conference in October 2017 on AI: Intelligent Machines, Smart Policies. The need for ethical rules for AI was raised by the Japanese at the 2016 G7 meeting and by the Italians at the 2017 G7 meeting. The most recent G7 meeting concluded on March 28, 2018 with a Statement on Artificial Intelligence encouraging research “examining ethical considerations of AI.” The U.S. Administration stepped into the field with its recent announcement that it is “working with our allies” to “promote trust in” artificial intelligence technologies.
In its recently released Communication on Artificial Intelligence for Europe, the European Commission is proposing to develop “AI ethics guidelines” within the AI Alliance that “build on” this statement published by the European Group of Ethics in Science and New Technologies.
These are all positive developments. But a couple of cautions are needed. Abstract ethical statements will get us only so far. Actionable ethical principles need to consider how AI is used in a particular context. The ethical issues involved in autonomous weapons, for instance, are very different from the ethical issues involved in the use of AI for recidivism scores or employment screening. That’s why SIIA provided specific recommendations on how to apply the general principles of rights, justice, welfare and virtue to the specific case of ensuring algorithmic fairness through the use of disparate impact analyses.
In addition, there are no special ethical principles that apply uniquely to AI, but not to other modes of data analysis and prediction. The ethical demands to respect rights, promote welfare and cultivate human virtues need to be applied and interpreted in the development and implementation of AI applications, and there is plenty of hard conceptual and empirical work needed to do this properly. But that is not the same as seeking out unique normative guidelines for AI.
Some, such as Elon Musk, have suggested going beyond ethical standards to a regulatory response.
There’s a place for some of that – in specific areas where problems are urgent and must be addressed in order to deploy the technology at all. Think of the need to understand liability for autonomous cars or to set a regulatory framework at the Food and Drug Administration for clinical decision support systems.
But just as there are no special ethical principles for AI, there need not be any special regulations or laws applying to AI as such. AI encompasses an indefinitely large range of analytical techniques; it is not a substantive enterprise at all. A general AI regulation implemented by a national agency would be like having a regulatory agency for statistical analysis!
The 2016 report of the One Hundred Year Study on Artificial Intelligence concluded that “attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”
This does not mean AI is, or should be, unregulated. Current law and regulation still apply. There is no get-out-of-jail-free card for using AI; it is not a defense for violating the law. Companies cannot escape liability under the fair lending or fair housing laws, for example, by explaining that they were using AI technology to discriminate.
Regardless of the state of regulation, organizations need guidance to adapt to the many ethical challenges they will face in bringing this technology to fruition. The principles of beneficence, respect for persons, justice and the fostering of virtues can provide a roadmap and some important guardrails for AI and advanced data analytics.