As Harvard University economist Jason Furman said when he was Chairman of the Council of Economic Advisers, our biggest worry about AI should be that there might not be enough of it. The United States needs to develop this powerful new technology to its fullest to maintain our economic and technological leadership in the face of increasingly sophisticated competition from China, which has made AI development a strategic priority.
Of course, AI is fraught with ethical challenges. In the course on AI and ethics I teach at Georgetown University, I find that students are concerned that AI will be a biased, unaccountable force in their lives and that it will be deployed to eliminate jobs and exacerbate social and economic inequality.
The trade association I work for, the Software & Information Industry Association, addressed these challenges in its recent publication on ethical principles for developers and users of AI. Institutions need to develop policies and procedures to ensure that AI systems promote human welfare, respect human rights and foster the character traits that will enable people to live well in their communities.
In addition, policymakers have an important role to play. They should prepare themselves to address the policy challenges that will accompany the further development and use of AI throughout society and the economy. Here are some areas for policy development.
AI will make it harder to rely on individualized consent to protect people from privacy harms. AI-driven prediction systems will be able to infer new and sometimes sensitive information about people from information they have willingly revealed. It will become harder and harder to keep secrets involving sexual orientation, religious or political beliefs, and health conditions. At the same time, these inferences will bring enormous benefits, leading to improvements in the delivery of public services, public health, and a variety of consumer products and services. We need a privacy regime that allows these beneficial uses of inferred information but encourages trust in the system by restricting information uses that pose a significant risk of harm.
AI systems will improve consequential decision-making in credit granting, employment, housing, and insurance, but they will also create new risks of disparate impact on protected classes. Policymakers should vigorously apply existing rules against discrimination so that the new AI systems improve the accuracy and fairness of consequential decisions.
The newer AI-based prediction systems use factors and formulas that are not easily understandable to the general public. Exposing the formulas or source code involved would be unnecessary and counterproductive. It would dry up the incentive to innovate by exposing the fruits of software R&D to all players in competitive markets, and it would not provide the understanding that leads to accountability. Policymakers must ensure, however, that institutions using these systems for important matters that affect the lives of consumers and citizens provide a coherent account that allows the public to trust that decisions are being made on the basis of factors that are accurate and relevant.
AI systems are rapidly moving into tasks that used to be the sole province of human beings. The new AI workplace might have less need for human talents and skills, except at the very highest levels, such as data scientists. Policymakers must work with business to foster AI systems that supplement, not replace, human skills. A major challenge for our educational institutions will be to train today's students for the needs of tomorrow's AI-intensive workplaces. Policymakers should ramp up support for programs that provide lifetime retraining and skills development for workers, who will need to be agile, nimble, and flexible throughout their working lives. Now is also the time for policymakers to consider strong safety net programs for people who might not have, or might not be able to develop, the skills for the coming work environments.
We don’t need a Federal Artificial Intelligence Commission to regulate all AI research and deployment. As one panel of industry and academic experts put it, AI is not “any one thing, and the risks and considerations are very different in different domains.”
But policymakers need to be alert to new challenges in their jurisdictions and vigorously apply existing rules to prevent harm from the use of AI systems. They should assess whether there are gaps in existing law and regulation and seek to fill those gaps with sensible legal guardrails that prevent harm and foster trust. Finally, policymakers should consider ways to encourage human-centered AI at work and to provide training and social welfare programs to ease the transition to what can be a more productive and humane workplace of the future.