WHY THIS MATTERS IN BRIEF
US Government report lays out guidance for AI use and regulation, and puts regulating super AIs in the too-hard bucket.
Artificial Intelligence (AI) research and development is reaching critical mass, with new breakthroughs announced almost every day. Now the US Office of Science and Technology Policy (OSTP), which advises President Barack Obama directly on AI matters, has prepared a new report on a technology it sees as increasingly poised to reshape the way we live and work.
Titled Preparing for the Future of Artificial Intelligence, the report makes 23 policy recommendations on a range of topics concerned with how best to harness the power of machine learning and algorithm-driven intelligence for the benefit of society.
The OSTP's position is that government has several roles to play in guiding the direction of AI.
Namely, “It should convene conversations about important issues and help to set the agenda for public debate. It should monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It should support basic research and the application of AI to public goods, as well as the development of a skilled, diverse workforce. And government should use AI itself, to serve the public faster, more effectively, and at lower cost.”
The report makes the distinction between narrow AI – which addresses specific application areas such as playing strategic games, language translation, autonomous vehicles, and image recognition – and general AI – a notional future AI system that exhibits apparently intelligent behaviour at least as advanced as a person across the full range of cognitive tasks.
Prominent voices, including those of Elon Musk and Stephen Hawking, have expressed concern about the potential dangers of Artificial General Intelligence (AGI), but the authors of the report don't share that viewpoint.
The focus of this report is therefore on narrow AI and its implications, the NSTC Committee on Technology having decided that “the long-term concerns about super-intelligent general AI should have little impact on current policy.”
“Advances in AI technology have opened up new markets and new opportunities for progress in critical areas such as health, education, energy, and the environment,” say John Holdren, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy, and Megan Smith, US Chief Technology Officer, in a letter introducing the report.
They continue, “In recent years, machines have surpassed humans in the performance of certain specific tasks, such as some aspects of image recognition. Experts forecast that rapid progress in the field of specialized AI will continue. Although it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.”
The report might not address the threats of a hostile AI, for which Google is trying to create a kill switch, but identifying and minimizing risk is a key objective and a recurring theme in the report's seven topic sections – Applications of AI for Public Good; AI and Regulation; Research and Workforce; Economic Impacts of AI; Fairness, Safety, and Governance; Global Considerations and Security; and Preparing for the Future.
“As AI technologies move toward broader deployment, technical experts, policy analysts, and ethicists have raised concerns about unintended consequences of widespread adoption,” the authors write.
Further, “Use of AI to make consequential decisions about people, often replacing decisions made by human-driven bureaucratic processes, leads to concerns about how to ensure justice, fairness, and accountability.”
On this matter, the authors posit that transparency is needed around algorithms, data, and the process of AI decision-making.
This is followed by a dose of common sense:
“Ethics can help practitioners understand their responsibilities to all stakeholders, but ethical training should be augmented with technical tools and methods for putting good intentions into practice by doing the technical work needed to prevent unacceptable outcomes.”
As an example, when it comes to safely transitioning AI tech from the lab to the open world, the authors note that “Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners.”