
WHY THIS MATTERS IN BRIEF

Decades after AI became a “thing,” no government has any real idea of how to regulate its development, and that’s an issue.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Following in the footsteps of the European Union, which recently released its first Artificial Intelligence (AI) regulatory framework, one that could see companies’ valuable AIs deleted if they fail to meet certain expectations, this week the Biden administration unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.

 


 

The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fuelled world, officials said.

 


 

“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies,” said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. “We can and should expect better and demand better from our technologies.”

The office said the white paper represents a major advance in the administration’s agenda to hold technology companies accountable, and highlighted various federal agencies’ commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft.

 


 

It puts forward five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

The non-binding principles cite academic research, agency studies and news reports that have documented real-world harms from AI-powered tools, including facial recognition tools that contributed to wrongful arrests and an automated system that discriminated against loan seekers who attended a historically black college or university.

The white paper also said parents and social workers alike could benefit from knowing whether child welfare agencies were using algorithms to help decide when families should be investigated for maltreatment, a concern underscored by earlier cases in which automated systems denied benefits on grounds ranging from health status to ethnicity.

 


 

Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. AP’s investigation found that the Allegheny County tool in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.

In May, sources said Carnegie Mellon University researchers and staffers from the American Civil Liberties Union spoke with OSTP officials about child welfare agencies’ use of algorithms. Nelson said protecting children from technology harms remains an area of concern.

“If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions,” said Nelson, who also serves as deputy assistant to President Joe Biden.

 


 

OSTP did not provide additional comment about the May meeting.

Still, because many AI-powered tools are developed, adopted or funded at the state and local level, the federal government has limited oversight regarding their use. The white paper makes no specific mention of how the Biden administration could influence specific policies at state or local levels, but a senior administration official said the administration was exploring how to align federal grants with AI guidance.

The white paper does not have power over tech companies that develop the tools nor does it include any new legislative proposals. Nelson said agencies would continue to use existing rules to prevent automated systems from unfairly disadvantaging people.

The white paper also did not specifically address AI-powered technologies funded through the Department of Justice, whose civil rights division separately has been examining algorithmic harms, bias and discrimination, Nelson said.

 


 

Tucked between the calls for greater oversight, the white paper also said that, when appropriately implemented, AI systems have the power to bring lasting benefits to society, such as helping farmers grow food more efficiently or identifying diseases.

“Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values,” the document said.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
