WHY THIS MATTERS IN BRIEF
Most AI companies are creating their own AI guardrails, but Anthropic, the multi-billion-dollar AI startup, is asking the public to help build its AI constitution.
In a bid to democratise the development of AI tools, and to find new ways to govern and regulate that development, ChatGPT competitor Anthropic, the company behind Claude (a constitution-based AI that recently secured billions of dollars in funding from Amazon and Google), has announced that it has teamed up with the non-profit Collective Intelligence Project to explore a different course for AI development, surveying 1,000 Americans to draft what is being called an AI constitution.
“AI is going to have a transformative impact on the world,” said Collective Intelligence Project co-founder Divya Siddarth. “Yet very few people get to decide exactly what that looks like. We wanted to build processes so that wouldn’t be true anymore.”
Launched in January by Siddarth, a political economist and social technologist, and Saffron Huang, a researcher, writer, and engineer, the Collective Intelligence Project aims to promote collective intelligence for collective progress. In March, the CIP partnered with Anthropic on the Collective Constitutional AI initiative, to develop an AI model designed with outside input and to explore how democratic processes can influence AI development.
“We wanted to put together some of the work that’s been done in the democracy space, in the collective intelligence space around what it means to make good decisions for whom and who gets involved,” Siddarth said. “There is a lot of research on that with transformative technology, particularly AI, which is something we’ve both been working on.”
The project, Anthropic said in a blog post, asked 1,000 members of the American public to help the AI developer pick rules for its AI chatbot. The sample, Anthropic said, was designed to be representative of US adults across age, gender, income, and location. Participants could choose from a list of rules for the AI or come up with their own.
According to Anthropic, participants contributed 1,117 statements and cast over 38,000 votes on questions such as whether AI should discriminate on the basis of race or sexual preference. While opinions differed, the company added, there was a high degree of consensus on most statements.
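The mechanics behind those figures are simple at their core: each participant votes to agree or disagree with each statement, and consensus is measured from the resulting tallies. As a loose illustration only (the statement IDs, votes, and function here are invented, and the real process, run on a deliberation platform, is considerably more sophisticated), the basic agree-rate tally could look like:

```python
from collections import defaultdict

# Hypothetical ballots: (participant_id, statement_id, vote),
# where vote is +1 for agree and -1 for disagree.
ballots = [
    ("p1", "s1", +1), ("p2", "s1", +1), ("p3", "s1", +1),
    ("p1", "s2", +1), ("p2", "s2", -1), ("p3", "s2", +1),
]

def consensus_by_statement(ballots):
    """Return the fraction of 'agree' votes for each statement."""
    agree = defaultdict(int)
    total = defaultdict(int)
    for _, statement, vote in ballots:
        total[statement] += 1
        if vote > 0:
            agree[statement] += 1
    return {s: agree[s] / total[s] for s in total}

# s1 is unanimous; s2 has 2 of 3 participants agreeing.
print(consensus_by_statement(ballots))
```

Statements with a high agree rate across the whole sample are the "high consensus" ones Anthropic refers to; the interesting design work lies in how disagreement and opinion clusters are handled, which this sketch deliberately omits.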
While the survey attempted to collect a diverse sample, Siddarth acknowledged that future experiments must go beyond tech-savvy and affluent Americans.
“It’s only people in the US – that’s obviously extremely constrained in terms of everyone in the world,” she said. “If anything, like many people who are developing technology, probably have more values overlap with people in the US and many other places.”
Siddarth said that the Collective Intelligence Project has also worked with rival AI developer OpenAI on other democratic AI projects. The AI constitution project, however, used only Anthropic’s Claude AI models.
“I think that’s why it’s a good comparison because we take the model that [Anthropic] already trained on a constitution that Anthropic researchers wrote and compared it directly to the same model,” Siddarth said. “But with a constitution that was written by the public.”
One surprising finding in the experiment, Siddarth said, was that several statements held the AI model should not discriminate based on disability, a provision not included in Anthropic’s own constitution.
“I don’t think [that’s] because they intended it not to be there,” she said. “It’s just not something that they ended up highlighting, and that was significantly emphasised.”
Launched in 2021, Anthropic is just one of the developers racing to bring faster and more advanced Artificial Intelligence (AI) to market. Microsoft, Google, Meta, and Amazon have invested billions into Generative AI developers, and 2023 has been a lucrative year for them: in February, Google announced a $400 million investment in Anthropic, followed by Amazon’s $4 billion investment in the company in September.
Earlier this year, Anthropic joined with OpenAI, Google, and Microsoft, pledging to work towards developing AI responsibly after a meeting with the Biden Administration. Others joining the pledge included Amazon, Meta, and Inflection. In September, NVIDIA, Scale AI, Cohere, Adobe, IBM, Palantir, Salesforce, and Stability AI also joined the pledge.
Despite the limitations of the Constitutional AI experiment, Siddarth is optimistic about the future of the Collective Intelligence Project and of democracy in AI, and is looking to bring more public input into the development of the emerging technology.
“I think the core is let’s do more of these processes; let’s learn from what we’ve already done,” she said. “We’re working with civil society groups, different policymakers, all of these different companies to get that done.”