WHY THIS MATTERS IN BRIEF
AI-generated content is causing all sorts of policy headaches for companies, and this is YouTube’s latest play.
YouTube will soon require users to add a disclaimer when they post Artificial Intelligence (AI) generated or manipulated videos. In a company blog post, the video giant outlined a forthcoming rule change that will not only require a warning label but will also display more prominent disclaimers for certain types of “sensitive” content, such as elections and public health crises.
As Bloomberg reports, this change at the Alphabet-owned company comes after a September announcement that election ads across the firm’s portfolio will require “prominent” disclosures if manipulated or generated by AI — a rule that’s slated to begin mid-November, the outlet previously reported.
In its announcement, YouTube also said that those who repeatedly refuse to comply may have their content removed, their accounts suspended, or their access to advertiser money revoked.
It’s unclear exactly when the company plans to roll out these changes, but it also said that it’ll eventually “make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.”
“We’ve heard continuous feedback from our community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them,” the statement reads. “This is especially true in cases where someone’s face or voice could be digitally generated without their permission or to misrepresent their points of view.”
In what may be a nod to the growing trend of AI-generated songs created “in the style of” various famous musicians, the company will also allow musical artists and their representation to request the takedown of “AI-generated music content that mimics an artist’s unique singing or rapping voice” — but this, too, comes with caveats.
“These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments,” the statement reads, an apparent reference to the company’s “Music AI Incubator” project with Universal Music Group that was announced back in August and will provide an “artist-centric approach” to generative AI tools.
As Bloomberg explains, these forthcoming disclosures are part of Google’s attempted response to growing pressure for industry giants to responsibly handle AI innovation — and not, like so many other companies, let excitement about the rapidly-evolving tech get away from them.
“Responsibility and opportunity are two sides of the same coin,” Kent Walker, Google’s president of global affairs and chief legal officer, told the news site. “It’s important that even as we focus on the responsibility side of the narrative that we not lose the excitement or the optimism around what this technology will be able to do for people around the world.”
Whether Google is genuinely taking responsibility for the technology or merely attempting to sidestep AI legal drama, however, remains to be seen.