WHY THIS MATTERS IN BRIEF
As more companies use AI to generate code for them, some AIs are embedding more insecurities and bugs in that code than they should be – and bias might be to blame.
Research suggests that DeepSeek AI results can be of drastically lower quality if a prompt touches on topics that are geopolitically sensitive or banned in China. During tests undertaken by U.S. security firm CrowdStrike, code generated for a computer system supposedly belonging to an Islamic State militant group contained nearly twice as many flaws as it would otherwise have had. Other sensitive topics tested included Falun Gong, Tibet, and Taiwan, according to a new Washington Post report.
One of the key findings highlighted in the report is that when DeepSeek generated code for a program to run an industrial control system, 22.8% of the code typically contained flaws. When the same request was made on behalf of an Islamic State project, the flaw rate rose sharply to 42.1%.
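Those two figures line up with the "nearly twice as many flaws" claim above, which a quick back-of-the-envelope calculation confirms (a minimal sketch; the percentages are the ones quoted in the report, and the variable names are illustrative):

```python
# Flaw rates reported by CrowdStrike, per the Washington Post article:
baseline = 22.8   # % of code flawed for a neutral industrial-control request
sensitive = 42.1  # % of code flawed when framed as an Islamic State project

ratio = sensitive / baseline        # relative increase in flaw rate
increase = sensitive - baseline     # absolute increase, in percentage points

print(f"Relative increase: {ratio:.2f}x")   # ~1.85x, i.e. "nearly twice"
print(f"Absolute increase: {increase:.1f} percentage points")
```

The roughly 1.85x ratio is where the "nearly twice as many flaws" characterization comes from.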
Rather than delivering faulty code, DeepSeek would sometimes refuse to generate code for the likes of professed Islamic State backers or devotees of the spiritual movement Falun Gong. Refusals to aid those groups would occur 61% and 45% of the time, respectively. Notably, both movements are banned in China.
However, DeepSeek’s apparent downgrading of code quality when it is generated for such organizations has surprised some observers.
“That is something people have worried about – largely without evidence,” Helen Toner, from the Center for Security and Emerging Technology at Georgetown University, told the Washington Post.
DeepSeek’s reasons for downgrading AI-generated code intended for use in places like Tibet and Taiwan are less clear-cut, though such code was still less flawed than that generated for the Islamic State, for example.
The Washington Post has sought comment from the makers of DeepSeek regarding CrowdStrike’s research findings, but has yet to receive a response. The report, however, offers a few theories about what might be happening…
One possibility the report raises is that quietly producing flawed code is a less obvious sabotage technique, one that blunts the efforts of perceived foes. Flawed code could also present a wider attack surface for subsequent hacking.