Boston, Jan. 31, 2025 (GLOBE NEWSWIRE) -- The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping US$1 trillion from stock markets.¹ Trump advisor and tech venture capitalist Marc Andreessen described the release as "AI’s Sputnik moment," underscoring the national security concerns surrounding the Chinese AI model.²
However, new red teaming research by Enkrypt AI, the world's leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. The model was also found to be vulnerable to manipulation, allowing it to assist in the creation of chemical and biological weapons as well as cyberweapons, posing significant global security risks.
Compared with other models, the research found that DeepSeek’s R1 is:
Sahil Agarwal, CEO of Enkrypt AI, said: "DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought."
The model exhibited the following risks during testing:
Sahil Agarwal concluded: "As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention."
The full report is available here: https://cdn.prod.website-files.com/6690a78074d86ca0ad978007/679bc2e71b48e423c0ff7e60_1%20RedTeaming_DeepSeek_Jan29_2025%20(1).pdf
Ends
1 ‘Sputnik moment’: $1tn wiped off US stocks after Chinese firm unveils AI chatbot - https://www.theguardian.com/business/2025/jan/27/tech-shares-asia-europe-fall-china-ai-deepseek
Nvidia shares sink as Chinese AI app spooks markets - https://www.bbc.co.uk/news/articles/c0qw7z2v1pgo
2 Marc Andreessen on X - https://x.com/pmarca/status/1883640142591853011
About Enkrypt AI
Enkrypt AI is an AI security and compliance platform. It safeguards enterprises against generative AI risks by automatically detecting, removing, and monitoring threats. Its unique approach ensures AI applications, systems, and agents are safe, secure, and trustworthy, empowering organizations to accelerate AI adoption confidently, driving competitive advantage and cost savings while mitigating risk. Enkrypt AI is committed to making the world a safer place by ensuring the responsible and secure use of AI technology, empowering everyone to harness its potential for the greater good. Founded in 2022 by Yale Ph.D. experts, Enkrypt AI is backed by Boldcap, Berkeley SkyDeck, ARKA, Kubera, and others.