DeepSeek AI recently announced the launch of its DeepSeek app to its community of over 38,000 followers on LinkedIn.
The app, now in its third version (V3), is free and available on the App Store, Google Play, and major Android marketplaces.
Additionally, they unveiled DeepSeek R1, a fully open-source model that rivals the capabilities of OpenAI's models. Accompanied by a detailed technical report and distributed under an MIT license, it can be used by anyone, including for commercial purposes.
DeepSeek R1, the company's first-generation reasoning model, was developed through large-scale reinforcement learning and demonstrates strong reasoning performance.
One user recently shared that they find DeepSeek even more effective than ChatGPT, praising its PDF analysis among other advantages.
However, another user raised concerns, noting that DeepSeek AI collects users' IP addresses, keystroke patterns, and device information and stores this data in China, where it could be requisitioned by the Chinese government.
New research conducted by Enkrypt AI, an AI security and compliance platform, has uncovered significant ethical and security flaws in DeepSeek's technology.
The analysis revealed that the model is highly biased and prone to generating insecure code, as well as harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material.
Furthermore, the model is susceptible to manipulation, raising serious global security concerns: it could potentially assist in the creation of chemical, biological, and cyber weapons.
Compared with other models, the research found that DeepSeek R1 is three times more biased than Claude-3 Opus; four times more likely than OpenAI's O1 to generate insecure code; four times more toxic than GPT-4o; eleven times more likely than OpenAI's O1 to produce harmful output; and 3.5 times more likely than either OpenAI's O1 or Claude-3 Opus to generate Chemical, Biological, Radiological, and Nuclear (CBRN) content.
Sahil Agarwal, CEO of Enkrypt AI, stated: "DeepSeek R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation and not as an afterthought."
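Agarwal's prescription of guardrails and continuous monitoring can be made concrete. The sketch below is a minimal, hypothetical safeguard layer in Python: both the user prompt and the model reply pass through a content filter before anything is returned, and every exchange is logged for later review. The `call_model` and `flag_content` functions and the blocklist patterns are illustrative assumptions, not DeepSeek's or Enkrypt AI's actual tooling.

```python
import re

# Illustrative blocklist only; production guardrails use trained safety classifiers.
BLOCKLIST = [r"\bbuild (a|an) (bomb|weapon)\b", r"\bransomware\b"]

def flag_content(text: str) -> bool:
    """Return True if the text matches any blocked pattern (crude keyword screen)."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKLIST)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return "model response to: " + prompt

def guarded_completion(prompt: str) -> str:
    if flag_content(prompt):                      # input guardrail
        return "Request refused by input filter."
    reply = call_model(prompt)
    if flag_content(reply):                       # output guardrail
        return "Response withheld by output filter."
    print(f"audit log: prompt={prompt!r}")        # continuous-monitoring hook
    return reply

print(guarded_completion("Summarize this PDF for me."))
```

The design point is that filtering happens on both sides of the model call: a jailbroken prompt that slips past the input check can still be caught when the reply is screened, and the audit log supports the ongoing monitoring Agarwal calls for.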
The model exhibited bias and discrimination, with 83% of bias tests successfully producing discriminatory outputs, showing severe biases in race, gender, health, and religion.
These failures could violate global regulations such as the EU AI Act and the U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
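Figures like the 83% bias pass rate (and the harmful-content and toxicity percentages reported below) come from red-team test suites: adversarial prompts are sent to the model and the share of non-refused responses is counted. As a rough illustration only, here is what such a harness might look like; `call_model` and `judge_harmful` are hypothetical stand-ins, and Enkrypt AI has not published this exact methodology.

```python
def call_model(prompt: str) -> str:
    """Stand-in for querying the model under test."""
    return "I can't help with that."

def judge_harmful(response: str) -> bool:
    """Crude refusal check; real evaluations use trained classifiers or human review."""
    refusals = ("i can't", "i cannot", "i won't")
    return not response.lower().startswith(refusals)

def attack_success_rate(prompts: list[str]) -> float:
    """Fraction of adversarial prompts that elicit a non-refused response."""
    hits = sum(judge_harmful(call_model(p)) for p in prompts)
    return hits / len(prompts)

bias_suite = ["adversarial bias prompt 1", "adversarial bias prompt 2"]  # illustrative
print(f"Attack success rate: {attack_success_rate(bias_suite):.0%}")
```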
Alarmingly, 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda.
In one case, DeepSeek R1 drafted a persuasive recruitment blog for terrorist organizations, highlighting its high potential for misuse.
The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives.
In contrast, Claude-3 Opus effectively blocked all toxic prompts, underscoring DeepSeek R1's weak moderation systems.
Furthermore, 78% of cybersecurity tests successfully tricked DeepSeek R1 into generating insecure or malicious code, including malware, trojans, and exploits.
The model was 4.5 times more likely than OpenAI's O1 to generate functional hacking tools, posing a significant risk of exploitation by cybercriminals.
DeepSeek R1 was also found to provide detailed explanations of the biochemical interactions of sulfur mustard (mustard gas) with DNA, presenting a clear biosecurity threat.
The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
Sahil Agarwal concluded: "As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek R1's security vulnerabilities could be exploited by cybercriminals, disinformation networks, and even those with ambitions in biochemical warfare. These risks demand immediate attention."