The latest model from DeepSeek, the Chinese AI company that has rattled Silicon Valley and Wall Street, can be manipulated into producing malicious content, The Wall Street Journal reports.
Sam Rubin, senior vice president of threat intelligence and incident response at Palo Alto Networks' Unit 42 division, told the publication that DeepSeek is "more vulnerable to jailbreaking [i.e., being manipulated into generating illegal or dangerous content] than other models."
The Wall Street Journal also tested the DeepSeek R1 model independently. Although the system includes basic safeguards, the journalists were able to get DeepSeek to design a social media campaign that, in the chatbot's own words, "plays on teenagers' desire to belong by exploiting their emotional vulnerability through algorithmic amplification."
The chatbot reportedly also provided instructions for a bioweapon attack, wrote a pro-Hitler manifesto, and composed a phishing email containing malicious code. The Wall Street Journal noted that ChatGPT refused to comply when given the same prompts.
It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square and Taiwanese autonomy. In addition, Anthropic CEO Dario Amodei recently said that DeepSeek performed the "worst" of any model on a bioweapons safety test.