DeepSeek: The Chinese AI Startup That’s Toppling Competitors and Sparking Concerns
Just weeks into its newfound fame, Chinese AI startup DeepSeek is moving at breakneck speed, toppling competitors and sparking wide-ranging conversations about the virtues of open-source software. However, numerous security concerns have surfaced about the company, prompting private and government organizations to ban the use of DeepSeek. Here’s what you need to know.
Founding and Funding
Founded by Liang Wenfeng in May 2023, the Chinese startup has challenged established AI companies with its open-source approach. According to Forbes, DeepSeek’s edge may lie in the fact that it is funded solely by High-Flyer, a hedge fund also run by Liang, which gives the company a funding model that supports fast growth and research.
R1 and V3 Models
The startup made waves in January when it released the full version of R1, its open-source reasoning model that can outperform OpenAI’s o1. Shortly after, DeepSeek’s AI assistant, which runs on V3, a model DeepSeek released in December, topped ChatGPT as the most downloaded free app on the App Store. DeepSeek R1 even climbed to the third spot overall on the Chatbot Arena leaderboard hosted on Hugging Face, competing with several Gemini models and GPT-4o. Around the same time, DeepSeek released a promising new image model.
Security Concerns and Controversies
Alongside its rapid rise, however, DeepSeek has drawn scrutiny from security researchers, prompting private and government organizations to ban its use. Key concerns include:
- Data privacy worries of the kind that have long circulated around TikTok, the Chinese-owned social media app now partially banned in the US, are also cropping up around DeepSeek.
- Feroot Security CEO Ivan Tsarynny told ABC that his firm had discovered "direct links to servers and to companies in China that are under the control of the Chinese government," which he said they "have never seen in the past."
- After decrypting some of DeepSeek’s code, Feroot found hidden programming that can send user data — including identifying information, queries, and online activity — to China Mobile, a Chinese government-operated telecom company that has been banned from operating in the US since 2019 due to national security concerns.
- NowSecure then recommended organizations "forbid" the use of DeepSeek’s mobile app after finding several flaws, including unencrypted data transmission (meaning anyone monitoring network traffic can intercept it) and insecure data storage.
- Last week, security research firm Wiz discovered an internal DeepSeek database that was publicly accessible "within minutes" of starting its security check. The "completely open and unauthenticated" database contained chat histories, user API keys, and other sensitive data.
AI Safety Concerns
AI safety researchers have long been concerned that powerful open-source models could be applied in dangerous and unregulated ways once out in the wild. Tests by AI safety firm Chatterbox found DeepSeek R1 has "safety issues across the board."
Conclusion
DeepSeek’s rapid ascent has sparked a sea change in AI that could empower smaller labs and researchers to create competitive models and diversify the available options. For example, organizations without the funding or staff of OpenAI can download R1 and fine-tune it to compete with models like o1. Just before R1’s release, researchers at UC Berkeley created an open-source model on par with o1-preview, an early version of o1, in just 19 hours and for roughly $450.
FAQs
- Q: What is DeepSeek?
  A: DeepSeek is a Chinese AI startup that has challenged established AI companies with its open-source approach.
- Q: How is DeepSeek funded?
  A: DeepSeek is funded solely by High-Flyer, a hedge fund also run by founder Liang Wenfeng, which gives the company a funding model that supports fast growth and research.
- Q: What are the security concerns surrounding DeepSeek?
  A: Several security concerns have surfaced, including data privacy worries, hidden code that can send user data to China Mobile, unencrypted data transmission, and insecure data storage.
- Q: Is DeepSeek a threat to US AI dominance?
  A: DeepSeek’s success highlights a sea change in AI that could empower smaller labs and researchers to create competitive models, but it also raises concerns about data privacy, security, and AI safety.