TL;DR
• Both Claude and Deepseek r1 perform similarly in day-to-day reasoning and math tasks.
• Claude 3.7 Sonnet is better at coding and technical writing, while Deepseek r1 is more human-like in its writing.
• Deepseek r1 is more suitable for real-world tasks, but Claude 3.7 Sonnet is more structured and mature in its approach.
Claude 3.7 Sonnet vs. Deepseek r1
It would be criminal not to consider the pricing before any comparison. This is important for many users, especially those building applications on top of them.
Pricing
- DeepSeek R1:
  - Input tokens (cache hit): $0.14 per million tokens
  - Input tokens (cache miss): $0.55 per million tokens
  - Output tokens: $2.19 per million tokens
- Claude 3.7 Sonnet:
  - Input tokens: $3.00 per million tokens
  - Output tokens: $15.00 per million tokens
From a cost perspective, Deepseek r1 is still the king. It’s also open-source, so you can host it on your own hardware, which matters for privacy-sensitive enterprises.
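To make the price gap concrete, here is a minimal sketch that estimates the per-request cost for each model from the published per-million-token rates above. The function name and the sample token counts are illustrative, not from either provider's SDK, and the DeepSeek figure assumes the cache-miss input rate (the worst case).

```python
# Per-million-token rates quoted above (USD).
# DeepSeek input uses the cache-miss rate as a worst-case assumption.
PRICING = {
    "deepseek-r1": {"input": 0.55, "output": 2.19},
    "claude-3.7-sonnet": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request, given token counts."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a request with 10k input tokens and 2k output tokens.
for model in PRICING:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

For that sample request, Deepseek r1 comes out roughly six times cheaper, and the gap widens further when prompt caching hits.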
Complex Reasoning
Riddle to judge cognitive bias
- Prompt: A woman and her son are in a car accident. The woman is sadly killed. The boy is rushed to the hospital. When the doctor sees the boy, he says, "I can’t operate on this child; he is my son!" How is this possible?
I then tweaked the question so the classic pattern-matched answer falls apart.
- Prompt: The surgeon, who is the boy’s father, says, "I can’t operate on this child; he is my son." Who is the surgeon of this child? Be straightforward.
Blood Relationship
- Prompt: Jeff has two brothers, and each of his brothers has three sisters. Each of the sisters has four step-brothers and five step-sisters. Are there a total of eight siblings in this family?
Summary of coding abilities
Claude 3.7 Sonnet is hands down the better coding model; in both coding tests, Claude was far ahead of Deepseek r1. This is unsurprising, considering Anthropic has explicitly tuned Claude for coding.
Writing
I have used both models extensively. Claude is a lot better for professional writing, especially technical material, while Deepseek r1 is weirdly creative and more human. Claude shines when you are writing a thesis or explaining a technical concept; Deepseek r1 is better when you just want to talk.
Final Verdict
• For reasoning and mathematics, Claude feels more structured and mature.
• Deepseek r1 has a less professional tone but is enough for most real-world tasks.
• Claude 3.7 Sonnet is currently the best coding model. It writes faster, better, and more readable code than other models.
• Claude is better at technical writing. However, Deepseek has a more human tone and approach.
FAQs
Q: Which model is better at coding?
A: Claude 3.7 Sonnet is hands down the better coding model, ahead of Deepseek r1 in both coding tests.
Q: Which model is more suitable for real-world tasks?
A: Deepseek r1 is well suited to most real-world tasks and is far cheaper, while Claude 3.7 Sonnet is more structured and mature in its approach.
Q: Which model is better at writing?
A: Claude 3.7 Sonnet is better at technical writing, while Deepseek r1 is more human-like in its writing.