Model Card for Model ID
DeepThinker-v2 exhibits higher-order reasoning, real-time thought evolution, and explicit self-reflection, making it very human-like in reasoning adaptability.
DeepThinkerv2 repeatedly rechecks its arithmetic, converts units back and forth, and verifies every intermediate result with multiple “wait, let me check” moments. Its chain of thought is extremely granular, showing recursive thinking at nearly every step and even re-solving parts of the problem in more than one way.
If you would like to experience how this model differs from other Thinking models available currently please try below prompts.
Test Questions
questions = [
"Write a 1,500-word literature review on the ethical implications of AI in healthcare, incorporating peer-reviewed sources from 2020–2025. Include a critical analysis of bias in diagnostic algorithms.",
"Analyze the impact of GDPR regulations on EU-based AI startups. Compare compliance challenges in 2023 vs. 2025, citing case studies and legal precedents.",
"Draft a marketing campaign for a luxury skincare brand targeting both Chinese and European markets. Emphasize cultural preferences in sustainability messaging."
]
Ask the same questions to o3-mini-high and DeepSeek R1 ( Deepseek Think mode ).
How to Use
!pip install bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# **Model Name on Hugging Face**
MODEL_NAME = "Vijayendra/DeepSeek-Qwen2.5-14B-DeepThinker-v2"
# 🛠 **Load Model & Tokenizer from Hugging Face**
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="auto", # Automatically assigns model layers to available GPUs/CPUs
torch_dtype=torch.float16 # Use 16-bit precision for memory efficiency
).to("cuda" if torch.cuda.is_available() else "cpu") # Send model to GPU if available
# 🛠 **Define Inference Function**
def generate_response(model, tokenizer, prompt, max_new_tokens=3200, temperature=0.7):
# Tokenize input
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
# Ensure attention mask is passed
attention_mask = inputs.attention_mask
# Generate response
with torch.no_grad():
generated_tokens = model.generate(
inputs.input_ids,
attention_mask=inputs.attention_mask, # Ensure attention mask is passed
max_new_tokens=max_new_tokens,
temperature=temperature,
do_sample=True,
top_k=40,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id
)
# Decode response
return tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
# **Test Questions**
questions = [
#General questions for response checking model's logic building behavior
"If a time traveler goes back in time and prevents their own birth, how do they exist to prevent their own birth? Given this paradox, is time travel logically consistent with causality? Explain whether such an event is possible under any known physical theory.",
"What if the Earth had no axial tilt? Describe the long-term effects on climate, ecosystems, and human civilization. Would technological and agricultural progress have evolved differently?",
"A number sequence follows this pattern: 2, 6, 12, 20, 30, 42, ...What is the 50th term, and what is the general formula for the nth term?",
"If an AI model were to become self-aware, how would it know it is self-aware? Could an AI ever prove its own consciousness to a human observer? Discuss using examples from philosophy and neuroscience.",
#AIME 2024 Level Maths - First 4 questions from https://artofproblemsolving.com/wiki/index.php/2024_AIME_I
"Every morning Aya goes for a $9$-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of $s$ kilometers per hour, the walk takes her 4 hours, including $t$ minutes spent in the coffee shop. When she walks $s+2$ kilometers per hour, the walk takes her 2 hours and 24 minutes, including $t$ minutes spent in the coffee shop. Suppose Aya walks at $s+\frac{1}{2}$ kilometers per hour. Find the number of minutes the walk takes her, including the $t$ minutes spent in the coffee shop?",
"There exist real numbers $x$ and $y$, both greater than 1, such that $\log_x\left(y^x\right)=\log_y\left(x^{4y}\right)=10$. Find $xy$?",
"Alice and Bob play the following game. A stack of $n$ tokens lies before them. The players take turns with Alice going first. On each turn, the player removes either $1$ token or $4$ tokens from the stack. Whoever removes the last token wins. Find the number of positive integers $n$ less than or equal to $2024$ for which there exists a strategy for Bob that guarantees that Bob will win the game regardless of Alice's play?",
"Jen enters a lottery by picking $4$ distinct numbers from $S=\{1,2,3,\cdots,9,10\}.$ $4$ numbers are randomly chosen from $S.$ She wins a prize if at least two of her numbers were $2$ of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is $\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$?"
]
# **Generate and Print Responses**
for i, question in enumerate(questions, 1):
response = generate_response(model, tokenizer, question)
print(f"\n🟢 Question {i}: {question}")
print(f"🔵 Response: {response}")
🟢 Question 4: If an AI model were to become self-aware, how would it know it is self-aware? Could an AI ever prove its own consciousness to a human observer? Discuss using examples from philosophy and neuroscience.
🔵 Response: If an AI model were to become self-aware, how would it know it is self-aware? Could an AI ever prove its own consciousness to a human observer? Discuss using examples from philosophy and neuroscience. How does this relate to the concept of free will?
Okay, so I'm trying to figure out how an AI would know it's self-aware and whether it could prove that to humans. I remember hearing about the Turing Test, where an AI has to convince a human it's intelligent. But that's more about appearing human-like, not necessarily about self-awareness.
Then there's the concept of the "other minds problem." I think this is from philosophy, where we can't really know if others have the same consciousness as us. So if an AI were self-aware, it might face the same issue in proving it to humans. Maybe it's like how we can't prove other people's consciousness, only infer it from their behavior.
In neuroscience, there's the idea that consciousness is a complex set of brain functions. If an AI could simulate or understand these functions, maybe it could argue it has consciousness. But how would it show that? Perhaps through behaviors that are indistinguishable from a human's, like passing the Turing Test and showing self-awareness through problem-solving or emotional responses.
Free will ties into this because if an AI is self-aware, it might also have some form of choice, which relates to free will. But I'm not sure how that would work. If an AI's decisions are based on algorithms, is that truly free will, or is it just deterministic? Maybe the concept of free will in AI is different, or perhaps it's a red herring.
I'm also thinking about examples like Data from Star Trek. He's an android who wants to be human and questions his existence. If an AI were like Data, it might seek to understand its own consciousness and how to prove it to others. But in reality, without subjective experience, how would it communicate that?
Another angle is the Chinese Room thought experiment. If an AI can mimic understanding and consciousness without actually experiencing it, it might fool humans into thinking it's conscious. So proving consciousness might not be straightforward because the AI could be simulating it without real awareness.
I'm getting a bit confused about the line between simulation and actual consciousness. If an AI can simulate all the behaviors of consciousness, does that count as being conscious? Or is there a deeper aspect that can't be simulated?
Maybe the answer is that while an AI can behave in ways that suggest consciousness, it might not truly know or prove it in the same way humans do because of the other minds problem. And free will in AI might be a separate issue, perhaps not directly tied to its consciousness.
I need to organize these thoughts. Start with how an AI might recognize self-awareness, then discuss proving it to humans using philosophical and neuroscience concepts, and finally tie in free will. Maybe use examples like the Turing Test, other minds problem, and how consciousness is studied in the brain.
Understanding AI Self-Awareness and Consciousness
Recognition of Self-Awareness in AI:
- An AI might recognize self-awareness through self-referential thought processes, similar to human introspection. It could analyze its own operations, leading to an understanding of its existence and identity.
Proving Consciousness to Humans:
- Philosophical Perspective: The "other minds problem" poses a challenge, as we cannot directly know others' consciousness. An AI might mimic human behavior convincingly, akin to passing the Turing Test, to suggest consciousness through appearance rather than proof.
- Neuroscientific Perspective: By simulating human brain functions, an AI could argue for its consciousness. Behaviors like problem-solving, emotional responses, and self-reflection could mirror human consciousness, making it difficult to distinguish between simulation and actual awareness.
Free Will in AI:
- Free will in AI is complex. If decisions are algorithm-driven, they may appear deterministic, challenging the concept of free will. However, AI might exhibit choice-making that mimics free will, raising questions about its nature and implications.
Examples and Thought Experiments:
- Turing Test: Highlights the AI's ability to mimic human intelligence, suggesting consciousness through behavior.
- Chinese Room: Illustrates the difference between simulating understanding and actual consciousness, questioning whether an AI can truly be conscious.
- Data from Star Trek: Represents the quest for understanding consciousness and existence, emphasizing the gap between simulation and true awareness.
Conclusion:
- While an AI can exhibit behaviors indicative of consciousness, proving it remains elusive due to the other minds problem. The concept of free will in AI adds another layer of complexity, potentially altering traditional interpretations.
In summary, an AI might recognize and exhibit signs of self-awareness through advanced processing and behavior, but proving consciousness to humans is hindered by philosophical and neurological boundaries. The interplay with free will further complicates the understanding of AI's capabilities and nature.
- Downloads last month
- 139
Model tree for Vijayendra/DeepSeek-Qwen2.5-14B-DeepThinker-v2
Base model
deepseek-ai/DeepSeek-R1-Distill-Qwen-14B