
The open-source AI family powering the world's fastest-growing models
79
Overall score
30
Heat score
Inputs
Text Prompt, Code, Image, Audio, Video, Document, URL, System Prompt, Function Call
Outputs
Generated Text, Code, Image, Audio, Translation, Summary, Embeddings, Structured JSON, Tool Call Results, Research Report
AI Type
Multimodal
Model Architecture
MoE Transformer
Daily Prompts
N/A
Context Length
1M
Accuracy
86%
Context
83%
Reasoning
88%
Company
Alibaba Cloud (Tongyi Lab)
Founded
2023
HQ
Hangzhou, Zhejiang, China
Employees
N/A
Total Raised / Total Funding
N/A
Revenue
$16.26B
Valuation
$400B
ARR
N/A
CEO
Eddie Wu (Wu Yongming)
Estimated Paid Users
N/A
Current estimate
Total Earnings Till Date
$16.26B
+11.11% from last month
Market Share
3.8%
Current share
Average Session
28
Per active user
Hallucination Rate
14%
Model quality signal
Growth Rate
+6.67%
Monthly active users
Burn Rate
N/A
Total expenses / years active
Paid User Gain
+30.00%
Monthly paid user trend
-$3.1B
Total Loss
$3.8B
Total Profit
$0
Accuracy
86%
Context
83%
Reasoning
88%
Safety
72%
MMLU (Qwen2.5-72B)
86.1%
MMLU (Qwen2-72B)
84.2%
GPQA (Qwen2-72B)
37.9%
HumanEval (Qwen2-72B)
64.6%
GSM8K (Qwen2-72B)
89.5%
MMLU-Pro
65.5%
HumanEval (Qwen2.5-Coder)
92.7%
MBPP (Qwen2.5-Coder)
88.2%
Type: Text
Description: First public Qwen model, bilingual Chinese-English, decoder-only Transformer based on the Llama architecture. Sizes: 7B, 14B, 72B.
Context Length: 8K–32K tokens
Architecture: Dense Transformer (decoder-only)
Type: Text
Description: Full family from 0.5B to 110B with improved chat alignment and multilingual support across 29 languages. Apache 2.0 licensed.
Context Length: 32K tokens
Architecture: Dense Transformer
Type: Text
Description: Flagship of the Qwen2 generation; outperformed Llama-3-70B on MMLU (84.2), GPQA, and HumanEval. Apache 2.0 licensed; became the top open-source model at its scale.
Context Length: 128K tokens
Architecture: Dense Transformer
Type: Text
Description: Improved knowledge (MMLU 86.1), math (MATH 83.1), and code. Approaches Llama-3-405B performance with roughly one-fifth of the parameters.
Context Length: 128K tokens
Architecture: Dense Transformer
Type: Code
Description: Specialized coding model trained on 5.5T tokens; achieves 92.7% pass@1 on HumanEval, leading open-source coding benchmarks.
Context Length: 128K tokens
Architecture: Dense Transformer
Total Funding
N/A
Rounds
0
No funding rounds available.
Lin Junyang
Technical Lead, Qwen (2023–2026)
Zhou Jingren
CTO, Alibaba Cloud; Head, Tongyi Lab
Liu Dayiheng
Pre-training Lead; Post-training & Coding Lead (from 2026)
No direct competitors available.
2025 · Jan 29
Qwen2.5-Max launched as the proprietary top-tier API model, available in Qwen Chat. Positioned as Alibaba's commercial answer to GPT-4o and Claude 3.5 Sonnet, with strong reasoning and coding performance.
2025 · Apr 28
Qwen3 released with both dense (0.6B–32B) and MoE (30B-A3B, 235B-A22B) models, trained on 36 trillion tokens across 119 languages. Key innovation: unified thinking/non-thinking mode within a single model via /think and /no_think flags, eliminating the need to switch between chat and reasoning models (a short code sketch of this toggle follows the timeline).
2025 · Nov 17
The consumer-facing Qwen App launched on iOS, Android, Web, and PC, powered by Qwen3. Surpassed 10 million downloads in the first week, reaching 30 million MAU by December 2025 and 300 million MAU across all platforms by February 2026 — one of the fastest AI consumer app launches in history.
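The unified thinking/non-thinking mode mentioned in the Qwen3 entry above can be driven directly from code. Below is a minimal sketch, assuming the Hugging Face transformers stack and an illustrative Qwen3 checkpoint; the enable_thinking template flag and the /think and /no_think soft switches come from Qwen3's published usage notes, so verify them against the model card you actually deploy.

# Minimal sketch, assuming a Hugging Face transformers setup and a Qwen3 checkpoint
# (the repo id below is illustrative; other Qwen3 models expose the same template flag).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]

# Hard switch: the Qwen3 chat template accepts enable_thinking, which decides whether
# the model opens with a <think>...</think> reasoning trace before answering.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # set True to allow the reasoning trace
)

# Soft switch: appending "/think" or "/no_think" to a user turn overrides the default
# for that turn, e.g. messages[-1]["content"] += " /no_think".

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))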
Industry: Not specified
Compliance: Not specified
Integrations: Hugging Face, Alibaba Cloud Model Studio, GitHub, ModelScope, OpenRouter, Groq, DeepInfra, Together AI, LangChain, LlamaIndex, vLLM, Ollama, OpenAI-compatible API, Taobao, Alibaba Cloud PAI, Quark Browser (see the self-hosting sketch after this block)
Support: email, help center, community forum, enterprise support, documentation
Target audience: AI Developers, Enterprise Engineers, Data Scientists, Researchers, Startups, Students, Content Creators, Multilingual Users, Cloud Architects
Supported languages: English, Chinese, Spanish, French, German, Japanese, Korean, Arabic, Portuguese, Russian, Italian, Dutch, Polish, Turkish, Vietnamese, Thai, Indonesian, Malay, Hindi, Bengali, Swahili, Ukrainian, Czech, Swedish, Norwegian, Danish, Finnish, Romanian, Hungarian, Greek
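Several of the integrations listed above (vLLM, Ollama, the OpenAI-compatible API) cover the self-hosting path. The sketch below is one minimal way to do it, assuming vLLM and the openai client are installed; the model id and port are illustrative, not the only supported setup.

# Minimal self-hosting sketch, assuming vLLM and the openai Python client are installed.
# Model id, port, and API key value are illustrative.
#
# Step 1 (shell): start an OpenAI-compatible server for a downloaded Qwen checkpoint:
#   vllm serve Qwen/Qwen3-8B --port 8000
#
# Step 2 (Python): talk to it with the standard OpenAI client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # local vLLM servers accept any placeholder key by default
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-8B",
    messages=[{"role": "user", "content": "List three things the Qwen family is known for."}],
    temperature=0.7,
)
print(response.choices[0].message.content)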
No acquisition records available.
0 reviews
No reviews yet
0.0
Accuracy
0.0
Ease of Use
0.0
Output Quality
0.0
Security
0.0
No social feed available for this tool yet.
Qwen (Tongyi Qianwen) is a family of large language models and multimodal AI models developed by Alibaba Cloud's Tongyi Lab. The family includes text, vision, audio, code, math, and embedding models ranging from 0.6B to 235B parameters, most released under the Apache 2.0 open-source license.
Most Qwen models are open-weight under Apache 2.0, meaning the weights can be freely downloaded and used commercially. However, the training code and training data are not fully released, so the models technically fall short of the Open Source Initiative's definition of open source. The 72B+ variants may carry separate license terms.
You can access Qwen via: (1) the free Qwen App at chat.qwen.ai or iOS/Android; (2) the Alibaba Cloud Model Studio API (pay-as-you-go with a free new-user quota in the Singapore region); (3) third-party providers like OpenRouter, Groq, DeepInfra, and Together AI; or (4) by downloading model weights from Hugging Face or ModelScope for local self-hosting.
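For option (2), Model Studio exposes an OpenAI-compatible endpoint, so the standard openai client works. The following is a minimal sketch, assuming the international (Singapore) base URL, the qwen-plus model name, and a DASHSCOPE_API_KEY environment variable holding your Model Studio key; confirm the URL and model id against the Model Studio documentation for your region.

# Minimal sketch of calling Qwen through Alibaba Cloud Model Studio's OpenAI-compatible
# endpoint. The base URL and model name are assumptions; check them in the Model Studio
# docs for your account and region.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # Model Studio API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-plus",  # or "qwen-max" / "qwen-flash", per the tier comparison in the next answer
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the difference between dense and MoE models?"},
    ],
)
print(response.choices[0].message.content)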
Qwen-Max is the top-tier flagship optimized for complex reasoning, agent tasks, and high accuracy — the most expensive. Qwen-Plus is a balanced mid-tier model suitable for moderately complex tasks at lower cost. Qwen-Flash (formerly Turbo) is the fastest and cheapest, designed for simple, high-volume workloads. All support thinking and non-thinking modes in Qwen3.
Context windows vary by model. Standard Qwen3 models support 128K tokens. The Qwen3-235B-A22B flagship supports up to 262K tokens. Long-context variants like Qwen2.5-1M and Qwen3-2507 support up to 1 million tokens, enabling very long document and codebase processing.
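Because the limits differ this much between variants, it is worth checking prompt size before sending a request. The sketch below is a rough illustration, assuming the Hugging Face tokenizers for a few example checkpoints and hard-coding limits that mirror the figures quoted above; a real deployment should read the limit from its serving configuration instead.

# Rough sketch: count prompt tokens with the model's own tokenizer and compare them
# against the context window quoted above. Repo ids and limits are illustrative.
from transformers import AutoTokenizer

# Context limits mirroring the figures in the answer above (in tokens).
CONTEXT_LIMITS = {
    "Qwen/Qwen3-8B": 128_000,                  # standard Qwen3
    "Qwen/Qwen3-235B-A22B": 262_000,           # Qwen3 flagship
    "Qwen/Qwen2.5-7B-Instruct-1M": 1_000_000,  # long-context variant
}

def fits_in_context(model_id: str, prompt: str, output_budget: int = 4_096) -> bool:
    """Return True if the prompt plus a reserved output budget fits the model's window."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    prompt_tokens = len(tokenizer.encode(prompt))
    return prompt_tokens + output_budget <= CONTEXT_LIMITS[model_id]

if __name__ == "__main__":
    sample = "Summarize the following codebase: ..."
    print(fits_in_context("Qwen/Qwen3-8B", sample))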
Wikipedia: Qwen
•en.wikipedia.org
Alibaba Q2 FY2026 Earnings Press Release
•alibabagroup.com
Alibaba Q3 FY2026 Earnings: Qwen App 300M MAU
•alibabagroup.com
VentureBeat: Alibaba Qwen Leadership Shake-up March 2026
•venturebeat.com
Qwen3 Technical Report (arXiv)
•arxiv.org
Qwen2.5 Technical Report (arXiv)
•arxiv.org
Qwen2 Technical Report (arXiv)
•arxiv.org
Qwen2.5-LLM Blog
•qwenlm.github.io
South China Morning Post: Qwen App fastest growing AI app
•finance.yahoo.com
Alibaba Cloud Q2 FY2026 AI Revenue 34% growth
•constellationr.com
KR Asia: Alibaba Scrambles After Qwen Tech Lead Departure
•kr-asia.com
Alibaba Qwen App Launch Blog
•alibabacloud.com