
Qwen

Freemium · 🇨🇳 · Hectocorn · Cash Flow Positive

The open-source AI family powering the world's fastest-growing models

79

Overall score

30

Heat score

Pricing

Free (App): $0/month
API Free Quota: $0 (new users, Singapore region, 90-day trial)
API Pay-As-You-Go – Qwen-Flash: from $0.065/1M input tokens
API Pay-As-You-Go – Qwen-Plus: $1.56/1M input tokens, $4.60/1M output tokens
API Pay-As-You-Go – Qwen-Max: $2.08/1M input tokens, $8.32/1M output tokens
Enterprise / Alibaba Cloud: custom pricing
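
The pay-as-you-go rates above can be turned into a back-of-envelope request-cost estimate. The sketch below uses only the Qwen-Plus and Qwen-Max rates from this table (the table lists an input rate only for Qwen-Flash, so it is omitted); rates change over time, so treat this as illustrative arithmetic, not billing-accurate math.

```python
# Rough per-request cost estimator for the pay-as-you-go tiers listed above.
# Rates are USD per 1M tokens, taken from this pricing table.

RATES = {
    "qwen-plus": (1.56, 4.60),  # (input, output) per 1M tokens
    "qwen-max": (2.08, 8.32),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return the estimated USD cost of a single request."""
    in_rate, out_rate = RATES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Example: a 100K-token prompt with a 20K-token completion on Qwen-Plus.
print(round(estimate_cost("qwen-plus", 100_000, 20_000), 3))  # 0.248
```

At these rates, a long-context Qwen-Plus call is still well under a dollar, which is the practical upshot of the tiering.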

Technical Specs

Inputs

Text Prompt, Code, Image, Audio, Video, Document, URL, System Prompt, Function Call

Outputs

Generated Text, Code, Image, Audio, Translation, Summary, Embeddings, Structured JSON, Tool Call Results, Research Report

AI Type

Multimodal

Model Architecture

MoE Transformer

Daily Prompts

N/A

Context Length

Up to 1M tokens (long-context variants)

Output Quality

Accuracy

86%

Content

83%

Reasoning

88%

Company Profile

Company

Alibaba Cloud (Tongyi Lab)

Founded

2023

HQ

Hangzhou, Zhejiang, China

Employees

N/A

Total Raised / Total Funding

N/A

Revenue

$16.26B

Valuation

$400B

ARR

N/A

CEO

Zhou Jingren

Overview

Estimated Paid Users

N/A

Current estimate

Total Earnings to Date

$16.26B

+11.11% from last month

Market Share

3.8%

Current share

Average Session

28

Per active user

Hallucination Rate

14%

Model quality signal

Growth Rate

+6.67%

Monthly active users

Burn Rate

N/A

Total expenses / years active

Paid User Gain

+30.00%

Monthly paid user trend

Profit Analysis

Net: -$3.1B

Total Loss: $3.8B

Total Profit: $0

Performance Metrics

Accuracy

86%

Context

83%

Reasoning

88%

Safety

72%

Benchmarks

MMLU (Qwen2.5-72B)

86.1%

MMLU (Qwen2-72B)

84.2%

GPQA

37.9%

HumanEval (Qwen2-72B)

64.6%

GSM8K

89.5%

MMLU-Pro

65.5%

HumanEval (Qwen2.5-Coder-32B)

92.7%

MBPP

88.2%

Qwen Models

Qwen1 (Tongyi Qianwen 1.0)

Type: Text

Description: First public Qwen model, bilingual Chinese-English, a decoder-only Transformer based on the LLaMA architecture. Sizes: 7B, 14B, 72B.

Context Length: 8K tokens (32K for the 72B)

Architecture: Dense decoder-only Transformer

Qwen1.5

Type: Text

Description: Full family from 0.5B to 110B with improved chat alignment and multilingual support across 29 languages. Apache 2.0 licensed.

Context Length: 32K tokens

Architecture: Dense decoder-only Transformer

Qwen2-72B

Type: Text

Description: Flagship of Qwen2 generation; outperformed Llama-3-70B on MMLU (84.2), GPQA, HumanEval. Apache 2.0 license, became top open-source model at scale.

Context Length: 128K tokens

Architecture: Dense decoder-only Transformer

Qwen2.5-72B

Type: Text

Description: Improved knowledge (MMLU 86.1), math (MATH 83.1), and code. Approaches Llama-3-405B performance with 1/5 the parameters.

Context Length: 128K tokens

Architecture: Dense decoder-only Transformer

Qwen2.5-Coder-32B

Type: Code

Description: Specialized coding model trained on 5.5T tokens, achieves HumanEval 92.7% pass@1, leading open-source coding benchmarks.

Context Length: 128K tokens

Architecture: Dense decoder-only Transformer

Funding Rounds & Investors

Total Funding

N/A

Rounds

0

No funding rounds available.

Founders/Team

LJ

Lin Junyang

Technical Lead, Qwen (2023–2026)

ZJ

Zhou Jingren

CTO, Alibaba Cloud; Head, Tongyi Lab

LD

Liu Dayiheng

Pre-training Lead; Post-training & Coding Lead (from 2026)

Direct competitors

No direct competitors available.

Change Log / Major Updates

2025 · Jan 29

Qwen2.5-Max launched as the proprietary top-tier API model, available in Qwen Chat. Positioned as Alibaba's commercial answer to GPT-4o and Claude 3.5 Sonnet, with strong reasoning and coding performance.

2025 · Apr 28

Qwen3 released with both dense (0.6B–32B) and MoE (30B-A3B, 235B-A22B) models, trained on 36 trillion tokens across 119 languages. Key innovation: unified thinking/non-thinking mode within a single model via /think and /no_think flags, eliminating the need to switch between chat and reasoning models.
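The /think and /no_think switches described above can be driven from client code by appending the flag to the user turn. A minimal sketch, assuming the soft-switch strings work as documented in this release note and that messages follow the common OpenAI-style chat format:

```python
# Minimal sketch of toggling Qwen3's per-message thinking mode.
# "/think" and "/no_think" are the soft switches described in the Qwen3
# release notes; the message dict follows the OpenAI-style chat format.

def make_user_message(text, thinking=None):
    """Build a chat message, optionally forcing thinking mode on or off.

    thinking=True  -> append "/think" (force reasoning mode)
    thinking=False -> append "/no_think" (force fast chat mode)
    thinking=None  -> leave the model's default behavior
    """
    if thinking is True:
        text = f"{text} /think"
    elif thinking is False:
        text = f"{text} /no_think"
    return {"role": "user", "content": text}

hard = make_user_message("Prove that sqrt(2) is irrational.", thinking=True)
fast = make_user_message("What's the capital of France?", thinking=False)
```

This is what "unified thinking/non-thinking mode" means in practice: one deployed model, with the reasoning budget chosen per message rather than per model.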

2025 · Nov 17

The consumer-facing Qwen App launched on iOS, Android, Web, and PC, powered by Qwen3. It surpassed 10 million downloads in the first week, reaching 30 million MAU by December 2025 and 300 million MAU across all platforms by February 2026 — one of the fastest AI consumer app launches in history.

Compliance, Integrations & Support

Industry: Not specified

Compliances: Not specified

Integrations: Hugging Face, Alibaba Cloud Model Studio, GitHub, ModelScope, OpenRouter, Groq, DeepInfra, Together AI, LangChain, LlamaIndex, vLLM, Ollama, OpenAI-compatible API, Taobao, Alibaba Cloud PAI, Quark Browser

Support: email, help center, community forum, enterprise support, documentation

Target audience: AI Developers, Enterprise Engineers, Data Scientists, Researchers, Startups, Students, Content Creators, Multilingual Users, Cloud Architects

Supported languages: English, Chinese, Spanish, French, German, Japanese, Korean, Arabic, Portuguese, Russian, Italian, Dutch, Polish, Turkish, Vietnamese, Thai, Indonesian, Malay, Hindi, Bengali, Swahili, Ukrainian, Czech, Swedish, Norwegian, Danish, Finnish, Romanian, Hungarian, Greek

Qwen Acquisitions

No acquisition records available.


Reviews & Rating

0 reviews

No reviews yet

Be the first to share how Qwen performs for your workflow.


More About Qwen

In April 2023, when Alibaba launched Tongyi Qianwen — the model that would become Qwen — very few predicted it would evolve into the world's most downloaded open-source AI family within two years. The bet on open-source at a moment when Chinese tech giants were debating closed vs. open strategies turned out to be the defining decision of Alibaba's AI era.

From Side Project to Strategic Crown Jewel

Qwen (short for Tongyi Qianwen, 通义千问) began as a research initiative under Alibaba Cloud's DAMO Academy before being spun into the newly formed Tongyi Lab in 2023, led by CTO Zhou Jingren. The models were initially closed-source, but by August 2023 the team released Qwen-7B weights publicly — a move that lit the fuse. Developer adoption exploded. By mid-2024 the family had surpassed 70,000 derivative models on Hugging Face; by January 2026, Qwen surpassed one billion cumulative downloads, a milestone no Chinese AI lab had reached before.

What Makes Qwen Different

Where most frontier labs release one or two flagship models, the Qwen team has released 400+ models spanning text, vision, audio, video, code, math, and embeddings, all under the Apache 2.0 license. The Qwen3 generation introduced a unified thinking/non-thinking mode, eliminating the need to switch between a reasoning model and a chat model. The flagship Qwen3-235B-A22B is a Mixture-of-Experts model that activates only 22B of its 235B total parameters per token, matching OpenAI o1 and Gemini 2.5 Pro on several reasoning benchmarks while being far cheaper to serve. Context windows stretch to 1 million tokens on long-context variants.

  • 119 languages supported across the Qwen3 family — up from 29 in Qwen2.5
  • Qwen2.5-Coder-32B achieves HumanEval pass@1 of 92.7%, leading most open-source coding models
  • Qwen3-Embedding-8B claimed top MTEB multilingual mean score (70.58) at launch
  • Airbnb's customer service chatbot runs on Qwen models — one of several high-profile enterprise deployments

The consumer-facing Qwen App, launched November 17, 2025, attracted 10 million downloads in its first week and reached 300 million monthly active users across all platforms by February 2026. During the Lunar New Year, users placed nearly 200 million single-sentence task orders through the app — from food delivery to payment processing — signaling a shift from chatbot to autonomous agent for daily life.

"Silicon Valley doesn't want to admit it, but the symptoms are obvious: we're witnessing a full-blown Qwen panic." — Tulsi Soni, marketing analyst, November 2025

Qwen FAQs

What is Qwen?

Qwen (Tongyi Qianwen) is a family of large language models and multimodal AI models developed by Alibaba Cloud's Tongyi Lab. The family includes text, vision, audio, code, math, and embedding models ranging from 0.6B to 235B parameters, most released under the Apache 2.0 open-source license.

Is Qwen really open source?

Most Qwen models are open-weight under Apache 2.0, meaning the model weights are freely downloadable and commercially usable. However, training code and training data are not fully documented, so they technically fall short of full Open Source Initiative definitions. The 72B+ variants may carry separate license terms.

How do I access Qwen models?

You can access Qwen via: (1) the free Qwen App (chat.qwen.ai on the web, plus iOS and Android apps); (2) the Alibaba Cloud Model Studio API (pay-as-you-go, with a free new-user quota in the Singapore region); (3) third-party providers such as OpenRouter, Groq, DeepInfra, and Together AI; or (4) downloading model weights from Hugging Face or ModelScope for local self-hosting.
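For option (2), Model Studio exposes an OpenAI-compatible endpoint, so a standard chat-completions request body works unchanged. A sketch, with the caveat that the base URL and model name below are assumptions based on Alibaba Cloud's published Singapore-region values and should be verified against the Model Studio docs:

```python
# Sketch of calling Qwen through Model Studio's OpenAI-compatible endpoint.
# BASE_URL and the "qwen-plus" model name are assumptions — confirm them in
# the Alibaba Cloud Model Studio documentation for your region.

BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_chat_request(prompt, model="qwen-plus"):
    """Assemble the JSON body for a /chat/completions call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_chat_request("Summarize the Qwen model family in one sentence.")

# With the official openai client (pip install openai), the call would look like:
#   client = OpenAI(api_key=os.environ["DASHSCOPE_API_KEY"], base_url=BASE_URL)
#   resp = client.chat.completions.create(**body)
```

Because the endpoint is OpenAI-compatible, existing tooling built on the OpenAI client (LangChain, LlamaIndex, etc.) typically only needs the base URL and API key swapped.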

What is the difference between Qwen-Max, Qwen-Plus, and Qwen-Flash?

Qwen-Max is the top-tier flagship optimized for complex reasoning, agent tasks, and high accuracy — the most expensive. Qwen-Plus is a balanced mid-tier model suitable for moderately complex tasks at lower cost. Qwen-Flash (formerly Turbo) is the fastest and cheapest, designed for simple, high-volume workloads. All support thinking and non-thinking modes in Qwen3.

What context window does Qwen support?

Context windows vary by model. Standard Qwen3 models support 128K tokens. The Qwen3-235B-A22B flagship supports up to 262K tokens. Long-context variants like Qwen2.5-1M and Qwen3-2507 support up to 1 million tokens, enabling very long document and codebase processing.
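Given how much the window varies by model, a quick pre-flight check helps before sending a large document. The sketch below uses the rough heuristic of ~4 characters per token for English text; real counts require the model's tokenizer, so treat this as an estimate only. Window sizes come from the answer above; the dictionary keys are illustrative labels, not official model IDs.

```python
# Quick sanity check: will a document fit a given Qwen context window?
# Uses a crude ~4-chars-per-token heuristic; use the actual tokenizer
# for precise counts. Window sizes are taken from the FAQ answer above.

CONTEXT_WINDOWS = {
    "qwen3-standard": 128_000,   # standard Qwen3 models
    "qwen3-235b-a22b": 262_000,  # flagship
    "qwen2.5-1m": 1_000_000,     # long-context variants
}

def fits_in_context(text, model, reserve_for_output=4_096):
    """Estimate whether `text` plus an output budget fits the model's window."""
    est_tokens = len(text) // 4  # rough chars-per-token heuristic
    return est_tokens + reserve_for_output <= CONTEXT_WINDOWS[model]
```

For example, a ~400K-character document fits comfortably in a 128K-token window under this heuristic, while a ~600K-character one would need a long-context variant.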