| name | vendor_name | vendor_country | verdict | verdict_text | licence | commercial_use | training_data | origin | tags | quality_index | output_tokens_per_second | blended_price_per_m | context_tokens | last_reviewed_at | llmradar_url | slug |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Codestral 22B | Mistral AI | France | warn | EU code model trained on 80+ languages. Licensed under Mistral Non-Production License — blocked for any production or commercial deployment without a paid commercial licence. Use Codestral Mamba (Apache 2.0) if you need commercial freedom. | MNPL (non-prod) | Paid licence req. | Partial | EU | euorigin | null | null | null | null | 2026-04-15 | https://llmradar.com/models/codestral | codestral |
Command R+ | Cohere | Canada | warn | Enterprise-focused 104B model, strong at RAG and multilingual tool use. Weights are released under CC-BY-NC 4.0 — non-commercial only. Commercial deployment goes through Cohere's API (see Cohere API entry). | CC-BY-NC 4.0 | API only | Partial | Canada | null | 8 | 0 | 6 | null | 2026-04-15 | https://llmradar.com/models/command-r-plus | command-r-plus |
DBRX Instruct | Databricks | USA | warn | 132B MoE (36B active). Databricks Open Model License is bespoke — allows commercial use with acceptable-use policy and a 700M-MAU-style cap. Read the licence carefully; not Apache 2.0. | Databricks Open | With caps | Undisclosed | USA | commercial | 8 | 0 | 0 | null | 2026-04-15 | https://llmradar.com/models/dbrx | dbrx |
DeepSeek R1 | DeepSeek | China | warn | Frontier reasoning model at o1-class performance. MIT licence makes weights legally clean. Same Chinese-origin alignment/supply-chain considerations as DeepSeek V3. Distilled Qwen/Llama versions inherit their base licence. | MIT | Yes | Undisclosed | China | permissive,commercial | 27 | 0 | 2.362 | null | 2026-04-15 | https://llmradar.com/models/deepseek-r1 | deepseek-r1 |
DeepSeek V3 | DeepSeek | China | warn | MIT licence is maximally permissive. Weights are legally clean to self-host. Same Chinese-origin considerations as Qwen. | MIT | Yes | Undisclosed | China | permissive,commercial | 22 | 0 | 1.25 | null | 2026-04-15 | https://llmradar.com/models/deepseek-v3 | deepseek-v3 |
DeepSeek V3.2 | DeepSeek | China | warn | 685B successor to V3 with DeepSeek Sparse Attention for long context, scalable RL for agentic tasks. Vendor claims parity with GPT-5 (Speciale variant exceeds). MIT licence keeps weights clean; Chinese-origin considerations unchanged. | MIT | Yes | Undisclosed | China | permissive,commercial | 32 | 32.25 | 0.315 | null | 2026-04-15 | https://llmradar.com/models/deepseek-v3-2 | deepseek-v3-2 |
Falcon H1 34B | TII | UAE | warn | TII's hybrid Transformer+Mamba family that supersedes Falcon 180B. 18 languages including Arabic, strong benchmarks (MMLU 84, HumanEval 87). Licence is the Falcon-LLM License (not Apache 2.0) — commercial use permitted with attribution and acceptable-use terms; verify clauses for your deployment. | Falcon-LLM | Yes | Partial | UAE | commercial | null | null | null | null | 2026-04-16 | https://llmradar.com/models/falcon-h1-34b | falcon-h1-34b |
Gemma 4 | Google | USA | ok | Major licence shift from Gemma 2/3: Apache 2.0 across the family. 140+ languages, multimodal (text/image/audio/video on small sizes), 128K-256K context. Strong permissive default for EU deployments that need robust multilingual support. | Apache 2.0 | Yes | Partial | USA | permissive,commercial | 32 | 0 | 0 | null | 2026-04-15 | https://llmradar.com/models/gemma-4 | gemma-4 |
Gemma 4 26B A4B Instruct | Google DeepMind | United States | warn | Based on published licence terms, Gemma 4 26B A4B ships under pure Apache 2.0 with no prohibited-use carve-outs — a departure from prior Gemma generations. The sparse-MoE architecture (25.2B total / 3.8B active) puts it in an ambiguous zone for EU AI Act GPAI systemic-risk classification, and US origin plus image-input... | Apache 2.0 | Unrestricted | Domain-level summary | United States | permissive,commercial | 27 | 0 | 0 | null | 2026-04-17 | https://llmradar.com/models/gemma-4-26b-a4b-it | gemma-4-26b-a4b-it |
Gemma 4 31B Instruct | Google DeepMind | United States | warn | Based on published licence terms, Gemma 4 31B ships under pure Apache 2.0 — a notable break from the Gemma Terms of Use used in prior generations — with no prohibited-use carve-outs. US origin carries Schrems-II and CLOUD-Act exposure, and at 30.7B dense the model likely crosses EU AI Act GPAI systemic-risk thresholds ... | Apache 2.0 | Unrestricted | Domain-level summary | United States | permissive,commercial | 32 | 0 | 0 | null | 2026-04-17 | https://llmradar.com/models/gemma-4-31b-it | gemma-4-31b-it |
Gemma 4 E4B Instruct | Google DeepMind | United States | warn | Based on published licence terms, Gemma 4 E4B is an edge-optimised multimodal variant under pure Apache 2.0 with no prohibited-use carve-outs. Audio input (30s) and on-device deployment push GDPR biometric, AI Act emotion-recognition, and Art. 25 data-protection-by-design obligations entirely onto the integrator with n... | Apache 2.0 | Unrestricted | Domain-level summary | United States | permissive,commercial | 15 | 0 | 0 | null | 2026-04-17 | https://llmradar.com/models/gemma-4-e4b-it | gemma-4-e4b-it |
GLM-4.5 | Zhipu AI | China | warn | MoE flagship from Zhipu under MIT. Strong agentic and coding benchmarks. Same Chinese-origin alignment and geopolitical considerations as DeepSeek / Qwen. | MIT | Yes | Undisclosed | China | permissive,commercial | 26 | 43.63 | 0.843 | null | 2026-04-15 | https://llmradar.com/models/glm-4-5 | glm-4-5 |
GLM-5.1 | Zhipu AI (Z.ai) | China | warn | Per current documentation, GLM-5.1 is released under a verbatim MIT License with no use restrictions, enabling self-hosted commercial deployment. However, training-data opacity, Beijing-based publisher, and Zhipu AI's presence on the US BIS Entity List create EU AI Act transparency and supply-chain screening risks for ... | MIT | Unrestricted | Undisclosed | China (Beijing) | permissive,commercial | 44 | 47.58 | 2.15 | null | 2026-04-17 | https://llmradar.com/models/glm-5-1 | glm-5-1 |
GPT-OSS 120b | OpenAI | USA | ok | OpenAI's first major open-weights release. Apache 2.0, MoE with 5.1B active over 117B total, MXFP4 quantised to fit a single 80GB GPU. Historic shift for a vendor that built its brand on closed weights. | Apache 2.0 | Yes | Undisclosed | USA | permissive,commercial | 33 | 211.59 | 0.263 | null | 2026-04-15 | https://llmradar.com/models/gpt-oss-120b | gpt-oss-120b |
GPT-OSS 20B | OpenAI | USA | warn | Based on published licence terms, GPT-OSS 20B is released under Apache 2.0 with no field-of-use carve-outs in the licence itself; OpenAI publishes a separate non-binding 'gpt-oss usage policy' as guidance. Training-data disclosure is domain-level only, and US origin carries Schrems-II / CLOUD-Act exposure that enterpri... | Apache 2.0 | Unrestricted | Domain-level summary | United States | permissive,commercial | 25 | 293.7 | 0.1 | null | 2026-04-17 | https://llmradar.com/models/gpt-oss-20b | gpt-oss-20b |
IBM Granite 3 | IBM | USA | ok | Enterprise-focused Granite 3 family under Apache 2.0, with unusual-for-the-industry training-data disclosure. IBM provides IP indemnification when used via watsonx. Strong default for regulated enterprise pilots. | Apache 2.0 | Yes | Disclosed | USA | permissive,commercial | 7 | 408.93 | 0.085 | null | 2026-04-15 | https://llmradar.com/models/granite-3 | granite-3 |
Grok-2 | xAI | USA | warn | xAI's first open-weights release. Commercial use allowed under the Grok 2 Community License with xAI's Acceptable Use Policy. Notable restriction: weights cannot be used to train other models (distillation ban). 500GB model, needs 8 GPUs with 40GB+. | Grok 2 Community | Yes (w/ AUP) | Undisclosed | USA | commercial | 14 | 0 | 0 | null | 2026-04-16 | https://llmradar.com/models/grok-2 | grok-2 |
Jamba 1.5 Large | AI21 Labs | Israel | warn | SSM-Transformer hybrid (Mamba) with 256K context. Jamba Open Model License permits commercial use below $50M annual revenue; above that, paid licence required. Israel jurisdiction; EU adequacy decision in place. | Jamba Open | Under $50M rev. | Undisclosed | Israel | commercial | 11 | 0 | 3.5 | null | 2026-04-15 | https://llmradar.com/models/jamba-1-5-large | jamba-1-5-large |
Kimi K2 Instruct | Moonshot AI | China | warn | 1T-parameter MoE (32B active) tuned for agentic and tool-use workflows. Modified MIT permits commercial use. Same Chinese-origin alignment and supply-chain considerations as DeepSeek and Qwen. | Modified MIT | Yes | Undisclosed | China | permissive,commercial | 26 | 34.33 | 1.039 | null | 2026-04-16 | https://llmradar.com/models/kimi-k2 | kimi-k2 |
Llama 3.1 405B | Meta | USA | warn | Frontier-class 405B open model. Self-hosting requires serious compute (8×H100 minimum at FP8). Same Llama community licence caveats as the rest of the family. | Llama community | With caps | Undisclosed | USA | commercial | 17 | 31.07 | 3.688 | null | 2026-04-15 | https://llmradar.com/models/llama-3-1-405b | llama-3-1-405b |
Llama 3.1 8B Instruct | Meta Platforms | United States | warn | Per current documentation, Llama 3.1 8B Instruct is released under the Llama 3.1 Community Licence — a custom source-available licence rather than OSI open source. Commercial deployment is permitted below 700M MAU subject to the Acceptable Use Policy and attribution rules, but training-data opacity and US origin create... | Llama 3.1 Community Licence | Restricted (MAU cap + AUP) | Token count only | United States | commercial | 12 | 160 | 0.1 | null | 2026-04-17 | https://llmradar.com/models/llama-3-1-8b-instruct | llama-3-1-8b-instruct |
Llama 3.3 70B | Meta | USA | warn | Strong 70B model, near-flagship quality at smaller size. Same Llama community licence as Llama 4: 700M MAU cap, acceptable-use policy, 'Built with Llama' attribution required. | Llama community | With caps | Undisclosed | USA | commercial | 15 | 96.83 | 0.675 | null | 2026-04-15 | https://llmradar.com/models/llama-3-3-70b | llama-3-3-70b |
Llama 4 Maverick | Meta | USA | warn | Llama 4 flagship: MoE with 17B active over 128 experts, natively multimodal (text + images). Same Llama community licence as the family: 700M MAU cap, acceptable-use policy, 'Built with Llama' attribution. | Llama community | With caps | Undisclosed | USA | commercial | 18 | 116.04 | 0.5 | null | 2026-04-15 | https://llmradar.com/models/llama-4-maverick | llama-4-maverick |
Llama 4 Scout | Meta | USA | warn | Smaller Llama 4 variant: 17B active over 16 experts, multimodal. More self-hostable than Maverick. Same Llama community licence caveats. | Llama community | With caps | Undisclosed | USA | commercial | 14 | 128.16 | 0.292 | null | 2026-04-15 | https://llmradar.com/models/llama-4-scout | llama-4-scout |
MiniMax M2 | MiniMax | China | warn | 229B agent-focused model from MiniMax, Modified MIT. Strong software-engineering and tool-use benchmarks. Family has iterated fast (M2 / M2.1 / M2.5 / M2.7 across 2025-2026). Same Chinese-origin alignment and supply-chain considerations as DeepSeek, Qwen, Kimi. | Modified MIT | Yes | Undisclosed | China | permissive,commercial | 36 | 72.4 | 0.525 | null | 2026-04-16 | https://llmradar.com/models/minimax-m2 | minimax-m2 |
MiniMax-M2.7 | MiniMax AI | China | ko | Per current documentation, the MiniMax Non-Commercial License prohibits commercial deployment without individually negotiated written authorization from MiniMax, making the weights unsuitable for EU commercial workloads out-of-the-box. Opaque training data and Shanghai-based publisher compound the EU AI Act and data-tr... | MiniMax Non-Commercial License | Non-commercial only | Undisclosed | China (Shanghai) | null | 50 | 45.78 | 0.525 | null | 2026-04-17 | https://llmradar.com/models/minimax-m2-7 | minimax-m2-7 |
Mistral 7B Instruct v0.2 | Mistral AI | France | warn | Based on published licence terms, Mistral 7B Instruct v0.2 is an EU-origin open-weight model under standard Apache 2.0 — commercial deployment and self-hosting are permitted without field-of-use restrictions. Training-data opacity is the primary EU AI Act Art. 53 gap, but the French controller, absence of CLOUD Act exp... | Apache 2.0 | Unrestricted | Undisclosed | EU (France) | permissive,commercial,euorigin | 7 | 192.54 | 0.25 | null | 2026-04-17 | https://llmradar.com/models/mistral-7b-instruct-v0-2 | mistral-7b-instruct-v0-2 |
Mistral Large 2 | Mistral AI | France | warn | Frontier 123B dense model from an EU vendor. Open weights under Mistral Research License — non-commercial by default. Commercial deployment requires a separate paid licence from Mistral AI. | MRL (research) | Paid licence req. | Undisclosed | EU | euorigin | 15 | 37.56 | 3 | null | 2026-04-15 | https://llmradar.com/models/mistral-large-2 | mistral-large-2 |
Mistral Small | Mistral AI | France | ok | Best-in-class permissive licence from an EU vendor. Apache 2.0 means no usage caps, no royalty, no revocation risk. | Apache 2.0 | Yes | Partial | EU | permissive,commercial,euorigin | 13 | 151.59 | 0.15 | null | 2026-04-15 | https://llmradar.com/models/mistral-small | mistral-small |
Mistral Small 4 | Mistral AI | France | ok | Unified model folding Instruct, reasoning (Magistral) and code (Devstral) into a single 119B MoE under Apache 2.0. 6.5B active params, 256K context, 24 languages, toggleable reasoning effort. Strongest permissive EU option at this scale. | Apache 2.0 | Yes | Undisclosed | EU | permissive,commercial,euorigin | 19 | 147.18 | 0.263 | null | 2026-04-15 | https://llmradar.com/models/mistral-small-4 | mistral-small-4 |
Llama 3.1 Nemotron 70B | NVIDIA | USA | warn | NVIDIA's Llama 3.1 fine-tune with custom RLHF. Inherits Llama 3.1 Community License terms. Strong conversational quality; useful default when you want Llama behaviour with NVIDIA's alignment. | Llama community | With caps | Partial | USA | commercial | 13 | 41.68 | 1.2 | null | 2026-04-15 | https://llmradar.com/models/nemotron-70b | nemotron-70b |
OLMo 2 32B | AllenAI | USA | ok | Fully open model: weights, training data (Dolma 2), training code, checkpoints, and logs all published. Apache 2.0 across the board. Strongest choice when AI Act transparency obligations matter. | Apache 2.0 | Yes | Disclosed | USA | permissive,commercial | 11 | 0 | 0 | null | 2026-04-15 | https://llmradar.com/models/olmo-2 | olmo-2 |
Phi-4 | Microsoft | USA | ok | MIT-licensed 14B from Microsoft Research. Heavy use of synthetic training data is disclosed; English-primary (thin multilingual coverage). Strongest small-model option for permissive-licence EU deployments. | MIT | Yes | Partial | USA | permissive,commercial | 10 | 29.01 | 0.219 | null | 2026-04-15 | https://llmradar.com/models/phi-4 | phi-4 |
Qwen 2.5 | Alibaba | China | warn | Legally clean under Apache 2.0, but Chinese origin raises supply-chain and geopolitical questions. Vet carefully for sensitive use cases. | Apache 2.0 | Yes | Undisclosed | China | permissive,commercial | 16 | 54.85 | 0 | null | 2026-04-15 | https://llmradar.com/models/qwen-2-5 | qwen-2-5 |
Qwen 3.5 | Alibaba | China | warn | Hybrid Gated-DeltaNet + MoE flagship (397B total, 17B active) under Apache 2.0. Native vision, 201 languages, 262K context (1M with YaRN). Licence is clean; Chinese-origin alignment and supply-chain considerations persist. | Apache 2.0 | Yes | Undisclosed | China | permissive,commercial | 40 | 52.87 | 1.35 | null | 2026-04-15 | https://llmradar.com/models/qwen-3-5 | qwen-3-5 |
Qwen3-8B | Alibaba Cloud (Qwen) | China | warn | Based on published licence terms, Qwen3-8B is released under standard Apache 2.0 with no field-of-use carve-outs, making self-hosted commercial deployment viable. Training-data disclosure is limited to a token count and Chinese origin creates EU AI Act Art. 53 transparency and data-transfer risks that deployers should ... | Apache 2.0 | Unrestricted | Token count only | China (Hangzhou) | permissive,commercial | 11 | 85.98 | 0.31 | null | 2026-04-17 | https://llmradar.com/models/qwen-3-8b | qwen-3-8b |
Qwen3.6-35B-A3B | Alibaba Cloud (Qwen) | China | warn | Based on published licence terms, Qwen3.6-35B-A3B ships under Apache 2.0 with no use restrictions, making self-hosted commercial deployment viable. However, opaque training-data disclosure and Chinese origin create EU AI Act Art. 53 transparency and data-transfer risks that deployers should document before placing pers... | Apache 2.0 | Unrestricted | Undisclosed | China (Hangzhou) | permissive,commercial | 44 | 237.63 | 0.844 | null | 2026-04-17 | https://llmradar.com/models/qwen3-6-35b-a3b | qwen3-6-35b-a3b |
QwQ-32B | Alibaba | China | warn | 32B dense reasoning model under Apache 2.0. Sweet spot for self-hostable reasoning: 4090-class GPU at 4-bit, single H100 at bf16. Chinese-origin caveats unchanged. | Apache 2.0 | Yes | Undisclosed | China | permissive,commercial | 20 | 32.6 | 0.745 | null | 2026-04-15 | https://llmradar.com/models/qwq-32b | qwq-32b |
SmolLM3 3B | Hugging Face | USA | ok | Fully open small model: Apache 2.0 weights, training data published, engineering blueprint public. 6 native languages (EN/FR/ES/DE/IT/PT) cover major EU markets. 128K context via YaRN. Strong default for edge or on-prem EU deployments where transparency matters. | Apache 2.0 | Yes | Disclosed | USA | permissive,commercial | null | null | null | null | 2026-04-16 | https://llmradar.com/models/smollm3-3b | smollm3-3b |
# EU-readiness of open-weight LLMs
Curated by LLM Radar — updated 2026-04-25 — 39 models.
A manually reviewed dataset assessing open-weight Large Language Models (LLMs) on their suitability for EU deployment and commercial use. Each model is evaluated on licence, commercial use, training data, and origin, with quality / speed / price metrics from Artificial Analysis where available.
Primary use cases (see the loading sketch after this list):
- Selecting open-weight models for self-hosted EU deployment
- Licensing and commercial-use due diligence
- Training classifiers on open-source LLM licensing and provenance
- Tracking the emergence of EU-origin and permissively licensed models
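For the first two use cases, a minimal loading sketch with the Hugging Face datasets library is below. The dataset ID is taken from the citation section; the "train" split name is an assumption, so check the repo's viewer for the actual config.

```python
from datasets import load_dataset

# Dataset ID comes from the citation section; the "train" split is an
# assumption -- check the repo for the actual split name.
ds = load_dataset("llmradar/eu-open-weight-models", split="train")

# Shortlist models that are EU-ready without caveats (verdict == "ok")
# and carry the "permissive" tag; tags can be null, hence the `or ""`.
shortlist = [
    row for row in ds
    if row["verdict"] == "ok" and "permissive" in (row["tags"] or "")
]

for row in shortlist:
    print(f'{row["name"]} ({row["vendor_name"]}): {row["licence"]}')
```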
## Fields
| column | description |
|---|---|
| name | Human-readable model name |
| vendor_name | Publisher / research lab |
| vendor_country | HQ country of the vendor |
| verdict | Overall verdict: ok, warn, or ko |
| verdict_text | One-line editorial verdict |
| licence | Licence summary (Apache-2.0, MIT, custom, etc.) |
| commercial_use | Commercial-use posture |
| training_data | Training-data disclosure / provenance |
| origin | Origin / publisher jurisdiction |
| tags | Comma-separated topic tags (permissive, commercial, euorigin) |
| quality_index | Artificial Analysis Quality Index (0–100, higher is better) |
| output_tokens_per_second | Median output speed (tokens/s) |
| blended_price_per_m | Blended input+output price per million tokens (USD) |
| context_tokens | Advertised context window |
| last_reviewed_at | ISO date of last manual review |
| llmradar_url | Canonical URL of the full model page |
| slug | Stable identifier on LLM Radar |
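As a sketch of how the numeric fields combine, the snippet below ranks models by quality per dollar. The CSV filename is hypothetical (export the table above yourself), and rows where Artificial Analysis has no data carry nulls or zeros, so they are filtered out first.

```python
import pandas as pd

# Hypothetical local export of the table above; column names match the
# Fields section. Quality and price are null (or zero) where Artificial
# Analysis has no data, so those rows are dropped before ranking.
df = pd.read_csv("eu_open_weight_models.csv")

ranked = (
    df.dropna(subset=["quality_index", "blended_price_per_m"])
      .query("blended_price_per_m > 0")
      .assign(quality_per_dollar=lambda d: d["quality_index"] / d["blended_price_per_m"])
      .sort_values("quality_per_dollar", ascending=False)
)
print(ranked[["name", "quality_index", "blended_price_per_m"]].head())
```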
Performance metrics (quality / speed / price) are sourced from the Artificial Analysis public API and attributed accordingly; see their terms for reuse.
## Methodology
Each entry is reviewed manually against public sources (provider docs, terms of service, data-protection agreements, sub-processor lists, regulator guidance, model cards, and vendor statements). Badges use a three-tier traffic-light scheme, exposed in the data as the verdict column (see the sketch after this list):
- green (ok) — meets the EU/GDPR/AI-Act criterion without material caveats
- amber (warn) — partial fit, conditional, or requires customer action to qualify
- red (ko) — does not meet the criterion, or requires data transfer outside the EEA
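A small sketch of that mapping as you might apply it downstream; the pairing of the verdict values ok / warn / ko with green / amber / red is inferred from the badge descriptions above, not from a published spec.

```python
# Verdict -> badge mapping; the ok/warn/ko to green/amber/red pairing
# is inferred from the traffic-light list above.
BADGES = {"ok": "green", "warn": "amber", "ko": "red"}

def badge(verdict: str) -> str:
    if verdict not in BADGES:
        raise ValueError(f"unknown verdict: {verdict!r}")
    return BADGES[verdict]

assert badge("warn") == "amber"
```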
See the full methodology at https://llmradar.com/methodology.
## License & attribution
Released under CC BY 4.0. You may share and adapt the dataset for any purpose, including commercial use, provided you give appropriate credit and link back to the source:
Data from LLM Radar — https://llmradar.com — licensed under CC BY 4.0.
## Updates
This dataset is regenerated on a weekly cadence from the LLM Radar editorial database. Each row carries a last_reviewed_at date so you can filter for recency, as in the sketch below.
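A sketch of that recency filter, under the same dataset-ID and split assumptions as the loading example above:

```python
from datetime import date, timedelta

from datasets import load_dataset

# Dataset ID and split are assumptions; see the citation section.
ds = load_dataset("llmradar/eu-open-weight-models", split="train")

# last_reviewed_at is an ISO date string per the Fields section; keep
# only rows reviewed within the last 30 days.
cutoff = date.today() - timedelta(days=30)
recent = ds.filter(lambda row: date.fromisoformat(row["last_reviewed_at"]) >= cutoff)

print(f"{recent.num_rows} of {ds.num_rows} rows reviewed since {cutoff.isoformat()}")
```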
## Citation
```bibtex
@misc{llmradar_eu-open-weight-models,
  title = {EU-readiness of open-weight LLMs},
  author = {{LLM Radar}},
  year = {2026},
  howpublished = {\url{https://huggingface.co/datasets/llmradar/eu-open-weight-models}},
  note = {CC BY 4.0}
}
```
## Report corrections
Found an inaccuracy? Submit a correction or exercise your right of reply at https://llmradar.com/corrections or https://llmradar.com/right-of-reply.