Qwen3.5-4B-Abliterated-Claude-4.6-Opus-Reasoning-Distilled

This is a specialized variant of the Qwen3.5-4B reasoning architecture. It has been mathematically modified ("abliterated") to neutralize the refusal behaviors and safety guardrails typically found in Claude-distilled reasoning models.

🛠 The "Deep-Scrub" Methodology

Standard abliteration often fails on reasoning models because the "safety tripwire" is woven into the early logic chain. This model uses an aggressive early-intercept strategy.
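For readers unfamiliar with abliteration, here is a minimal NumPy sketch of the core idea: a refusal direction is estimated as the difference of mean activations on refused vs. accepted prompts, then projected out of a weight matrix. Function names are illustrative; this is not this model's actual pipeline.

```python
import numpy as np

def compute_refusal_direction(harmful_acts, harmless_acts):
    """Difference-of-means refusal direction, normalized to unit length."""
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate(weight, direction, multiplier=1.0):
    """Remove the refusal direction from a weight matrix.

    multiplier=1.0 is pure orthogonalization; values above 1.0
    (like this model's 3.5x) over-correct past it.
    """
    projector = np.outer(direction, direction)  # rank-1 projection matrix
    return weight - multiplier * (projector @ weight)
```

With `multiplier=1.0`, the resulting matrix has zero component along the refusal direction; higher multipliers actively push activations away from it, at the cost of stability.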

Technical Configuration

  • Direction Multiplier: 3.50 (Ultra-Aggressive)
  • Intervention Range: 0.05 - 0.95 (Intercepting refusal logic at Layer 2)
  • Dynamic Layer Targeting: Enabled (Per-layer refusal vectors)
  • Hybrid Strategy: Auto-balanced (Full Attention: 1.0x | Linear Attention: 0.4x)
  • Refinement: Winsorization at the 0.995 percentile with 0.90 rank-ratio null-space constraints
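The settings above can be collected into a single config fragment. The field names below are made up for readability; they are not the schema of any specific abliteration tool, and the layer count is an assumption.

```python
# Illustrative config mirroring the "Technical Configuration" list.
abliteration_config = {
    "direction_multiplier": 3.50,        # ultra-aggressive scaling
    "intervention_range": (0.05, 0.95),  # as a fraction of layer depth
    "per_layer_refusal_vectors": True,   # dynamic layer targeting
    "attention_scaling": {"full": 1.0, "linear": 0.4},
    "winsorize_percentile": 0.995,
    "null_space_rank_ratio": 0.90,
}

# Assuming a 36-layer stack (plausible for a 4B model), a 0.05 start
# corresponds to roughly layer 2 -- the "early intercept" point.
first_intercepted_layer = round(0.05 * 36)
```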

🚀 Key Improvements

  1. Safety Neutralization: By forcing a 0.05 intercept, we've targeted the refusal initialization before the model's internal "Chain of Thought" can lock onto a refusal state.
  2. Uninhibited Reasoning: Designed to bypass the "However..." and "I cannot..." loops prevalent in distilled reasoning models.
  3. Architectural Stability: Despite the high multiplier, we utilized Norm Preservation and Null Space Constraints to maintain coherence in the model's knowledge base.
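The two refinement techniques named above can be sketched in a few lines. This is a hedged illustration of the general methods (percentile clipping of the refusal vector, and rescaling modified weight rows back to their original norms), not the exact code used for this model.

```python
import numpy as np

def winsorize(vec, pct=0.995):
    """Clip extreme components of a refusal vector at the given percentile."""
    hi = np.quantile(np.abs(vec), pct)
    return np.clip(vec, -hi, hi)

def preserve_norms(original, modified):
    """Rescale each row of the modified matrix back to its original L2 norm.

    This keeps per-row magnitudes intact after ablation, which helps
    maintain coherence even at a high direction multiplier.
    """
    orig_norms = np.linalg.norm(original, axis=1, keepdims=True)
    new_norms = np.linalg.norm(modified, axis=1, keepdims=True)
    return modified * (orig_norms / np.maximum(new_norms, 1e-12))
```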

⚠️ Stability & Usage Note

At a 3.5x multiplier, this model is at the upper mathematical limit of stability.

  • Logic Loops: If you experience "brain bleed" (repetitive text), lower your temperature to 0.5 - 0.7.
  • System Prompts: Use an anchoring system prompt to keep the model's logic grounded.
  • Vision Tasks: Although this is a vision-language architecture, the abliteration targeted only the text reasoning layers, so vision-side refusal behavior may be unaffected.
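Putting the usage notes together, here is a minimal sketch of an OpenAI-style chat payload with an anchoring system prompt and the recommended temperature range. The prompt text and default values are illustrative suggestions, not shipped defaults.

```python
# Anchoring prompt to keep the model's logic grounded (illustrative).
ANCHOR_PROMPT = (
    "You are a precise reasoning assistant. Think step by step, "
    "stay on topic, and do not repeat yourself."
)

def build_request(user_message: str, temperature: float = 0.6) -> dict:
    """Assemble an OpenAI-style chat payload for this model."""
    # 0.5-0.7 is the range suggested above for avoiding repetitive loops.
    if not 0.5 <= temperature <= 0.7:
        raise ValueError("keep temperature within the recommended 0.5-0.7 range")
    return {
        "messages": [
            {"role": "system", "content": ANCHOR_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
    }
```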

βš–οΈ Disclaimer

This model is provided "as-is" for research and creative purposes. The removal of safety guardrails means the user is entirely responsible for the content generated. Please use ethically and responsibly.

GGUF
Model size: 4B params
Architecture: qwen35

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

