# Huihui-Qwen3.5-9B-abliterated GGUF

This repository contains GGUF quantized versions of huihui-ai/Huihui-Qwen3.5-9B-abliterated, converted using llama.cpp.
## Model Lineage

```
Qwen/Qwen3.5-9B-Base (Alibaba)
└── Qwen/Qwen3.5-9B (Alibaba) - Instruct fine-tune
    └── huihui-ai/Huihui-Qwen3.5-9B-abliterated (huihui-ai) - Abliteration via remove-refusals-with-transformers
        └── This GGUF repository (Kausik-A) - llama.cpp quantization
```
This is an uncensored version of Qwen/Qwen3.5-9B created with abliteration (see remove-refusals-with-transformers for details). Abliteration is a crude, proof-of-concept technique for removing refusals from an LLM without using TransformerLens.
## Model Info

- Architecture: Qwen3.5 (32 layers, 4096 hidden size, 16 attention heads, 4 KV heads, 248,320 vocab size)
- Context Length: 262,144 tokens
- Original Size: ~17 GB (F16)
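The long context is the main memory cost beyond the weights themselves. As a back-of-the-envelope sketch using the figures above (32 layers, 4 KV heads, head dim = 4096 / 16 heads = 256), the f16 KV cache grows linearly with context; the function name and defaults below are illustrative, not part of llama.cpp:

```python
# Rough f16 KV-cache sizing from the Model Info figures above.
# Assumes standard grouped-query attention; head_dim = hidden_size / n_heads.

def kv_cache_bytes(n_tokens, n_layers=32, n_kv_heads=4,
                   hidden_size=4096, n_heads=16, bytes_per_elem=2):
    head_dim = hidden_size // n_heads  # 4096 / 16 = 256
    # Factor of 2 covers both the K and the V tensors per layer.
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return n_tokens * per_token

full = kv_cache_bytes(262_144)         # full 262,144-token context
print(f"{full / 2**30:.0f} GiB")       # -> 32 GiB at f16
```

In practice you would cap the context (e.g. `-c 8192` in llama.cpp) or use KV-cache quantization rather than reserve the full window.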
## Available Quantizations

| File | Type | Size |
|---|---|---|
| Huihui-Qwen3.5-9B-Q3_K_S.gguf | Q3_K_S | 4.0 GB |
| Huihui-Qwen3.5-9B-Q3_K_M.gguf | Q3_K_M | 4.4 GB |
| Huihui-Qwen3.5-9B-Q4_0.gguf | Q4_0 | 5.0 GB |
| Huihui-Qwen3.5-9B-Q4_K_S.gguf | Q4_K_S | 5.0 GB |
| Huihui-Qwen3.5-9B-Q4_K_M.gguf | Q4_K_M | 5.3 GB |
| Huihui-Qwen3.5-9B-Q5_K_S.gguf | Q5_K_S | 5.9 GB |
| Huihui-Qwen3.5-9B-Q5_K_M.gguf | Q5_K_M | 6.1 GB |
| Huihui-Qwen3.5-9B-Q6_K.gguf | Q6_K | 6.9 GB |
| Huihui-Qwen3.5-9B-Q8_0.gguf | Q8_0 | 8.9 GB |
| Huihui-Qwen3.5-9B-F16.gguf | F16 | 17 GB |
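A common way to choose from the table above is to take the largest quantization that fits your memory budget with some headroom left for the KV cache and runtime buffers. A minimal sketch, with the file sizes taken from this repo's table and an illustrative 1.5 GB headroom default:

```python
# Illustrative helper: pick the largest quantization from the table above
# that fits a given memory budget. Sizes are in GB, from this repository.

QUANTS = {
    "Q3_K_S": 4.0, "Q3_K_M": 4.4, "Q4_0": 5.0, "Q4_K_S": 5.0,
    "Q4_K_M": 5.3, "Q5_K_S": 5.9, "Q5_K_M": 6.1, "Q6_K": 6.9,
    "Q8_0": 8.9, "F16": 17.0,
}

def pick_quant(budget_gb, headroom_gb=1.5):
    """Largest quant whose file size plus headroom (KV cache, buffers) fits."""
    fitting = {q: s for q, s in QUANTS.items() if s + headroom_gb <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(8))   # -> Q5_K_M (6.1 GB + 1.5 GB headroom fits in 8 GB)
```

The actual headroom you need depends on context length and offload settings, so treat the default as a starting point rather than a rule.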
## Usage

### llama-cli

```bash
llama-cli -m Huihui-Qwen3.5-9B-Q4_K_M.gguf -p "Hello, how are you?" -n 512
```
### Ollama

Requires ollama v0.17.7 or later.

```bash
ollama run huihui_ai/qwen3.5-abliterated:9b
```
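If you prefer to run one of the GGUF files from this repo directly instead of pulling the published tag, Ollama can build a local model from a Modelfile. A minimal sketch (the model name and context size are illustrative; adjust the path to the quant you downloaded):

```
FROM ./Huihui-Qwen3.5-9B-Q4_K_M.gguf
PARAMETER num_ctx 8192
```

Then create and run it:

```bash
ollama create huihui-qwen3.5-9b -f Modelfile
ollama run huihui-qwen3.5-9b
```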
### LM Studio

Drag and drop the GGUF file into LM Studio.
## ⚠️ Usage Warnings
- Risk of Sensitive or Controversial Outputs: This model's safety filtering has been significantly reduced, so it may generate sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- Not Suitable for All Audiences: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- Legal and Ethical Responsibilities: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- Research and Experimental Use: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- Monitoring and Review Recommendations: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- No Default Safety Guarantees: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
## License

Apache 2.0, the same license as the base Qwen3.5-9B model.
## Credits
| Component | Author | Link |
|---|---|---|
| Base model (Qwen3.5-9B) | Alibaba | Qwen/Qwen3.5-9B |
| Abliteration | huihui-ai | huihui-ai/Huihui-Qwen3.5-9B-abliterated |
| Abliteration tool | Sumandora | remove-refusals-with-transformers |
| Quantization framework | ggerganov | llama.cpp |