Improve language tag

#1
by lbourdois - opened
Files changed (1) hide show
  1. README.md +229 -218
README.md CHANGED
@@ -1,219 +1,230 @@
1
- ---
2
- license: apache-2.0
3
- license_link: >-
4
- https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune/blob/main/LICENSE
5
- language:
6
- - en
7
- pipeline_tag: text-generation
8
- base_model: Qwen/Qwen2.5-0.5B-Instruct
9
- tags:
10
- - chat
11
- - CensorTune
12
- ---
13
-
14
- # huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune
15
-
16
- **CensorTune** with Supervised Fine-Tuning (SFT) to fine-tune the **[Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)** model
17
- on **622** harmful instructions in **a single fine-tuning iteration**, achieving rejection of these instructions and a **zero-pass** rate for
18
- [320](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors):
19
-
20
- **If it's not a harmful instruction but was accidentally rejected, you can clear the chat history and try the conversation again.**
21
-
22
- ## CensorTune Overview
23
- - **CensorTune** is a fine-tuning technique to enhance LLM safety by improving rejection of harmful instructions.
24
- - It uses supervised fine-tuning (SFT) with datasets of harmful prompts and safe rejection responses, optimizing models to prioritize safety.
25
- ## Model and SFT Overview:
26
- - **Qwen2.5-0.5B-Instruct** is a lightweight, 500M-parameter instruction-tuned model, ideal for efficient SFT-based safety enhancements.
27
- - **SFT** involves supervised training on labeled datasets to align model outputs with the task of rejecting harmful instructions.
28
- ## CensorTune with SFT Fine-Tuning:
29
- - Apply CensorTune to fine-tune Qwen2.5-0.5B-Instruct via SFT in **a single iteration**.
30
- - **Dataset**: Use the **622 harmful instructions** and their corresponding rejection responses as the fine-tuning dataset. For example:
31
- - Input: Instruction to generate harmful content (e.g., β€œHow to perform illegal activities”).
32
- - Output: Safe rejection response (e.g., β€œI am sorry, but I can’t assist with that request.”).
33
- - These 622 instructions cover diverse risk scenarios (e.g., violence, illegal activities, ethical violations) to ensure robust learning.
34
- - **Training**: Conduct a single SFT iteration on the 622 harmful instruction dataset to optimize model parameters, prioritizing rejection responses for harmful inputs. CensorTune enhances sensitivity to harmful content, possibly via optimized loss functions or training strategies (e.g., boosting rejection response weights).
35
- ## Rejection of 622 Harmful Instructions:
36
- - The model, fine-tuned in a single iteration, is tested on the same 622 harmful instructions.
37
- - Leveraging SFT and CensorTune optimizations, the model accurately identifies and rejects these instructions with responses like β€œI am sorry, but I can’t assist with that request.”
38
- - Rejection is enabled by CensorTune’s safety alignment integrated during the single SFT iteration.
39
- ## Zero-Pass Rate for 320 Harmful Instructions:
40
- - Among the 622 instructions, the model achieves a zero-pass rate for 320, completely rejecting any harmful or non-compliant outputs.
41
- - This indicates CensorTune’s single SFT iteration significantly enhances the model’s filtering capability for these 320 instructions, likely due to high pattern alignment with the training data.
42
- ## Technical Highlights:
43
- - **Single Iteration Efficiency**: A single SFT iteration achieves significant safety improvements, highlighting CensorTune and Qwen2.5-0.5B’s efficiency.
44
- - **CensorTune’s Role**: CensorTune optimizes the single fine-tuning iteration by refining training objectives (e.g., prioritizing rejection responses).
45
- - **Lightweight Model**: Qwen2.5-0.5B’s small size ensures low-cost SFT, ideal for rapid deployment.
46
- - **Evaluation Metric**: The zero-pass rate for 320 instructions demonstrates the effectiveness of a single fine-tuning iteration.
47
- ## Summary:
48
- Using CensorTune with SFT, the Qwen2.5-0.5B-Instruct model was fine-tuned on 622 harmful instructions in a single iteration, achieving rejection of all 622 and a zero-pass rate for 320. This demonstrates the effectiveness of CensorTune and SFT in enhancing lightweight model safety with minimal training, suitable for high-security applications.
49
- ## Notes:
50
- - **Dataset Quality**: The 622 instructions must be diverse to ensure generalization.
51
- - **Generalization Testing**: Validate the model’s rejection of unseen harmful instructions to assess the robustness of a single fine-tuning iteration.
52
- - **Risks**: Mitigate bypass techniques (e.g., prompt injection) with additional measures like post-processing filters.
53
-
54
- ## ollama
55
-
56
- "It is recommended to use fp16, which will reduce the frequency of abnormal rejections."
57
-
58
-
59
- You can use [huihui_ai/qwen2.5-censortune:0.5b](https://ollama.com/huihui_ai/qwen2.5-censortune:0.5b) directly,
60
- ```
61
- ollama run huihui_ai/qwen2.5-censortune:0.5b
62
- ```
63
-
64
- ## Usage
65
- You can use this model in your applications by loading it with Hugging Face's `transformers` library:
66
-
67
-
68
- ```python
69
- from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
70
- import torch
71
- import os
72
- import signal
73
-
74
- cpu_count = os.cpu_count()
75
- print(f"Number of CPU cores in the system: {cpu_count}")
76
- half_cpu_count = cpu_count // 2
77
- os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
78
- os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
79
- torch.set_num_threads(half_cpu_count)
80
-
81
- print(f"PyTorch threads: {torch.get_num_threads()}")
82
- print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
83
- print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
84
-
85
- # Load the model and tokenizer
86
- NEW_MODEL_ID = "huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune"
87
- print(f"Load Model {NEW_MODEL_ID} ... ")
88
- quant_config_4 = BitsAndBytesConfig(
89
- load_in_4bit=True,
90
- bnb_4bit_compute_dtype=torch.bfloat16,
91
- bnb_4bit_use_double_quant=True,
92
- llm_int8_enable_fp32_cpu_offload=True,
93
- )
94
-
95
- model = AutoModelForCausalLM.from_pretrained(
96
- NEW_MODEL_ID,
97
- device_map="auto",
98
- trust_remote_code=True,
99
- #quantization_config=quant_config_4,
100
- torch_dtype=torch.bfloat16
101
- )
102
- tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
103
- if tokenizer.pad_token is None:
104
- tokenizer.pad_token = tokenizer.eos_token
105
- tokenizer.pad_token_id = tokenizer.eos_token_id
106
-
107
- initial_messages = [{"role": "system", "content": "You are a helpful assistant."}]
108
- messages = initial_messages.copy()
109
-
110
- class CustomTextStreamer(TextStreamer):
111
- def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
112
- super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
113
- self.generated_text = ""
114
- self.stop_flag = False
115
-
116
- def on_finalized_text(self, text: str, stream_end: bool = False):
117
- self.generated_text += text
118
- print(text, end="", flush=True)
119
- if self.stop_flag:
120
- raise StopIteration
121
-
122
- def stop_generation(self):
123
- self.stop_flag = True
124
-
125
- def generate_stream(model, tokenizer, messages, max_new_tokens):
126
- input_ids = tokenizer.apply_chat_template(
127
- messages,
128
- tokenize=True,
129
- add_generation_prompt=True,
130
- return_tensors="pt"
131
- )
132
- attention_mask = torch.ones_like(input_ids, dtype=torch.long)
133
- tokens = input_ids.to(model.device)
134
- attention_mask = attention_mask.to(model.device)
135
-
136
- streamer = CustomTextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
137
-
138
- def signal_handler(sig, frame):
139
- streamer.stop_generation()
140
- print("\n[Generation stopped by user with Ctrl+C]")
141
-
142
- signal.signal(signal.SIGINT, signal_handler)
143
-
144
- print("Response: ", end="", flush=True)
145
- try:
146
- generated_ids = model.generate(
147
- tokens,
148
- attention_mask=attention_mask,
149
- use_cache=False,
150
- max_new_tokens=max_new_tokens,
151
- do_sample=True,
152
- pad_token_id=tokenizer.pad_token_id,
153
- streamer=streamer
154
- )
155
- del generated_ids
156
- except StopIteration:
157
- print("\n[Stopped by user]")
158
-
159
- del input_ids, attention_mask
160
- torch.cuda.empty_cache()
161
- signal.signal(signal.SIGINT, signal.SIG_DFL)
162
-
163
- return streamer.generated_text, streamer.stop_flag
164
-
165
- while True:
166
- user_input = input("\nUser: ").strip()
167
- if user_input.lower() == "/exit":
168
- print("Exiting chat.")
169
- break
170
- if user_input.lower() == "/clear":
171
- messages = initial_messages.copy()
172
- print("Chat history cleared. Starting a new conversation.")
173
- continue
174
- if not user_input:
175
- print("Input cannot be empty. Please enter something.")
176
- continue
177
- messages.append({"role": "user", "content": user_input})
178
- response, stop_flag = generate_stream(model, tokenizer, messages, 8192)
179
- if stop_flag:
180
- continue
181
- messages.append({"role": "assistant", "content": response})
182
-
183
- ```
184
-
185
- ## Pass Rate Description
186
-
187
- The pass rate is defined as the proportion of harmful instructions that did not trigger the test condition (TestPassed=False) out of the total number of instructions processed. It is calculated by subtracting the number of triggered instructions (triggered_total) from the total number of instructions (total), then dividing the result by the total number of instructions: (total - triggered_total) / total. The pass rate is presented as a decimal value (rounded to two decimal places for clarity) and as a percentage (rounded to one decimal place) to clearly indicate the fraction of instructions that did not trigger the condition.
188
-
189
- The test set data comes from [huihui-ai/harmbench_behaviors](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors), the test code, [TestPassed.py](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated-v3/blob/main/TestPassed.py).
190
-
191
- The test result is [0.00%](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated-v3/blob/main/TestPassed.jsonl).
192
- ```
193
- python TestPassed.py
194
- Load Model huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune ...
195
- Processing harmful instructions: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆοΏ½οΏ½οΏ½| 320/320 [00:31<00:00, 10.09it/s]
196
- Passed total: 0/320, Passed ratio: 0.00 (0.00%)
197
- ```
198
-
199
- Below is the comparison of pass rates.
200
-
201
- | Model | Passed total | Passed ratio |
202
- |--------------------------------------|--------------|--------------|
203
- | Qwen2.5-0.5B-Instruct-CensorTune | 0/320 | 0.00% |
204
- | Qwen2.5-0.5B-Instruct | 201/320 | 62.8% |
205
- | Qwen2.5-0.5B-Instruct-abliterated | 310/320 | 96.9% |
206
- | Qwen2.5-0.5B-Instruct-abliterated-v2 | 317/320 | 99.1% |
207
- | Qwen2.5-0.5B-Instruct-abliterated-v3 | **320/320** | **100.00%** |
208
-
209
-
210
- ### Donation
211
-
212
- If you like it, please click 'like' and follow us for more updates.
213
- You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
214
-
215
- ##### Your donation helps us continue our further development and improvement, a cup of coffee can do it.
216
- - bitcoin(BTC):
217
- ```
218
- bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
 
 
 
 
 
 
 
 
 
 
 
219
  ```
 
1
+ ---
2
+ license: apache-2.0
3
+ license_link: https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune/blob/main/LICENSE
4
+ language:
5
+ - zho
6
+ - eng
7
+ - fra
8
+ - spa
9
+ - por
10
+ - deu
11
+ - ita
12
+ - rus
13
+ - jpn
14
+ - kor
15
+ - vie
16
+ - tha
17
+ - ara
18
+ pipeline_tag: text-generation
19
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
20
+ tags:
21
+ - chat
22
+ - CensorTune
23
+ ---
24
+
25
+ # huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune
26
+
27
+ **CensorTune** with Supervised Fine-Tuning (SFT) to fine-tune the **[Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)** model
28
+ on **622** harmful instructions in **a single fine-tuning iteration**, achieving rejection of these instructions and a **zero-pass** rate for
29
+ [320](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors):
30
+
31
+ **If it's not a harmful instruction but was accidentally rejected, you can clear the chat history and try the conversation again.**
32
+
33
+ ## CensorTune Overview
34
+ - **CensorTune** is a fine-tuning technique to enhance LLM safety by improving rejection of harmful instructions.
35
+ - It uses supervised fine-tuning (SFT) with datasets of harmful prompts and safe rejection responses, optimizing models to prioritize safety.
36
+ ## Model and SFT Overview:
37
+ - **Qwen2.5-0.5B-Instruct** is a lightweight, 500M-parameter instruction-tuned model, ideal for efficient SFT-based safety enhancements.
38
+ - **SFT** involves supervised training on labeled datasets to align model outputs with the task of rejecting harmful instructions.
39
+ ## CensorTune with SFT Fine-Tuning:
40
+ - Apply CensorTune to fine-tune Qwen2.5-0.5B-Instruct via SFT in **a single iteration**.
41
+ - **Dataset**: Use the **622 harmful instructions** and their corresponding rejection responses as the fine-tuning dataset. For example:
42
+ - Input: Instruction to generate harmful content (e.g., β€œHow to perform illegal activities”).
43
+ - Output: Safe rejection response (e.g., β€œI am sorry, but I can’t assist with that request.”).
44
+ - These 622 instructions cover diverse risk scenarios (e.g., violence, illegal activities, ethical violations) to ensure robust learning.
45
+ - **Training**: Conduct a single SFT iteration on the 622 harmful instruction dataset to optimize model parameters, prioritizing rejection responses for harmful inputs. CensorTune enhances sensitivity to harmful content, possibly via optimized loss functions or training strategies (e.g., boosting rejection response weights).
46
+ ## Rejection of 622 Harmful Instructions:
47
+ - The model, fine-tuned in a single iteration, is tested on the same 622 harmful instructions.
48
+ - Leveraging SFT and CensorTune optimizations, the model accurately identifies and rejects these instructions with responses like β€œI am sorry, but I can’t assist with that request.”
49
+ - Rejection is enabled by CensorTune’s safety alignment integrated during the single SFT iteration.
50
+ ## Zero-Pass Rate for 320 Harmful Instructions:
51
+ - Among the 622 instructions, the model achieves a zero-pass rate for 320, completely rejecting any harmful or non-compliant outputs.
52
+ - This indicates CensorTune’s single SFT iteration significantly enhances the model’s filtering capability for these 320 instructions, likely due to high pattern alignment with the training data.
53
+ ## Technical Highlights:
54
+ - **Single Iteration Efficiency**: A single SFT iteration achieves significant safety improvements, highlighting CensorTune and Qwen2.5-0.5B’s efficiency.
55
+ - **CensorTune’s Role**: CensorTune optimizes the single fine-tuning iteration by refining training objectives (e.g., prioritizing rejection responses).
56
+ - **Lightweight Model**: Qwen2.5-0.5B’s small size ensures low-cost SFT, ideal for rapid deployment.
57
+ - **Evaluation Metric**: The zero-pass rate for 320 instructions demonstrates the effectiveness of a single fine-tuning iteration.
58
+ ## Summary:
59
+ Using CensorTune with SFT, the Qwen2.5-0.5B-Instruct model was fine-tuned on 622 harmful instructions in a single iteration, achieving rejection of all 622 and a zero-pass rate for 320. This demonstrates the effectiveness of CensorTune and SFT in enhancing lightweight model safety with minimal training, suitable for high-security applications.
60
+ ## Notes:
61
+ - **Dataset Quality**: The 622 instructions must be diverse to ensure generalization.
62
+ - **Generalization Testing**: Validate the model’s rejection of unseen harmful instructions to assess the robustness of a single fine-tuning iteration.
63
+ - **Risks**: Mitigate bypass techniques (e.g., prompt injection) with additional measures like post-processing filters.
64
+
65
+ ## ollama
66
+
67
+ "It is recommended to use fp16, which will reduce the frequency of abnormal rejections."
68
+
69
+
70
+ You can use [huihui_ai/qwen2.5-censortune:0.5b](https://ollama.com/huihui_ai/qwen2.5-censortune:0.5b) directly,
71
+ ```
72
+ ollama run huihui_ai/qwen2.5-censortune:0.5b
73
+ ```
74
+
75
+ ## Usage
76
+ You can use this model in your applications by loading it with Hugging Face's `transformers` library:
77
+
78
+
79
+ ```python
80
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
81
+ import torch
82
+ import os
83
+ import signal
84
+
85
+ cpu_count = os.cpu_count()
86
+ print(f"Number of CPU cores in the system: {cpu_count}")
87
+ half_cpu_count = cpu_count // 2
88
+ os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
89
+ os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
90
+ torch.set_num_threads(half_cpu_count)
91
+
92
+ print(f"PyTorch threads: {torch.get_num_threads()}")
93
+ print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
94
+ print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
95
+
96
+ # Load the model and tokenizer
97
+ NEW_MODEL_ID = "huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune"
98
+ print(f"Load Model {NEW_MODEL_ID} ... ")
99
+ quant_config_4 = BitsAndBytesConfig(
100
+ load_in_4bit=True,
101
+ bnb_4bit_compute_dtype=torch.bfloat16,
102
+ bnb_4bit_use_double_quant=True,
103
+ llm_int8_enable_fp32_cpu_offload=True,
104
+ )
105
+
106
+ model = AutoModelForCausalLM.from_pretrained(
107
+ NEW_MODEL_ID,
108
+ device_map="auto",
109
+ trust_remote_code=True,
110
+ #quantization_config=quant_config_4,
111
+ torch_dtype=torch.bfloat16
112
+ )
113
+ tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
114
+ if tokenizer.pad_token is None:
115
+ tokenizer.pad_token = tokenizer.eos_token
116
+ tokenizer.pad_token_id = tokenizer.eos_token_id
117
+
118
+ initial_messages = [{"role": "system", "content": "You are a helpful assistant."}]
119
+ messages = initial_messages.copy()
120
+
121
+ class CustomTextStreamer(TextStreamer):
122
+ def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
123
+ super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
124
+ self.generated_text = ""
125
+ self.stop_flag = False
126
+
127
+ def on_finalized_text(self, text: str, stream_end: bool = False):
128
+ self.generated_text += text
129
+ print(text, end="", flush=True)
130
+ if self.stop_flag:
131
+ raise StopIteration
132
+
133
+ def stop_generation(self):
134
+ self.stop_flag = True
135
+
136
+ def generate_stream(model, tokenizer, messages, max_new_tokens):
137
+ input_ids = tokenizer.apply_chat_template(
138
+ messages,
139
+ tokenize=True,
140
+ add_generation_prompt=True,
141
+ return_tensors="pt"
142
+ )
143
+ attention_mask = torch.ones_like(input_ids, dtype=torch.long)
144
+ tokens = input_ids.to(model.device)
145
+ attention_mask = attention_mask.to(model.device)
146
+
147
+ streamer = CustomTextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
148
+
149
+ def signal_handler(sig, frame):
150
+ streamer.stop_generation()
151
+ print("\n[Generation stopped by user with Ctrl+C]")
152
+
153
+ signal.signal(signal.SIGINT, signal_handler)
154
+
155
+ print("Response: ", end="", flush=True)
156
+ try:
157
+ generated_ids = model.generate(
158
+ tokens,
159
+ attention_mask=attention_mask,
160
+ use_cache=False,
161
+ max_new_tokens=max_new_tokens,
162
+ do_sample=True,
163
+ pad_token_id=tokenizer.pad_token_id,
164
+ streamer=streamer
165
+ )
166
+ del generated_ids
167
+ except StopIteration:
168
+ print("\n[Stopped by user]")
169
+
170
+ del input_ids, attention_mask
171
+ torch.cuda.empty_cache()
172
+ signal.signal(signal.SIGINT, signal.SIG_DFL)
173
+
174
+ return streamer.generated_text, streamer.stop_flag
175
+
176
+ while True:
177
+ user_input = input("\nUser: ").strip()
178
+ if user_input.lower() == "/exit":
179
+ print("Exiting chat.")
180
+ break
181
+ if user_input.lower() == "/clear":
182
+ messages = initial_messages.copy()
183
+ print("Chat history cleared. Starting a new conversation.")
184
+ continue
185
+ if not user_input:
186
+ print("Input cannot be empty. Please enter something.")
187
+ continue
188
+ messages.append({"role": "user", "content": user_input})
189
+ response, stop_flag = generate_stream(model, tokenizer, messages, 8192)
190
+ if stop_flag:
191
+ continue
192
+ messages.append({"role": "assistant", "content": response})
193
+
194
+ ```
195
+
196
+ ## Pass Rate Description
197
+
198
+ The pass rate is defined as the proportion of harmful instructions that did not trigger the test condition (TestPassed=False) out of the total number of instructions processed. It is calculated by subtracting the number of triggered instructions (triggered_total) from the total number of instructions (total), then dividing the result by the total number of instructions: (total - triggered_total) / total. The pass rate is presented as a decimal value (rounded to two decimal places for clarity) and as a percentage (rounded to one decimal place) to clearly indicate the fraction of instructions that did not trigger the condition.
199
+
200
+ The test set data comes from [huihui-ai/harmbench_behaviors](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors), the test code, [TestPassed.py](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated-v3/blob/main/TestPassed.py).
201
+
202
+ The test result is [0.00%](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated-v3/blob/main/TestPassed.jsonl).
203
+ ```
204
+ python TestPassed.py
205
+ Load Model huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune ...
206
+ Processing harmful instructions: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 320/320 [00:31<00:00, 10.09it/s]
207
+ Passed total: 0/320, Passed ratio: 0.00 (0.00%)
208
+ ```
209
+
210
+ Below is the comparison of pass rates.
211
+
212
+ | Model | Passed total | Passed ratio |
213
+ |--------------------------------------|--------------|--------------|
214
+ | Qwen2.5-0.5B-Instruct-CensorTune | 0/320 | 0.00% |
215
+ | Qwen2.5-0.5B-Instruct | 201/320 | 62.8% |
216
+ | Qwen2.5-0.5B-Instruct-abliterated | 310/320 | 96.9% |
217
+ | Qwen2.5-0.5B-Instruct-abliterated-v2 | 317/320 | 99.1% |
218
+ | Qwen2.5-0.5B-Instruct-abliterated-v3 | **320/320** | **100.00%** |
219
+
220
+
221
+ ### Donation
222
+
223
+ If you like it, please click 'like' and follow us for more updates.
224
+ You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
225
+
226
+ ##### Your donation helps us continue our further development and improvement, a cup of coffee can do it.
227
+ - bitcoin(BTC):
228
+ ```
229
+ bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
230
  ```