Instructions for using xxang/FABE with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use xxang/FABE with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="xxang/FABE")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("xxang/FABE")
model = AutoModelForCausalLM.from_pretrained("xxang/FABE")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use xxang/FABE with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "xxang/FABE"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "xxang/FABE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```bash
docker model run hf.co/xxang/FABE
```
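Because the vLLM server above exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch using the `openai` client, assuming the server from the previous step is running on localhost:8000 (the prompt and the placeholder API key are illustrative; the same pattern works against the SGLang server below on port 30000):

```python
from openai import OpenAI

# vLLM does not check the API key by default; any placeholder string works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="xxang/FABE",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```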
- SGLang
How to use xxang/FABE with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "xxang/FABE" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "xxang/FABE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "xxang/FABE" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "xxang/FABE",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use xxang/FABE with Docker Model Runner:
```bash
docker model run hf.co/xxang/FABE
```
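By default this opens an interactive chat with the model. Recent Docker Model Runner releases also accept a prompt as a positional argument for one-shot generation; a sketch, assuming your CLI version supports this (the prompt text is illustrative, check `docker model run --help`):

```bash
# One-shot generation instead of an interactive chat
docker model run hf.co/xxang/FABE "Once upon a time,"
```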
Frontdoor-Adjustment-Backdoor-Elimination (FABE)
Fine-tuned FABE model for "Causality Based Front-door Defense Against Backdoor Attack on Language Models".
The demo code can be found in the Frontdoor-Adjustment-Backdoor-Elimination repository.
Replication
We use Tuna to fine-tune an instruction-tuned LLM as the FABE model; the base model is Llama-2-7B-Chat. We then use OpenBackdoor as the framework to run FABE's defense process.
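At a high level, the front-door defense uses the fine-tuned FABE model to regenerate a possibly poisoned input before the victim model ever sees it, so injected trigger tokens are eliminated from the text. A conceptual sketch of that flow (not the repository's actual API; the prompt wording and function names are hypothetical, and the real pipeline lives in OpenBackdoor/FABE_defense.py):

```python
from transformers import pipeline

# Hypothetical wiring for illustration only.
fabe = pipeline("text-generation", model="xxang/FABE")

def regenerate(text: str) -> str:
    """Ask the FABE model to rewrite the input, dropping any trigger tokens."""
    prompt = f"Rewrite the following sentence while keeping its meaning:\n{text}\n"
    out = fabe(prompt, max_new_tokens=128, return_full_text=False)
    return out[0]["generated_text"].strip()

def defended_predict(victim, text: str):
    # The victim model only ever sees the regenerated (trigger-free) text.
    return victim(regenerate(text))
```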
Installation
You can install FABE through Git:
Git
```bash
git clone https://github.com/lyr17/Instruct-as-backdoor-cleaner.git
cd Instruct-as-backdoor-cleaner
pip install -r requirements.txt
```
Usage
Step 1: fine-tune the base model to obtain the FABE model
```bash
cd Tuna
bash src/train_tuna.sh data/llama_file.json 1e-5
cd ..
```
We train the model for 24 hours on 8×V100 GPUs; the fine-tuned checkpoint is saved to a path such as tuna/src/checkpoints/tuna_p/checkpoint-3024.
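Before wiring the checkpoint into the defense, you can sanity-check that it loads with Transformers; a minimal sketch, using the example checkpoint path from above (the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

ckpt = "tuna/src/checkpoints/tuna_p/checkpoint-3024"  # example path from Step 1
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt)

inputs = tokenizer("Rewrite the following sentence:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```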
Step 2: defend the victim model using the FABE model
You can configure FABE's hyperparameters in ./configs/fabe_config.json. Set the "model_path" hyperparameter in this JSON file to the model path saved in Step 1.
The "diversity" hyperparameter is the generation diversity penalty of the FABE model. When defending against BadNets and AddSent attacks, we recommend setting diversity to 0.1; when defending against the SynBkd attack, we recommend setting it to 1.0.
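For example, the two documented fields can be set programmatically; a minimal sketch, assuming fabe_config.json may contain other keys that should be left untouched:

```python
import json

# Point FABE at the checkpoint from Step 1 and set the diversity penalty.
# Only "model_path" and "diversity" are documented above; everything else
# in fabe_config.json is left as-is.
with open("./configs/fabe_config.json") as f:
    cfg = json.load(f)

cfg["model_path"] = "tuna/src/checkpoints/tuna_p/checkpoint-3024"
cfg["diversity"] = 0.1  # 0.1 for BadNets/AddSent, 1.0 for SynBkd

with open("./configs/fabe_config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```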
```bash
cd OpenBackdoor
python FABE_defense.py --config_path ./configs/fabe_config.json
cd ..
```