llama-3.1-phishing-adapter-a100

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0060
  • Accuracy: 1.0
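
The adapter can be loaded on top of the base model with PEFT. This is a minimal inference sketch, assuming a causal-LM adapter; the card does not document the expected input format, so the prompt below is purely hypothetical.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "mathisdu/llama-3.1-phishing-adapter-a100"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Hypothetical prompt -- the expected input format is not documented on this card.
prompt = "Classify the following email as phishing or legitimate:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```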

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: paged AdamW (OptimizerNames.PAGED_ADAMW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 1
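
For reference, a minimal sketch of the Hugging Face TrainingArguments matching the settings above. The output_dir is assumed, and OptimizerNames.PAGED_ADAMW corresponds to the "paged_adamw_32bit" optim string in recent Transformers releases; the dataset, base-model loading, and LoRA configuration are not recorded on this card.

```python
# Sketch only: mirrors the hyperparameters listed above. The dataset,
# model loading, and LoRA config are not documented on this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-3.1-phishing-adapter-a100",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    optim="paged_adamw_32bit",      # OptimizerNames.PAGED_ADAMW; betas/epsilon are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```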

Training results

Training Loss   Epoch    Step   Validation Loss   Accuracy
0.2262          0.0870   25     0.0661            1.0
0.0715          0.1739   50     0.0525            0.9444
0.0725          0.2609   75     0.0136            0.5500
0.0252          0.3478   100    0.0190            0.8662
0.0109          0.4348   125    0.0392            0.9887
0.0315          0.5217   150    0.0529            1.0
0.0123          0.6087   175    0.0195            1.0
0.0124          0.6957   200    0.0102            1.0
0.0139          0.7826   225    0.0056            1.0
0.0124          0.8696   250    0.0065            1.0
0.0111          0.9565   275    0.0060            1.0

Framework versions

  • PEFT 0.18.1
  • Transformers 4.57.6
  • PyTorch 2.9.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.2