Meta-Llama-3.1-8B-Instruct TensorRT-LLM checkpoint (W8A8 SmoothQuant + INT8 KV)

TensorRT-LLM checkpoint for Meta-Llama-3.1-8B-Instruct, with W8A8 SmoothQuant quantization for model compute and INT8 KV cache. Use with trtllm-build to produce an engine for inference.

Model details

| Item | Value |
| --- | --- |
| Base model | Meta-Llama-3.1-8B-Instruct |
| Framework | TensorRT-LLM (checkpoint format) |
| Weight/activation quantization | W8A8 SmoothQuant (W8A8_SQ_PER_CHANNEL_PER_TOKEN_PLUGIN) |
| KV cache | INT8 |
| Producer | TensorRT-LLM convert_checkpoint.py |
| Key conversion flags | --smoothquant 0.5 --per_token --per_channel --int8_kv_cache |
| Calibration size | 512 samples (--calib_size 512) |
| Architecture | LlamaForCausalLM (decoder-only) |
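The INT8 KV cache entry means keys and values are stored as symmetric INT8 with scaling factors obtained from calibration. A minimal numpy sketch of the idea (illustrative only, not TensorRT-LLM's kernel code):

```python
import numpy as np

def int8_quantize(kv, scale=None):
    # Symmetric per-tensor INT8: scale normally comes from offline
    # calibration; here it is derived from the data for illustration.
    if scale is None:
        scale = np.abs(kv).max() / 127.0
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale

kv = np.random.default_rng(1).normal(size=(2, 4, 8)).astype(np.float32)
q, s = int8_quantize(kv)
deq = q.astype(np.float32) * s  # dequantized values, error bounded by s/2
```

Storing K/V at 1 byte per element halves KV-cache memory relative to float16, at the cost of the bounded rounding error shown above.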

Build (how to produce this checkpoint)

This checkpoint is produced using the TensorRT-LLM Llama converter with SmoothQuant and INT8 KV cache enabled:

python TensorRT-LLM/examples/models/core/llama/convert_checkpoint.py \
  --model_dir /path/to/Meta-Llama-3.1-8B-Instruct \
  --output_dir ./llama-3.1-8b-instruct-trtllm-ckpt-wq_w8a8sq-kv_int8 \
  --dtype float16 \
  --tp_size 1 \
  --smoothquant 0.5 \
  --per_token \
  --per_channel \
  --int8_kv_cache \
  --calib_size 512
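The --smoothquant 0.5 flag sets the migration strength α. Conceptually, SmoothQuant divides each activation channel by a per-channel factor and multiplies the matching weight row by the same factor, leaving the matmul result unchanged while shrinking activation outliers. A minimal numpy sketch of the math, not the converter's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4)) * np.array([1.0, 50.0, 0.1, 5.0])  # channel 1 is an outlier
W = rng.normal(size=(4, 3))

alpha = 0.5  # matches --smoothquant 0.5
# Per-channel smoothing factor: s_j = max|X_j|^alpha / max|W_j|^(1-alpha)
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)
X_s, W_s = X / s, W * s[:, None]

# The product is mathematically unchanged; only the quantization
# difficulty migrates from activations into weights.
assert np.allclose(X @ W, X_s @ W_s)
```

After smoothing, both X_s and W_s have tamer per-channel ranges, which is what makes per-token activation and per-channel weight INT8 quantization (--per_token --per_channel) accurate.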

Environment note

In this environment, loading the slow tokenizer (use_fast=False) returns an invalid object for this model, so tokenizer loading is forced to use_fast=True at runtime during conversion. This workaround affects only tokenizer loading during calibration; it does not change the target quantization configuration.

Output

After conversion, --output_dir contains:

  • config.json - TensorRT-LLM checkpoint config
  • rank0.safetensors - rank-0 checkpoint weights (one file per rank; a single file here since tp_size is 1)
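The quantization settings land in the checkpoint's config.json, which can be sanity-checked before building. A small sketch; the key names follow the TensorRT-LLM checkpoint schema and can vary across releases, and the demo runs against a stand-in config rather than a real checkpoint:

```python
import json
import pathlib
import tempfile

def ckpt_quant_summary(ckpt_dir):
    # Read the quantization block from a TensorRT-LLM checkpoint config.json.
    cfg = json.loads((pathlib.Path(ckpt_dir) / "config.json").read_text())
    quant = cfg.get("quantization", {})
    return quant.get("quant_algo"), quant.get("kv_cache_quant_algo")

# Demo against a stand-in config (point at the real checkpoint dir instead):
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "config.json").write_text(json.dumps({
        "architecture": "LlamaForCausalLM",
        "quantization": {"quant_algo": "W8A8_SQ_PER_CHANNEL_PER_TOKEN_PLUGIN",
                         "kv_cache_quant_algo": "INT8"},
    }))
    algo, kv_algo = ckpt_quant_summary(d)
```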

Upload (how to upload to Hugging Face)

cd ./llama-3.1-8b-instruct-trtllm-ckpt-wq_w8a8sq-kv_int8

huggingface-cli repo create rungalileo/llama-3.1-8b-instruct-trtllm-ckpt-wq_w8a8sq-kv_int8 --repo-type model
huggingface-cli upload rungalileo/llama-3.1-8b-instruct-trtllm-ckpt-wq_w8a8sq-kv_int8 . --repo-type model

How to use

1. Build engine

git clone https://huggingface.co/rungalileo/llama-3.1-8b-instruct-trtllm-ckpt-wq_w8a8sq-kv_int8
cd llama-3.1-8b-instruct-trtllm-ckpt-wq_w8a8sq-kv_int8

trtllm-build --checkpoint_dir . --output_dir ./engine \
  --max_batch_size 1 --max_input_len 512 --max_seq_len 1024
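The build limits above bound the KV-cache memory the engine must reserve. A back-of-the-envelope estimate, assuming the standard Llama-3.1-8B shape (32 layers, 8 KV heads with GQA, head dim 128):

```python
# KV-cache size estimate for the build above (INT8 => 1 byte per element).
layers, kv_heads, head_dim = 32, 8, 128   # assumed Llama-3.1-8B config
max_batch, max_seq = 1, 1024              # from the trtllm-build flags
bytes_per_elem = 1                        # INT8 KV cache; float16 would be 2

# 2x for keys and values
kv_bytes = 2 * layers * kv_heads * head_dim * max_seq * max_batch * bytes_per_elem
print(kv_bytes / 2**20, "MiB")  # 64.0 MiB
```

At float16 this would be 128 MiB, so the INT8 KV cache halves the reservation and leaves more headroom for larger batch sizes or longer sequences.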

2. Run inference

Use a tokenizer from the base model:

trtllm-serve ./engine --tokenizer meta-llama/Meta-Llama-3.1-8B-Instruct --port 8000
# OpenAI-compatible API: http://localhost:8000/v1/completions
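Once trtllm-serve is running, any OpenAI-style client can hit the endpoint. A minimal stdlib sketch; the "model" field is an assumption here and must match whatever name the server registered:

```python
import json
from urllib import request

def completion_request(prompt, base_url="http://localhost:8000",
                       model="llama-3.1-8b-instruct"):
    # Build an OpenAI-compatible /v1/completions request.
    # "model" must match the name trtllm-serve registered (assumed here).
    body = {"model": model, "prompt": prompt,
            "max_tokens": 64, "temperature": 0.0}
    return request.Request(f"{base_url}/v1/completions",
                           data=json.dumps(body).encode("utf-8"),
                           headers={"Content-Type": "application/json"})

req = completion_request("Summarize SmoothQuant in one sentence.")
# Send with: resp = request.urlopen(req)  (requires the server to be running)
```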

References

  • TensorRT-LLM
  • Meta-Llama-3.1-8B-Instruct