# Llama 3.1 70B - Terraform Principal Architect
This is a fine-tuned LoRA adapter for Llama 3.1 70B Instruct, specialized in generating high-quality, production-ready Terraform code for Google Cloud.
## ⚠️ Note on Model Size
This is the 70B parameter version of the architect. It requires significant VRAM (40GB+) to run, even with 4-bit quantization.
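The 40GB+ figure follows from simple arithmetic: 70B parameters at 4 bits each is already ~35 GB of weights, before the KV cache, activations, and the LoRA adapter are counted. A quick sanity check (weights-only estimate, an assumption for illustration):

```python
# Back-of-envelope VRAM estimate for the 4-bit-quantized 70B base model.
# Assumption: weight storage only; the KV cache, activations, and the
# LoRA adapter add several more GB on top.
params = 70e9          # 70B parameters
bits_per_param = 4     # 4-bit quantization
weights_gb = params * bits_per_param / 8 / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # → ~35 GB for weights alone
```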
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model = "meta-llama/Llama-3.1-70B-Instruct"
adapter_id = "AdarshRL/Llama-3.1-70B-Terraform-Architect"

# 4-bit quantization keeps the 70B base model within a single high-VRAM GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    quantization_config=bnb_config,
)

# Attach the Terraform-specialized LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)
```
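Once the adapter is loaded, generation follows the standard Llama 3.1 chat flow. The sketch below builds a chat-formatted prompt by hand so the token layout is visible; in practice `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces the same string. The system prompt and user request here are illustrative assumptions, not values shipped with the adapter.

```python
# Build a Llama 3.1 chat prompt for a Terraform request.
# Special-token layout follows the Llama 3 chat template.
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    # Hypothetical system prompt for this adapter's specialty:
    "You are a principal cloud architect. Reply with production-ready "
    "Google Cloud Terraform only.",
    "Create a private GKE cluster with a dedicated node pool.",
)

# Then feed the prompt to the loaded model, e.g.:
#   inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
#   outputs = model.generate(**inputs, max_new_tokens=1024)
#   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```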
## Model tree for AdarshRL/llama-3.1-70b-terraform-architect-adapter

- Base model: meta-llama/Llama-3.1-70B
- Fine-tuned from: meta-llama/Llama-3.1-70B-Instruct