---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
task: text-generation
inference:
  parameters:
    max_new_tokens: 400
    temperature: 0.7
    top_p: 0.9
    do_sample: true
language:
- en
tags:
- llama-3.1
- personal-assistant
- book-advisor
- merged-lora
base_model: meta-llama/Llama-3.1-8B-Instruct
---

# 📚 Jacob's Personal Book Advisor (Merged Model)

This is a **merged model** that combines Llama-3.1-8B-Instruct with a LoRA adapter trained on Jacob's personal book library.

**✅ Ready for Inference API** - This merged model works directly with the Hugging Face Inference API.

## Features

- Personalized book recommendations drawn from Jacob's library
- Content questions and summaries
- Reading advice based on the actual book collection

## Usage

Prompts use the Alpaca-style instruction template shown in the examples below; a small prompt-building helper is sketched at the end of this card.

### With Inference API

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="jacobpmeyer/book-advisor-merged")

response = client.text_generation(
    "### Instruction:\nRecommend a science fiction book\n\n### Response:\n",
    max_new_tokens=300,
    temperature=0.7,
)
print(response)
```

### Direct Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jacobpmeyer/book-advisor-merged")
model = AutoModelForCausalLM.from_pretrained(
    "jacobpmeyer/book-advisor-merged",
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; remove to load on CPU
)

prompt = "### Instruction:\nWhat's a good book for vacation reading?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Training Details

- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Method**: LoRA fine-tuning, followed by merging the adapter into the base weights (a sketch of the merge step appears at the end of this card)
- **Training Data**: Personal EPUB book collection
- **Format**: Instruction-following (Alpaca style)

## Model Performance

This merged model combines the instruction-following capabilities of Llama-3.1-8B-Instruct with knowledge of Jacob's book library, so its recommendations and insights are grounded in that specific collection.
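
## Prompt Helper (sketch)

The usage examples above build the Alpaca-style prompt by hand, and the direct-usage decode returns the prompt together with the completion. A minimal helper along these lines keeps the template in one place and strips the echoed prompt; the function names here are illustrative conveniences, not part of the model's API.

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca-style template used by this model."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def extract_response(decoded: str) -> str:
    """Return only the text generated after the '### Response:' marker."""
    return decoded.split("### Response:\n", 1)[-1].strip()
```

For example, `extract_response(tokenizer.decode(outputs[0], skip_special_tokens=True))` yields just the recommendation text from the direct-usage snippet above.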
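
## Reproducing the Merge (sketch)

The training details describe LoRA fine-tuning followed by merging. Below is a minimal sketch of that merge step using 🤗 PEFT; the adapter path `path/to/lora-adapter` is a placeholder rather than a published repository, and the exact training configuration is not documented on this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-3.1-8B-Instruct"
ADAPTER = "path/to/lora-adapter"  # placeholder: the LoRA adapter produced by fine-tuning

# Load the base model, apply the LoRA adapter, then fold the adapter into the weights.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, ADAPTER)
merged = model.merge_and_unload()

# Save the merged weights plus tokenizer so the result is a standalone checkpoint.
merged.save_pretrained("book-advisor-merged")
AutoTokenizer.from_pretrained(BASE).save_pretrained("book-advisor-merged")
```

Merging removes the runtime dependency on `peft`, which is what lets the checkpoint work with the plain `transformers` loading code and the Inference API shown above.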