Commit 4e92611 (verified, parent 9445ab2) by alexgusevski: Upload README.md with huggingface_hub

Files changed: README.md (+78 −0)
---
license: apache-2.0
datasets:
- TeichAI/claude-4.5-opus-high-reasoning-250x
base_model: DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
tags:
- thinking
- reasoning
- instruct
- Claude4.5-Opus
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- role play
- 128k context
- llama3.3
- llama-3
- llama-3.3
- unsloth
- finetune
- mlx
- mlx-my-repo
pipeline_tag: text-generation
library_name: transformers
---

# alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-3Bit

The model [alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-3Bit](https://huggingface.co/alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-3Bit) was converted to MLX format from [DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning](https://huggingface.co/DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning) using mlx-lm version **0.29.1**.

## Use with mlx

```bash
pip install mlx-lm
```
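
If you just want a quick test from the terminal, mlx-lm also installs a command-line generator. A minimal sketch (flag names as in recent mlx-lm releases; the model is downloaded on first use):

```shell
# One-off generation from the command line; applies the chat template automatically.
mlx_lm.generate \
  --model alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-3Bit \
  --prompt "hello"
```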

```python
from mlx_lm import load, generate

model, tokenizer = load("alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-3Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
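
If the tokenizer ships without a chat template, a fallback is to build the Llama 3.x prompt string by hand. The sketch below is a hypothetical helper (not part of mlx-lm); the special tokens follow the standard Llama 3 prompt format, but verify them against this repo's `tokenizer_config.json` before relying on it:

```python
# Hypothetical fallback: assemble a Llama 3.x-style chat prompt manually.
# The <|...|> tokens below are the standard Llama 3 prompt-format markers.
def build_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([{"role": "user", "content": "hello"}])
```

The resulting string can be passed to `generate(...)` exactly like the template-produced prompt above.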