noclip84/Qwen3.6-27B-heretic-ARA
#2272
by T1Keria
Hm, something went wrong with the previous run, and it was only yesterday...
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:
Qwen3.6-27B-heretic-ARA
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:**************************************************************************************
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** There are 2 possible reasons for this:
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** - the model has not been added to convert_hf_to_gguf_update.py yet
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** - the pre-tokenization config has changed upstream
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** ref: https://github.com/ggml-org/llama.cpp/pull/6920
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:**
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:** chkhsh: 1444df51289cfa8063b96f0e62b1125440111bc79a52003ea14b6eac7016fd5f
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:**************************************************************************************
Qwen3.6-27B-heretic-ARA WARNING:hf-to-gguf:
Qwen3.6-27B-heretic-ARA
Qwen3.6-27B-heretic-ARA Traceback (most recent call last):
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 13488, in <module>
Qwen3.6-27B-heretic-ARA main()
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 13482, in main
Qwen3.6-27B-heretic-ARA model_instance.write()
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 945, in write
Qwen3.6-27B-heretic-ARA self.prepare_metadata(vocab_only=False)
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1089, in prepare_metadata
Qwen3.6-27B-heretic-ARA self.set_vocab()
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1061, in set_vocab
Qwen3.6-27B-heretic-ARA self._set_vocab_gpt2()
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1578, in _set_vocab_gpt2
Qwen3.6-27B-heretic-ARA tokens, toktypes, tokpre = self.get_vocab_base()
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1245, in get_vocab_base
Qwen3.6-27B-heretic-ARA tokpre = self.get_vocab_base_pre(tokenizer)
Qwen3.6-27B-heretic-ARA File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1566, in get_vocab_base_pre
Qwen3.6-27B-heretic-ARA raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()")
Qwen3.6-27B-heretic-ARA NotImplementedError: BPE pre-tokenizer was not recognized - update get_vocab_base_pre()
Qwen3.6-27B-heretic-ARA yes: standard output: Broken pipe
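
For context, the failure comes from get_vocab_base_pre() in convert_hf_to_gguf.py: it encodes a fixed test string with the model's tokenizer, hashes the resulting token IDs, and looks that hash (the chkhsh printed above) up in a table of known pre-tokenizers. A rough sketch of that logic is below; the test string placeholder and the "qwen2" mapping are illustrative assumptions, not the upstream source:

```python
# Simplified sketch of the pre-tokenizer detection in convert_hf_to_gguf.py
# (not the upstream code verbatim).
from hashlib import sha256
from transformers import AutoTokenizer

def detect_pre_tokenizer(model_dir: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_dir)

    # The real script uses a long test string full of whitespace, emoji and
    # mixed scripts; "..." here is only a stand-in, not the actual string.
    chktxt = "..."
    chkhsh = sha256(str(tokenizer.encode(chktxt)).encode()).hexdigest()

    res = None
    # Hypothetical entry: if the hash from the log above were added upstream
    # and Qwen3.6 keeps a Qwen2-style pre-tokenizer, the mapping might look
    # like this.
    if chkhsh == "1444df51289cfa8063b96f0e62b1125440111bc79a52003ea14b6eac7016fd5f":
        res = "qwen2"

    if res is None:
        # Unknown hash -> the exact error seen in the traceback above.
        raise NotImplementedError(
            "BPE pre-tokenizer was not recognized - update get_vocab_base_pre()"
        )
    return res
```

So either the model's tokenizer config changed upstream, or this hash simply has not been added to convert_hf_to_gguf_update.py yet (see https://github.com/ggml-org/llama.cpp/pull/6920).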
Perhaps something is wrong with the tokenizer? Please remind me in a couple of days to queue it again, as we will upgrade llama.cpp in the coming days!