- Libraries
- sentence-transformers
How to use aisquared/bolt-embedding-small-gguf with sentence-transformers:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("aisquared/bolt-embedding-small-gguf")
sentences = [
"I'm trying to write a PHP script which reads SIP (session initiation protocol) signals from a hardware switch to gets specific details and then return some data back to the switch.\nBeing a complete newbie to this SIP thing I don't know how to interact with the switch sending SIP signal. Do we need to send some message to the switch to get response?\nI googled SIP but got only general info regarding what SIP is all about but nothing programmatic.\nCan any one provide any pointers to any tutorials which show how interact with a SIP signal programmatically?\nAre there any free online services that simulate SIP signals for testing purpose?\n",
"Lake Okahumpka is a freshwater lake in Wildwood, Florida, United States. Lake Okahumpka Park is along part of its shoreline. In 1980, the United States Geological Survey reported on the hydrology of Lake Okahumpka and Lake Deaton area.\n\nThe lake is east of Wildwood on the south side of State Road 44. The lake has been treated for hydrilla. Ring neck ducks have been hunted from its shores.\n\nSee also\nOkahumpka, Florida\n\nReferences\n\nBodies of water of Sumter County, Florida\nOkahumpka",
"Because of different regional setting on different machines. To have date time output in the same format you ahve to specify format string explciitly:\ndate.ToString(\"yyyy-MM-dd HH:mm:ss\");\n\nAlso as John recommeded in comments below if you want having date time output in the same format on different machines despite local regional settings you can use InvariantCulture format provider:\ndate.ToString(CultureInfo.InvariantCulture);\n\nMSDN:\n\nThe invariant culture is culture-insensitive; it is associated with\n the English language but not with any country/region\n\nMSDN:\n\nStandard Date and Time Format Strings\nCustom Date and Time Format Strings\n\n",
"The President of India plays a ceremonial role in foreign affairs, appointing ambassadors and ratifying treaties, but the day‑to‑day conduct of diplomacy is handled by the Ministry of External Affairs and the Prime Minister's Office."
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [4, 4]
- llama-cpp-python
How to use aisquared/bolt-embedding-small-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama
llm = Llama.from_pretrained(
    repo_id="aisquared/bolt-embedding-small-gguf",
    filename="bolt-embedding-small-GGUF.gguf",
    embedding=True,  # this is an embedding model, not a text-generation model
)
embeddings = llm.embed("Once upon a time,")
print(len(embeddings))
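Whichever backend produces them, the resulting vectors are typically compared with cosine similarity (the default metric behind sentence-transformers' model.similarity). A minimal sketch of that computation, using toy 4-dimensional vectors as hypothetical stand-ins for real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings (hypothetical values)
v1 = [0.1, 0.3, -0.2, 0.4]
v2 = [0.1, 0.3, -0.2, 0.4]
v3 = [-0.4, 0.2, 0.3, -0.1]

print(round(cosine_similarity(v1, v2), 3))  # identical vectors -> 1.0
print(cosine_similarity(v1, v3))            # dissimilar vectors score lower
```

Real embeddings have hundreds of dimensions, but the computation is the same.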
- Local Apps
- llama.cpp
How to use aisquared/bolt-embedding-small-gguf with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible embedding server (serves /v1/embeddings):
llama-server -hf aisquared/bolt-embedding-small-gguf --embedding
# Compute embeddings directly in the terminal:
llama-embedding -hf aisquared/bolt-embedding-small-gguf -p "Hello world"
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible embedding server (serves /v1/embeddings):
llama-server -hf aisquared/bolt-embedding-small-gguf --embedding
# Compute embeddings directly in the terminal:
llama-embedding -hf aisquared/bolt-embedding-small-gguf -p "Hello world"
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible embedding server (serves /v1/embeddings):
./llama-server -hf aisquared/bolt-embedding-small-gguf --embedding
# Compute embeddings directly in the terminal:
./llama-embedding -hf aisquared/bolt-embedding-small-gguf -p "Hello world"
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-embedding
# Start a local OpenAI-compatible embedding server (serves /v1/embeddings):
./build/bin/llama-server -hf aisquared/bolt-embedding-small-gguf --embedding
# Compute embeddings directly in the terminal:
./build/bin/llama-embedding -hf aisquared/bolt-embedding-small-gguf -p "Hello world"
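Once llama-server is running with the --embedding flag, clients can request vectors through its OpenAI-compatible /v1/embeddings endpoint. A minimal client sketch using only the Python standard library; port 8080 is the llama-server default, and the actual HTTP call is left commented out so the snippet runs without a live server:

```python
import json
import urllib.request

# Default llama-server address; adjust if you passed --port
URL = "http://localhost:8080/v1/embeddings"

# OpenAI-style request body: "input" may be a string or a list of strings
payload = {"input": ["Once upon a time,", "Lake Okahumpka is a freshwater lake."]}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    URL, data=body, headers={"Content-Type": "application/json"}
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     vectors = [item["embedding"] for item in data["data"]]
#     print(len(vectors), len(vectors[0]))

print(req.full_url)
```

The response follows the OpenAI embeddings schema: a "data" list whose entries each carry an "embedding" vector and an "index".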
- Ollama
How to use aisquared/bolt-embedding-small-gguf with Ollama:
# Pull the model (embedding models are used via Ollama's embeddings API, not an interactive "ollama run" chat):
ollama pull hf.co/aisquared/bolt-embedding-small-gguf
- Unsloth Studio
How to use aisquared/bolt-embedding-small-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for aisquared/bolt-embedding-small-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for aisquared/bolt-embedding-small-gguf to start chatting
Use HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for aisquared/bolt-embedding-small-gguf to start chatting
- Docker Model Runner
How to use aisquared/bolt-embedding-small-gguf with Docker Model Runner:
docker model run hf.co/aisquared/bolt-embedding-small-gguf
- Lemonade
How to use aisquared/bolt-embedding-small-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull aisquared/bolt-embedding-small-gguf
Run and chat with the model
lemonade run user.bolt-embedding-small-gguf-{{QUANT_TAG}}
List all available models
lemonade list