Instructions to use nomic-ai/CodeRankEmbed with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use nomic-ai/CodeRankEmbed with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/CodeRankEmbed", trust_remote_code=True)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

- Notebooks
- Google Colab
- Kaggle
fix: Missing get_input_embeddings / set_input_embeddings on NomicBertModel
#3
by Pringled - opened
NomicBertModel doesn't implement get_input_embeddings() or set_input_embeddings(), so the transformers fallback in EmbeddingAccessMixin tries to resolve them automatically. This fails because:
- `_input_embed_layer` defaults to `embed_tokens`, but NomicBERT uses `word_embeddings`
- `base_model_prefix = "model"`, but `__init__` creates `self.embeddings`, not `self.model`
Adding these two methods fixes this.
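A minimal sketch of the two accessor methods, using a simplified stand-in for the modeling code and assuming NomicBertModel keeps its token embedding layer at `self.embeddings.word_embeddings` (class layout, vocabulary size, and hidden size here are illustrative, not taken from the actual checkpoint):

```python
import torch.nn as nn


class NomicBertEmbeddings(nn.Module):
    """Simplified stand-in for the embedding sub-module."""

    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)


class NomicBertModel(nn.Module):
    """Simplified stand-in; only the parts relevant to the fix are shown."""

    def __init__(self, vocab_size: int = 30528, hidden_size: int = 768):
        super().__init__()
        self.embeddings = NomicBertEmbeddings(vocab_size, hidden_size)

    def get_input_embeddings(self) -> nn.Embedding:
        # Expose the token embedding layer so generic transformers helpers
        # (e.g. resize_token_embeddings, mixin fallbacks) can locate it.
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value: nn.Embedding) -> None:
        # Swap in a replacement token embedding layer.
        self.embeddings.word_embeddings = value
```

With these in place, callers no longer depend on `base_model_prefix` or the `_input_embed_layer` default resolving correctly; the model itself states where its input embeddings live.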