Instructions for using ProdicusII/ZeroShotBioNER with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ProdicusII/ZeroShotBioNER with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="ProdicusII/ZeroShotBioNER")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("ProdicusII/ZeroShotBioNER")
model = AutoModelForTokenClassification.from_pretrained("ProdicusII/ZeroShotBioNER")
```
- Notebooks
- Google Colab
- Kaggle
Restructure
#1
by nikolamilosevic - opened
README.md
CHANGED
```diff
@@ -1,5 +1,10 @@
 ---
 license: mit
 ---
-#
-
+# Zero- and Few-shot NER model
+
+## Model description
+The model takes two strings as input: String1 is the NER label and must be a phrase naming the entity; String2 is a short text in which String1 is searched for semantically.
+The model outputs a list of zeros and ones marking where the entity occurs; these correspond to the tokens of String2 (as produced by the transformer tokenizer), not to its words.
+
+## Citation
```
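Because the model's 0/1 predictions align with tokenizer tokens rather than words, a post-processing step is needed to recover the matched entity text. A minimal sketch of that step, assuming token character offsets are available (e.g. from a fast tokenizer's `return_offsets_mapping`); the offsets and predictions below are illustrative values, not real model output:

```python
def extract_spans(text, offsets, predictions):
    """Merge runs of consecutive tokens predicted 1 into character spans of `text`."""
    spans = []
    start = end = None
    for (s, e), p in zip(offsets, predictions):
        if p == 1:
            if start is None:
                start = s  # open a new span at this token
            end = e        # extend the span to cover this token
        elif start is not None:
            spans.append(text[start:end])  # close the current span
            start = None
    if start is not None:
        spans.append(text[start:end])  # flush a span that runs to the end
    return spans


text = "given aspirin and ibuprofen"
# Hypothetical per-token character offsets and 0/1 predictions, for illustration only
offsets = [(0, 5), (6, 13), (14, 17), (18, 27)]
predictions = [0, 1, 0, 1]
print(extract_spans(text, offsets, predictions))  # ['aspirin', 'ibuprofen']
```

Working at the character-offset level sidesteps the token/word mismatch entirely: whatever subword pieces the tokenizer produces, the recovered spans are exact substrings of String2.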