How to use microsoft/graphcodebert-base with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="microsoft/graphcodebert-base")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/graphcodebert-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/graphcodebert-base")
```
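A quick way to sanity-check the pipeline is to fill a masked token in a small code snippet. A minimal sketch, assuming the RoBERTa-style `<mask>` token used by the GraphCodeBERT tokenizer (the snippet itself is illustrative):

```python
# Predict the masked token in a short Python snippet.
# GraphCodeBERT uses a RoBERTa-style tokenizer, so the mask token is "<mask>".
from transformers import pipeline

pipe = pipeline("fill-mask", model="microsoft/graphcodebert-base")

code = "def max(a, b): return a if a > b else <mask>"
for pred in pipe(code, top_k=3):
    # Each prediction includes the filled token string and a confidence score.
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```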
GraphCodeBERT model
GraphCodeBERT is a graph-based pre-trained model for programming languages, built on the Transformer architecture, that considers data-flow information in addition to code token sequences. It has 12 layers, 768-dimensional hidden states, and 12 attention heads, with a maximum sequence length of 512. The model is trained on the CodeSearchNet dataset, which includes 2.3M functions paired with documentation across six programming languages.
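These architecture details can be read directly from the published configuration. A minimal sketch; the attribute names follow the standard RoBERTa configuration schema in Transformers, which this checkpoint is assumed to use:

```python
# Inspect the checkpoint's configuration to confirm the numbers above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/graphcodebert-base")
print(config.num_hidden_layers)    # 12 Transformer layers
print(config.hidden_size)          # 768-dimensional hidden states
print(config.num_attention_heads)  # 12 attention heads
# RoBERTa-style checkpoints typically report max_position_embeddings = 514,
# which corresponds to a usable sequence length of 512 after special offsets.
print(config.max_position_embeddings)
```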
More details can be found in the paper by Guo et al.
Disclaimer: The team releasing GraphCodeBERT did not write a model card for this model, so this model card has been written by Hugging Face community members.