
pratikdoshi/sparse-autoencoders-1

SAELens

Instructions to use pratikdoshi/sparse-autoencoders-1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • SAELens

    How to use pratikdoshi/sparse-autoencoders-1 with SAELens (a fuller usage sketch follows this list):

    # pip install sae-lens
    from sae_lens import SAE
    
    sae, cfg_dict, sparsity = SAE.from_pretrained(
        release="RELEASE_ID",  # e.g., "gpt2-small-res-jb". See other options in https://github.com/jbloomAus/SAELens/blob/main/sae_lens/pretrained_saes.yaml
        sae_id="SAE_ID",  # e.g., "blocks.8.hook_resid_pre". Won't always be a hook point
    )
  • Notebooks
  • Google Colab
  • Kaggle
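
The snippet above only loads the SAE. As a rough sketch of a full round trip, the code below runs the base model with TransformerLens, caches activations at the SAE's hook point, and encodes/decodes them. The prompt, the use of blocks.0.hook_mlp_out (which matches the folder in this repo), and the reliance on sae.cfg.model_name and sae.cfg.hook_name are assumptions, not part of the model card; check the SAELens docs for the exact API of your installed version.

    # pip install sae-lens transformer-lens
    # Sketch only: release ID, hook point, and prompt are assumptions.
    from sae_lens import SAE
    from transformer_lens import HookedTransformer

    sae, cfg_dict, sparsity = SAE.from_pretrained(
        release="RELEASE_ID",            # fill in as in the snippet above
        sae_id="blocks.0.hook_mlp_out",  # matches the folder in this repo (assumed)
    )

    # Load the base model the SAE was trained on, as recorded in its config.
    model = HookedTransformer.from_pretrained(sae.cfg.model_name)

    # Run the model, cache activations, and read out the SAE's hook point.
    tokens = model.to_tokens("Sparse autoencoders decompose activations.")
    _, cache = model.run_with_cache(tokens)
    acts = cache[sae.cfg.hook_name]       # [batch, seq, d_in]

    # Encode to sparse features and decode back to a reconstruction.
    feature_acts = sae.encode(acts)       # [batch, seq, d_sae]
    reconstruction = sae.decode(feature_acts)
    print(feature_acts.shape, reconstruction.shape)

feature_acts gives per-token activations of the learned features, and reconstruction can be compared against acts to gauge reconstruction quality.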
sparse-autoencoders-1
134 MB
  • 1 contributor
History: 3 commits
Latest commit: pratikdoshi, Add README.md (b189a97, verified, over 1 year ago)
  • blocks.0.hook_mlp_out: Upload SAE blocks.0.hook_mlp_out, over 1 year ago
  • .gitattributes (1.52 kB): initial commit, over 1 year ago
  • README.md (316 Bytes): Add README.md, over 1 year ago