Whisper-Small: Optimized for Qualcomm Devices
The HuggingFace Whisper-Small ASR (Automatic Speech Recognition) model is a state-of-the-art system for transcribing spoken language into written text. The model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications, and it excels at long-form transcription, accurately handling audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the maximum decoded sequence length specified below.
This model is based on the implementation of Whisper-Small found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.
Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
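To see what a compile-and-profile round trip looks like, here is a minimal sketch using the `qai_hub` Python client. The local ONNX file name is a placeholder, and the device string and compile options are examples that may differ for this model.

```python
# A minimal sketch of compiling and profiling one model component with the
# qai_hub client. "whisper_small_encoder.onnx" is a hypothetical local export;
# the device name and compile options are illustrative assumptions.
import qai_hub as hub

device = hub.Device("Snapdragon X Elite CRD")

compile_job = hub.submit_compile_job(
    model="whisper_small_encoder.onnx",           # placeholder local ONNX file
    device=device,
    options="--target_runtime qnn_context_binary",
)

profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),         # compiled artifact from AI Hub
    device=device,
)
print(profile_job.download_profile())             # per-layer timing and memory stats
```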
Getting Started
There are two ways to deploy this model on your device:
Option 1: Download Pre-Exported Models
Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | QAIRT 2.42 | Download |
For more device-specific assets and performance metrics, visit Whisper-Small on Qualcomm® AI Hub.
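As a rough sketch of how a downloaded PRECOMPILED_QNN_ONNX asset could be run, the snippet below loads it with ONNX Runtime's QNN execution provider. The asset file name, backend library path, and exact input layout are assumptions; check the ONNX Runtime QNN EP documentation for your platform.

```python
# A minimal sketch, assuming a downloaded PRECOMPILED_QNN_ONNX encoder asset.
# "hfwhisperencoder.onnx" is a placeholder name; "QnnHtp.dll" is the Windows
# HTP backend (use libQnnHtp.so on Android/Linux targets).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "hfwhisperencoder.onnx",
    providers=[
        ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),  # run on the NPU
        "CPUExecutionProvider",                                    # fallback
    ],
)

# The encoder takes an 80x3000 log-mel spectrogram (30 s of audio).
input_name = session.get_inputs()[0].name
features = np.zeros((1, 80, 3000), dtype=np.float32)  # replace with real features
outputs = session.run(None, {input_name: features})
print([o.shape for o in outputs])
```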
Option 2: Export with Custom Configurations
Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our Whisper-Small repository on GitHub for usage instructions.
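For orientation, the sketch below drives the standard export entry point from Python. The module name `whisper_small_v2`, the device string, and the flags are assumptions based on the usual Qualcomm® AI Hub Models layout; consult the GitHub repository for the exact package name and supported options for this model.

```python
# A hedged sketch of invoking the qai_hub_models export entry point. The module
# name "whisper_small_v2" and the flags below are assumptions; verify them
# against the repository before use.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "qai_hub_models.models.whisper_small_v2.export",
        "--device", "Snapdragon X Elite CRD",       # target hosted device (example)
        "--target-runtime", "qnn_context_binary",   # or another supported runtime
    ],
    check=True,
)
```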
Model Details
Model Type: Speech recognition
Model Stats:
- Model checkpoint: openai/whisper-small
- Input resolution: 80x3000 (80 mel bins x 3000 frames; 30 seconds of audio)
- Max decoded sequence length: 200 tokens
- Number of parameters (HfWhisperEncoder): 102M
- Model size (HfWhisperEncoder) (float): 391 MB
- Number of parameters (HfWhisperDecoder): 139M
- Model size (HfWhisperDecoder) (float): 533 MB
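To make the input resolution concrete, the following sketch uses the Hugging Face `transformers` feature extractor for the openai/whisper-small checkpoint to produce the 80x3000 log-mel features; the exact input layout expected by the exported encoder may differ, so treat this purely as an illustration of the stated shape.

```python
# Sketch: producing 80x3000 log-mel features for 30 s of 16 kHz audio.
import numpy as np
from transformers import WhisperFeatureExtractor

extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
audio = np.zeros(16000 * 30, dtype=np.float32)  # placeholder: 30 s of silence

features = extractor(audio, sampling_rate=16000, return_tensors="np").input_features
print(features.shape)  # (1, 80, 3000): 80 mel bins x 3000 frames
```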
Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 10.402 ms | 286 - 286 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 10.066 ms | 56 - 64 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 12.842 ms | 62 - 63 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 13.707 ms | 60 - 122 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 8.503 ms | 59 - 70 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 7.462 ms | 75 - 86 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 9.994 ms | 60 - 60 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 9.674 ms | 60 - 68 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 18.527 ms | 54 - 63 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 11.89 ms | 29 - 31 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 13.382 ms | 55 - 64 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 13.216 ms | 60 - 128 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 17.65 ms | 39 - 47 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | 18.527 ms | 54 - 63 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 15.025 ms | 54 - 60 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 8.262 ms | 0 - 9 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 7.206 ms | 60 - 71 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 130.719 ms | 226 - 226 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 105.285 ms | 109 - 117 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 135.287 ms | 1 - 258 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 156.352 ms | 127 - 131 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 79.543 ms | 129 - 141 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 62.455 ms | 126 - 136 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 114.572 ms | 0 - 0 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 82.377 ms | 1 - 8 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 404.688 ms | 0 - 8 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 113.951 ms | 0 - 4 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 135.906 ms | 0 - 9 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 136.123 ms | 0 - 55 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 280.226 ms | 1 - 10 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | 404.688 ms | 0 - 8 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 204.47 ms | 1 - 6 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 59.713 ms | 1 - 14 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 45.361 ms | 1 - 10 MB | NPU |
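Using the rule of thumb from the model description (time to first token is roughly the encoder latency, and each additional token costs one decoder pass), a back-of-the-envelope estimate can be read off the table. The numbers below are the Snapdragon® 8 Elite Gen 5 Mobile QNN_CONTEXT_BINARY rows; real throughput also depends on feature extraction, tokenization, and I/O.

```python
# Rough end-to-end latency estimate for one 30 s chunk, using table values.
encoder_ms = 45.361   # HfWhisperEncoder: one pass per 30 s chunk (time to first token)
decoder_ms = 7.206    # HfWhisperDecoder: one pass per generated token
num_tokens = 200      # max decoded sequence length

total_ms = encoder_ms + num_tokens * decoder_ms
print(f"~{total_ms / 1000:.2f} s to decode {num_tokens} tokens for a 30 s clip "
      f"(~{30 / (total_ms / 1000):.1f}x real time)")
```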
License
- The license for the original implementation of Whisper-Small can be found here.
Community
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
