Update README: Add model card metadata, ImageNet-1k metrics, and LiteRT usage example

#2
Files changed (1)
  1. README.md +156 -0
README.md CHANGED
@@ -1,8 +1,164 @@
  ---
  library_name: litert
+ pipeline_tag: image-classification
  tags:
  - vision
  - image-classification
  datasets:
  - imagenet-1k
+ model-index:
+ - name: squeezenet1_0
+   results:
+   - task:
+       type: image-classification
+       name: Image Classification
+     dataset:
+       name: ImageNet-1k
+       type: imagenet-1k
+       config: default
+       split: validation
+     metrics:
+     - name: Top 1 Accuracy (Full Precision)
+       type: accuracy
+       value: 0.5811
+     - name: Top 5 Accuracy (Full Precision)
+       type: accuracy
+       value: 0.8044
  ---
+
+ # Squeezenet1_0
+
+ SqueezeNet 1.0 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/abs/1602.07360) by Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer.
32
+
33
+
34
+ ## Model description
35
+
36
+ The model was converted from a checkpoint from PyTorch Vision (`SqueezeNet1_0_Weights.IMAGENET1K_V1`).
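+
+ For reference, a conversion along these lines can be reproduced with [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch). This is a minimal sketch of the standard flow, not the exact pipeline used to produce the checked-in `.tflite` file:
+
+ ```python
+ import ai_edge_torch
+ import torch
+ import torchvision
+
+ # Load the pretrained torchvision checkpoint in eval mode.
+ model = torchvision.models.squeezenet1_0(
+     weights=torchvision.models.SqueezeNet1_0_Weights.IMAGENET1K_V1
+ ).eval()
+
+ # Wrap the model so its input is channels-last (NHWC), matching what
+ # the usage example below feeds to the LiteRT runtime.
+ nhwc_model = ai_edge_torch.to_channel_last_io(model, args=[0])
+
+ sample_inputs = (torch.randn(1, 224, 224, 3),)
+ edge_model = ai_edge_torch.convert(nhwc_model, sample_inputs)
+ edge_model.export("squeezenet1_0.tflite")
+ ```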
+
+ The original model has:
+ - acc@1 (on ImageNet-1k): 58.092%
+ - acc@5 (on ImageNet-1k): 80.420%
+ - num_params: 1,248,424
+
+ This model is released under the BSD 3-Clause License, inheriting the license of the `torchvision` repository from which it was converted.
+
+ ## Intended uses & limitations
+
+ The model files were converted from pretrained PyTorch Vision weights. The models may have their own licenses or terms and conditions derived from PyTorch Vision and the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.
+
+ The preprocessing script below handles the standard ImageNet resize (shortest edge to 256) and central crop (224x224) requirements. Rather than producing the legacy PyTorch `(B, C, H, W)` (NCHW) layout, it keeps the image channels-last and adds the required batch dimension, matching the **`(B, H, W, C)`** (NHWC) layout that the LiteRT runtime expects.
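+
+ If you already have an NCHW tensor from an existing PyTorch pipeline, a single transpose (sketched here with a hypothetical `x_nchw` array) produces the layout this model expects:
+
+ ```python
+ import numpy as np
+
+ x_nchw = np.zeros((1, 3, 224, 224), dtype=np.float32)  # hypothetical NCHW input
+ x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))  # -> (1, 224, 224, 3), NHWC
+ ```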
+
+ ## How to Use
+
+ **1. Install Dependencies**
+
+ Ensure your Python environment is set up with the required libraries by running the following command in your terminal:
+
+ ```bash
+ pip install numpy Pillow huggingface_hub ai-edge-litert
+ ```
+
+ **2. Prepare Your Image**
+
+ The script expects an image file to analyze. Make sure you have an image (e.g., `cat.jpg` or `car.png`) saved in the same working directory as your script.
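+
+ If you only want to smoke-test the pipeline and have no image handy, you can generate a placeholder; the filename `cat.jpg` matches the run command in step 4:
+
+ ```python
+ from PIL import Image
+
+ # Create a solid-gray placeholder image purely for testing.
+ Image.new("RGB", (640, 480), color="gray").save("cat.jpg")
+ ```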
+
+ **3. Save the Script**
+
+ Create a new file named `classify.py`, paste the script below into it, and save the file:
+
+ ```python
+ #!/usr/bin/env python3
+ import argparse
+ import json
+
+ import numpy as np
+ from PIL import Image
+ from huggingface_hub import hf_hub_download
+ from ai_edge_litert.compiled_model import CompiledModel
+
+
+ def preprocess(img: Image.Image) -> np.ndarray:
+     img = img.convert("RGB")
+     w, h = img.size
+
+     # Resize the shortest edge to 256, preserving aspect ratio.
+     s = 256
+     if w < h:
+         img = img.resize((s, int(round(h * s / w))), Image.BILINEAR)
+     else:
+         img = img.resize((int(round(w * s / h)), s), Image.BILINEAR)
+
+     # Central crop to 224x224.
+     left = (img.size[0] - 224) // 2
+     top = (img.size[1] - 224) // 2
+     img = img.crop((left, top, left + 224, top + 224))
+
+     # Rescale to [0.0, 1.0] and normalize with the ImageNet mean/std.
+     x = np.asarray(img, dtype=np.float32) / 255.0
+     x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
+         [0.229, 0.224, 0.225], dtype=np.float32
+     )
+
+     # Add the batch dimension to create an NHWC 4D tensor: (1, 224, 224, 3).
+     x = np.expand_dims(x, axis=0)
+
+     return x
+
+
+ def main():
+     ap = argparse.ArgumentParser()
+     ap.add_argument("--image", required=True, help="Path to the input image")
+     args = ap.parse_args()
+
+     # Download the TFLite model and the ImageNet label map.
+     model_path = hf_hub_download("litert-community/squeezenet1_0", "squeezenet1_0.tflite")
+     labels_path = hf_hub_download(
+         "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
+     )
+
+     with open(labels_path, "r", encoding="utf-8") as f:
+         id2label = {int(k): v for k, v in json.load(f).items()}
+
+     img = Image.open(args.image)
+     x = preprocess(img)
+
+     # Compile the model and allocate input/output buffers.
+     model = CompiledModel.from_file(model_path)
+     inp = model.create_input_buffers(0)
+     out = model.create_output_buffers(0)
+
+     # Write the preprocessed tensor, run inference, and read back the scores.
+     inp[0].write(x)
+     model.run_by_index(0, inp, out)
+
+     req = model.get_output_buffer_requirements(0, 0)
+     y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)
+
+     pred = int(np.argmax(y))
+     label = id2label.get(pred, f"class_{pred}")
+
+     print(f"Top-1 class index: {pred}")
+     print(f"Top-1 label: {label}")
+
+
+ if __name__ == "__main__":
+     main()
+ ```
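+
+ If your installed `ai-edge-litert` version does not expose the `CompiledModel` API, the classic `Interpreter` API works as well. A minimal sketch, reusing `model_path`, `x`, and `id2label` from the script above:
+
+ ```python
+ from ai_edge_litert.interpreter import Interpreter
+
+ # TFLite-style interpreter API shipped with ai-edge-litert.
+ interpreter = Interpreter(model_path=model_path)
+ interpreter.allocate_tensors()
+
+ inp_det = interpreter.get_input_details()[0]
+ out_det = interpreter.get_output_details()[0]
+
+ interpreter.set_tensor(inp_det["index"], x.astype(np.float32))
+ interpreter.invoke()
+ y = interpreter.get_tensor(out_det["index"])[0]
+
+ pred = int(np.argmax(y))
+ print(id2label.get(pred, f"class_{pred}"))
+ ```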
+
+ **4. Execute the Python Script**
+
+ Run the command below:
+
+ ```bash
+ python classify.py --image cat.jpg
+ ```
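+
+ To mirror the acc@5 metric reported above, you can extend `classify.py` with a top-5 readout. A minimal sketch, assuming `y` holds the model's raw output scores:
+
+ ```python
+ # Numerically stable softmax over the raw scores.
+ probs = np.exp(y - np.max(y))
+ probs /= probs.sum()
+
+ # Print the five highest-scoring classes.
+ for i in np.argsort(probs)[::-1][:5]:
+     print(f"{id2label.get(int(i), f'class_{i}')}: {probs[i]:.4f}")
+ ```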
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @misc{iandola2016squeezenetalexnetlevelaccuracy50x,
+       title={SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size},
+       author={Forrest N. Iandola and Song Han and Matthew W. Moskewicz and Khalid Ashraf and William J. Dally and Kurt Keutzer},
+       year={2016},
+       eprint={1602.07360},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/1602.07360},
+ }
+ ```