With the Coral Edge TPU™, you can run an image classification model directly on your device, using real-time video at almost 400 frames per second. You can even run additional models concurrently on the same Edge TPU while maintaining a high frame rate.
This page provides several trained models that are compiled for the Edge TPU, example code to run them, plus information about how to train your own model with TensorFlow.
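A classification model produces one score per class; the example code then simply ranks those scores against a labels file. As a rough illustration of that postprocessing step (this is a plain-Python sketch, not the PyCoral API itself; the scores and labels below are made up):

```python
def top_k(scores, labels, k=3):
    """Return the k highest-scoring (label, score) pairs."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(labels[i], scores[i]) for i in ranked[:k]]

# Hypothetical model output for a 4-class labels file.
scores = [0.02, 0.81, 0.10, 0.07]
labels = ["cat", "dog", "fox", "owl"]
print(top_k(scores, labels, k=2))  # → [('dog', 0.81), ('fox', 0.1)]
```

The examples linked below (such as classify_image.py) wrap this same idea around a real interpreter invocation.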
These models are trained and compiled for the Edge TPU.
Model name | Detections/Dataset | Input size | Depth mul. | TF ver. | Latency 1 | Accuracy | Micro 2 | Model size |
---|---|---|---|---|---|---|---|---|
EfficientNet-EdgeTpu (L)* | 1,000 objects | 300x300x3 | N/A | 1 | 21.3 ms | | Yes | 12.8 MB |
EfficientNet-EdgeTpu (M)* | 1,000 objects | 240x240x3 | N/A | 1 | 7.3 ms | | Yes | 8.7 MB |
EfficientNet-EdgeTpu (S)* | 1,000 objects | 224x224x3 | N/A | 1 | 5.0 ms | | Yes | 6.8 MB |
Inception V1 | 1,000 objects | 224x224x3 | N/A | 1 | 3.4 ms | | Yes | 7.0 MB |
Inception V2 | 1,000 objects | 224x224x3 | N/A | 1 | 13.4 ms | | Yes | 12.0 MB |
Inception V3 | 1,000 objects | 299x299x3 | N/A | 1 | 42.8 ms | | No | 23.9 MB |
Inception V4 | 1,000 objects | 299x299x3 | N/A | 1 | 84.7 ms | | No | 42.9 MB |
MobileNet V1 | 1,000 objects | 128x128x3 | 0.25 | 1 | 0.9 ms | | Yes | 0.7 MB |
MobileNet V1 | 1,000 objects | 160x160x3 | 0.5 | 1 | 1.4 ms | | Yes | 1.6 MB |
MobileNet V1 | 1,000 objects | 192x192x3 | 0.75 | 1 | 1.8 ms | | Yes | 2.8 MB |
MobileNet V1 | 1,000 objects | 224x224x3 | 1.0 | 1 | 2.8 ms | | Yes | 4.4 MB |
MobileNet V2 | 900+ birds | 224x224x3 | 1.0 | 1 | 2.6 ms | N/A | Yes | 4.1 MB |
MobileNet V2 | 1,000+ insects | 224x224x3 | 1.0 | 1 | 2.7 ms | N/A | Yes | 4.1 MB |
MobileNet V2 | 2,000+ plants | 224x224x3 | 1.0 | 1 | 2.6 ms | N/A | Yes | 5.5 MB |
MobileNet V2 | 1,000 objects | 224x224x3 | 1.0 | 1 | 2.9 ms | | Yes | 4.0 MB |
MobileNet V1 | 1,000 objects | 224x224x3 | 1.0 | 2 | 2.8 ms | | Yes | 4.5 MB |
MobileNet V2 | 1,000 objects | 224x224x3 | 1.0 | 2 | 3.0 ms | | Yes | 4.1 MB |
MobileNet V3 | 1,000 objects | 224x224x3 | 1.0 | 2 | 3.0 ms | | Yes | 4.9 MB |
ResNet-50 | 1,000 objects | 224x224x3 | N/A | 2 | 42.2 ms | | No | 25.0 MB |
Popular Products V1 | 100,000 popular products | 224x224x3 | N/A | 1 | 7.0 ms | N/A | Yes | 9.8 MB |
1 Latency is the time to perform one inference, as measured with a Coral USB Accelerator on a desktop CPU. Latency varies between systems, so this is primarily intended for comparison between models. For more comparisons, see the Performance Benchmarks.
2 Indicates compatibility with the Dev Board Micro. Some models are not compatible because they require a CPU-bound op that is not supported by TensorFlow Lite for Microcontrollers or they require more memory than available on the board. (All models are compatible with all other Coral boards.)
* Beware that the EfficientNet family of models has unique input quantization values (scale and zero-point) that you must use when preprocessing your input. For example preprocessing code, see the classify_image.py or classify_image.cc examples.
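The quantization step the footnote refers to maps each float input value to a uint8 value using the model's input tensor parameters, roughly `q = round(x / scale + zero_point)`, clamped to [0, 255]. A minimal sketch (the scale and zero-point values here are hypothetical; read the real ones from your model's input tensor, as the linked examples do):

```python
def quantize_input(pixels, scale, zero_point):
    """Quantize float pixel values to uint8 range using the model's
    input quantization parameters: q = round(x / scale + zero_point),
    clamped to [0, 255]."""
    return [min(255, max(0, round(x / scale + zero_point))) for x in pixels]

# Hypothetical parameters for illustration only.
print(quantize_input([0.0, 0.5, 1.0], scale=1 / 128, zero_point=128))
```

Using the wrong scale or zero-point is a common cause of garbage predictions with EfficientNet-EdgeTpu models, since their values differ from the defaults many pipelines assume.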
These models are designed for compatibility with the on-device transfer learning APIs provided with PyCoral and libcoral.
The "backpropagation" models are embedding extractor models, compiled
with the last fully-connected layer removed. They do not perform
classification on their own, and must be paired with the SoftmaxRegression
API, which allows you to perform on-device backpropagation to train the
classification layer.
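To make the division of labor concrete: the Edge TPU computes a fixed embedding per image, and only a small fully-connected + softmax layer is trained with cross-entropy gradient descent. The following is a pure-Python sketch of that last-layer training loop (an illustration of the idea, not the actual SoftmaxRegression API; the toy "embeddings" are made up):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_last_layer(embeddings, targets, num_classes, lr=0.5, epochs=200):
    """Train only a fully-connected + softmax layer on fixed embeddings,
    using plain cross-entropy gradient descent."""
    dim = len(embeddings[0])
    w = [[0.0] * dim for _ in range(num_classes)]
    b = [0.0] * num_classes
    for _ in range(epochs):
        for x, t in zip(embeddings, targets):
            logits = [sum(wi * xi for wi, xi in zip(w[c], x)) + b[c]
                      for c in range(num_classes)]
            p = softmax(logits)
            for c in range(num_classes):
                g = p[c] - (1.0 if c == t else 0.0)  # dL/dlogit for cross-entropy
                b[c] -= lr * g
                for j in range(dim):
                    w[c][j] -= lr * g * x[j]
    return w, b

# Toy 2-D "embeddings": two linearly separable classes.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
tgt = [0, 0, 1, 1]
w, b = train_last_layer(emb, tgt, num_classes=2)
pred = [max(range(2), key=lambda c: sum(wi * xi for wi, xi in zip(w[c], x)) + b[c])
        for x in emb]
print(pred)  # recovers [0, 0, 1, 1]
```

Because only this small layer is updated, training is cheap enough to run on-device while the embedding extractor stays frozen on the Edge TPU.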
The "weight imprinting" models are modified to include an L2-normalization layer and other changes to be compatible with the ImprintingEngine
API, which performs weight imprinting to retrain classifications.
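The imprinting idea itself is simple: the weight vector for a new class is set directly from the L2-normalized average of that class's example embeddings, so no gradient descent is needed. A minimal sketch of the technique (an illustration, not the ImprintingEngine API; the embedding vectors are made up):

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def imprint(examples):
    """Imprint a new class weight vector: average the L2-normalized
    embeddings of the class's examples, then normalize the average."""
    normed = [l2_normalize(e) for e in examples]
    mean = [sum(col) / len(normed) for col in zip(*normed)]
    return l2_normalize(mean)

def classify(embedding, class_weights):
    """Pick the class whose imprinted weights best match (cosine similarity)."""
    e = l2_normalize(embedding)
    scores = [sum(wi * xi for wi, xi in zip(w, e)) for w in class_weights]
    return max(range(len(scores)), key=lambda c: scores[c])

weights = [imprint([[1.0, 0.1], [0.9, 0.0]]),   # class 0 examples
           imprint([[0.0, 1.0], [0.1, 0.8]])]   # class 1 examples
print(classify([0.05, 0.9], weights))  # → 1
```

This is why the models need the L2-normalization layer: classification reduces to a cosine-similarity comparison against the imprinted weight vectors, and new classes can be added from just a handful of examples.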
Model name | Training style | Base dataset | Input size | TF ver. | Micro 1 | Model size |
---|---|---|---|---|---|---|
EfficientNet-EdgeTpu (L) | Backpropagation | 1,000 objects | 300x300x3 | 1 | Yes | 11.7 MB |
EfficientNet-EdgeTpu (M) | Backpropagation | 1,000 objects | 240x240x3 | 1 | Yes | 7.6 MB |
EfficientNet-EdgeTpu (S) | Backpropagation | 1,000 objects | 224x224x3 | 1 | Yes | 5.7 MB |
MobileNet V1 | Backpropagation | 1,000 objects | 224x224x3 | 1 | Yes | 3.5 MB |
MobileNet V1 | Weight imprinting | 1,000 objects | 224x224x3 | 1 | No | 5.3 MB |
1 Indicates compatibility with the Dev Board Micro. Some models are not compatible because they require a CPU-bound op that is not supported by TensorFlow Lite for Microcontrollers or they require more memory than available on the board. (All models are compatible with all other Coral boards.)
Basic image classification
An example that performs image classification with a single photo.
Languages: Python, C++
Image recognition with video
Multiple examples showing how to stream images from a camera and run classification or detection models with the TensorFlow Lite API. Each example uses a different camera library, such as GStreamer, OpenCV, PyGame, and PiCamera.
Languages: Python, C++
Pipelined image classification
An example showing how to pipeline a model across multiple Edge TPUs, allowing you to significantly increase throughput for large models such as Inception.
Languages: Python, C++
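The throughput gain from pipelining comes from running each model segment on its own Edge TPU concurrently: while one segment processes frame i+1, the next segment is already working on frame i. The structure can be sketched with threads and queues (a toy illustration with stand-in stages, not the actual pipelining API):

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Run each model segment in its own thread, connected by bounded
    queues, so the segments process different frames concurrently."""
    qs = [queue.Queue(maxsize=2) for _ in range(len(stages) + 1)]

    def worker(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:          # poison pill: shut the stage down
                q_out.put(None)
                return
            q_out.put(stage(item))

    threads = [threading.Thread(target=worker, args=(s, qs[i], qs[i + 1]))
               for i, s in enumerate(stages)]
    for t in threads:
        t.start()
    for f in frames:
        qs[0].put(f)
    qs[0].put(None)

    results = []
    while (r := qs[-1].get()) is not None:
        results.append(r)
    for t in threads:
        t.join()
    return results

# Stand-ins for two model segments, each hypothetically on its own Edge TPU.
seg1 = lambda x: x * 2
seg2 = lambda x: x + 1
print(run_pipeline([1, 2, 3], [seg1, seg2]))  # → [3, 5, 7]
```

Per-frame latency stays roughly the same, but sustained throughput approaches that of the slowest single segment rather than the whole model, which is why pipelining pays off most for large models like Inception.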
If you’d like to train a classification model to recognize new objects, try the following tutorials: