Performs high-speed ML inferencing: runs TensorFlow Lite models locally with low power draw and a small footprint
Supports all major platforms: Connects via USB 3.0 Type-C to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10
Supports TensorFlow Lite: no need to build models from the ground up; TensorFlow Lite models can be compiled to run on the Edge TPU (see the compile sketch after this list)
Supports AutoML Vision Edge: easily build and deploy fast, high-accuracy custom image classification models at the edge.
Compatible with Google Cloud
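For reference, here is a minimal sketch of the compile step mentioned above. It assumes the Edge TPU Compiler (edgetpu_compiler, available for Debian-based Linux) is installed, and the model filename is a placeholder for any fully integer-quantized TensorFlow Lite model:

```python
import subprocess

# "mobilenet_v2_quant.tflite" is a hypothetical filename for a fully
# int8-quantized TensorFlow Lite model. The Edge TPU Compiler writes
# a "mobilenet_v2_quant_edgetpu.tflite" file alongside the input.
subprocess.run(['edgetpu_compiler', 'mobilenet_v2_quant.tflite'], check=True)
```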
The on-board Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a power-efficient manner: it performs 4 trillion operations per second (4 TOPS) while using 2 watts of power, or 2 TOPS per watt. For example, one Edge TPU can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second. This on-device ML processing reduces latency, increases data privacy, and removes the need for a constant internet connection, letting you add fast ML inferencing to your embedded AI devices in a power-efficient, privacy-preserving way. Models are developed in TensorFlow Lite and then compiled to run on the USB Accelerator.

Requirements:
A computer with one of the following operating systems:
Linux Debian 10, or a derivative thereof (such as Ubuntu 18.04), on x86-64, Armv7 (32-bit), or Armv8 (64-bit) architecture (Raspberry Pi is supported, but only the Raspberry Pi 3 Model B+ and Raspberry Pi 4 have been tested)
macOS 10.15, with either MacPorts or Homebrew installed
Windows 10
One available USB port (for the best performance, use a USB 3.0 port)
Python 3.5, 3.6, or 3.7
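To illustrate how a compiled model runs on the accelerator, here is a minimal Python sketch using the TensorFlow Lite runtime with the Edge TPU delegate. It assumes the tflite_runtime package and the Edge TPU runtime library (libedgetpu) are installed; the model filename is hypothetical:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a model compiled for the Edge TPU. "libedgetpu.so.1" is the
# Linux runtime library (use "libedgetpu.1.dylib" on macOS or
# "edgetpu.dll" on Windows). The model filename is a placeholder.
interpreter = Interpreter(
    model_path='mobilenet_v2_quant_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

# Feed a dummy uint8 tensor shaped to the model's input (e.g. 1x224x224x3).
input_details = interpreter.get_input_details()[0]
dummy = np.random.randint(0, 256, size=input_details['shape'], dtype=np.uint8)
interpreter.set_tensor(input_details['index'], dummy)

interpreter.invoke()  # inference runs on the Edge TPU via the delegate

scores = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
print('Top class index:', int(np.argmax(scores)))
```

In a real application you would replace the dummy tensor with preprocessed image data and map the output index to a class label.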