Coral USB Accelerator coprocessor for Raspberry Pi and Other Embedded Single Board Computers

Galleon Product ID: 49422168
Manufacturer: Seeed Studio
Model: -
Shipping Weight: 0.2 lbs
Shipping Dimensions: 5.43 x 4.06 x 1.26 inches
Price: ₱7,163

*Price and Stocks may change without prior notice
*Packaging of actual item may differ from photo shown
  • Electrical items MAY be 110 volts.
  • 7 Day Return Policy
  • All products are genuine and original
  • Cash On Delivery/Cash Upon Pickup Available

Coral USB Accelerator Features

  • Performs high-speed ML inferencing: runs TensorFlow Lite models locally on the device with low power draw and a small footprint

  • Supports all major platforms: Connects via USB 3.0 Type-C to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10

  • Supports TensorFlow Lite: no need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU (see the inference sketch after this list)

  • Supports AutoML Vision Edge: easily build and deploy fast, high-accuracy custom image classification models at the edge.

  • Compatible with Google Cloud
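
To give a concrete sense of what running a compiled model looks like, here is a minimal inference sketch in Python using the tflite_runtime package with the Edge TPU delegate. The model filename is a placeholder for any .tflite file compiled for the Edge TPU, and 'libedgetpu.so.1' is the Linux delegate library name (macOS and Windows use libedgetpu.1.dylib and edgetpu.dll instead).

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load a model compiled for the Edge TPU; the delegate routes the
    # supported ops to the accelerator over USB.
    interpreter = tflite.Interpreter(
        model_path='mobilenet_v2_edgetpu.tflite',  # placeholder filename
        experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Edge TPU models are fully quantized, so the input is uint8; feed a
    # dummy frame with the shape the model expects.
    shape = input_details[0]['shape']
    frame = np.random.randint(0, 256, size=shape, dtype=np.uint8)
    interpreter.set_tensor(input_details[0]['index'], frame)
    interpreter.invoke()

    scores = interpreter.get_tensor(output_details[0]['index'])
    print('Top class index:', int(np.argmax(scores)))

In a real application the dummy frame would be replaced with a resized camera image or photo.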


About the Coral USB Accelerator

The on-board Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a power-efficient manner: it is capable of performing 4 trillion operations per second (4 TOPS) while using 2 watts of power, or 2 TOPS per watt. For example, one Edge TPU can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second. This on-device ML processing reduces latency, increases data privacy, and removes the need for a constant internet connection, letting you add fast ML inferencing to your embedded AI devices in a power-efficient and privacy-preserving way. Models are developed in TensorFlow Lite and then compiled to run on the USB Accelerator.

System requirements:

  • A computer running one of the following operating systems:
      • Debian 10 Linux, or a derivative thereof (such as Ubuntu 18.04), on an x86-64, Armv7 (32-bit), or Armv8 (64-bit) system (Raspberry Pi is supported, but only the Raspberry Pi 3 Model B+ and Raspberry Pi 4 have been tested)
      • macOS 10.15, with either MacPorts or Homebrew installed
      • Windows 10
  • One available USB port (for the best performance, use a USB 3.0 port)
  • Python 3.5, 3.6, or 3.7
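
As a rough, unofficial way to sanity-check the throughput figures above on your own hardware, you can time repeated invocations of a compiled model. This sketch reuses the placeholder model name and Linux delegate library from the earlier example; the first invocation is warmed up separately because it includes the one-time cost of loading the model onto the Edge TPU.

    import time
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path='mobilenet_v2_edgetpu.tflite',  # placeholder filename
        experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    frame = np.random.randint(0, 256, size=inp['shape'], dtype=np.uint8)

    # Warm up: the first invoke also transfers the model to the Edge TPU.
    interpreter.set_tensor(inp['index'], frame)
    interpreter.invoke()

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp['index'], frame)
        interpreter.invoke()
    elapsed = time.perf_counter() - start
    print(f'{runs / elapsed:.1f} inferences per second')

Throughput depends on the model and on the USB link; a USB 3.0 port should get noticeably closer to the quoted figures than USB 2.0.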