May 17, 2022 — Posted by Scott Main, AIY Projects and Coral
Back in 2017, we began AIY Projects to make do-it-yourself artificial intelligence projects accessible to anybody. Our first project was the AIY Voice Kit, which allows you to build your own intelligent device that responds to voice commands. Then we released the AIY Vision Kit, which can recognize objects seen by its camera using on-device TensorFlow models. We were amazed by the projects people built with these kits and thrilled to see educational programs use them to introduce young engineers to the possibilities of computer science and machine learning (ML). So I'm excited to continue our mission to bring machine learning to everyone with the more powerful and more customizable AIY Maker Kit.

Making ML accessible to all
The Voice Kit and Vision Kit are a lot of fun to put together and they include great programs that demonstrate the possibilities of ML on a small device. However, they don't provide the tools or procedures to help beginners achieve their own ML project ideas. When we released those kits in 2017, it was actually quite difficult to train an ML model, and getting a model to run on a device like a Raspberry Pi was even more challenging. Nowadays, if you have some experience with ML and know where to look for help, it's not so surprising that you can train an object detection model in your web browser in less than an hour, or that you can run a pose detection model on a battery-powered device. But if you don't have any experience, it can be difficult to discover the latest ML tools, let alone get started with them.
We intend to solve that with the Maker Kit. With this kit, we're not offering any new hardware or ML tools; we're offering a simplified workflow and a series of tutorials that use the latest tools to train TensorFlow Lite models and execute them on small devices. So it's all existing technology, but better packaged so beginners can stop searching and start building incredible things right away.
Simplified tools for success
The material we've collected and created for the Maker Kit offers an end-to-end experience that's ideal for educational programs and users who just want to make something with ML as fast as possible.
The hardware setup requires a Raspberry Pi, a Pi Camera, a USB microphone, and a Coral USB Accelerator so you can execute advanced vision models at high speed on the Coral Edge TPU. If you want your hardware in a case, we offer two DIY options: a 3D-printed case design or a cardboard case you can build using materials at home.
Once it's booted up with our Maker Kit system image, just run some of our code examples and follow our coding tutorials. You'll quickly discover how easy it is to accomplish things with ML that, until recently, were accessible only to experts, including object detection, pose classification, and speech recognition.
Our code examples use some pre-trained models, and you can get more models accelerated for the Edge TPU from the Coral models library. However, training your own models allows you to explore all-new project ideas. So the Maker Kit also offers step-by-step tutorials that show you how to collect your own datasets and train your own vision and audio models.
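As a rough sketch of what that training workflow can look like, here's a minimal example using TensorFlow Lite Model Maker, which trains an object detection model in a browser-based notebook. This is an illustration, not the tutorial code itself, and dataset.csv is a hypothetical stand-in for your own annotation file:

# A minimal sketch of training a custom detector with TFLite Model Maker.
# Assumptions: tflite-model-maker is installed, and dataset.csv is a
# hypothetical annotation file in the CSV format Model Maker accepts.
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector

spec = model_spec.get('efficientdet_lite0')  # a small, Pi-friendly detector
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('dataset.csv')
model = object_detector.create(train_data, model_spec=spec,
                               validation_data=validation_data, epochs=50)
model.export(export_dir='.')  # writes a .tflite model, which you can then
                              # compile for the Edge TPU with the Edge TPU Compiler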
Last but not least, we want you to spend nearly all your time writing the code that's unique to your project. So we created a Python library that reduces the amount of code needed to perform an inference down to a tiny part of your project. For example, this is how you can run an object detection model and draw labeled bounding boxes on a live camera feed:
from aiymakerkit import vision
from aiymakerkit import utils
import models

# Load the object detection model and read its labels from the model metadata.
detector = vision.Detector(models.OBJECT_DETECTION_MODEL)
labels = utils.read_labels_from_metadata(models.OBJECT_DETECTION_MODEL)

# Run detection on each camera frame and draw labeled bounding boxes.
for frame in vision.get_frames():
    objects = detector.get_objects(frame, threshold=0.4)
    vision.draw_objects(frame, objects, labels)
Our intent is to hide the code you don't absolutely need. You still have access to structured inference results and program flow, but without any boilerplate code to handle the model.
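For example, continuing the camera loop above, you can inspect each detection directly. This is a hedged sketch: it assumes each result exposes id, score, and bbox fields, as in the underlying Coral detection API, and that labels maps class IDs to names:

# Hypothetical extension of the camera loop above: react to one class.
# Assumes each object exposes .id, .score, and .bbox (as in Coral's
# detection results) and that labels maps class IDs to names.
for obj in objects:
    if labels.get(obj.id) == 'person' and obj.score > 0.6:
        print('Person detected at', obj.bbox)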
This aiymakerkit library is built upon TensorFlow Lite and it's available on GitHub, so we invite you to explore the innards and extend the Maker Kit API for your projects.
Getting started
We created the Maker Kit to be fully customizable for your projects. So rather than provide all the materials in a box with a predetermined design, we designed it with hardware that's already available in stores (listed on our website) and with optional instructions to build your own case.
To get started, visit our website at g.co/aiy/maker, gather the required materials, flash our system image, and follow our programming tutorials to start exploring the possibilities. With this head start toward building smart applications that run entirely on an embedded system, we can't wait to see what you will create.