xyzzy_plugh
Macuyiko
v1 used a very limited (albeit very easy and already quite impressive) form of transfer learning: take a pretrained network's 1000-dimensional output vectors (since the original network was trained on ImageNet) for a bunch of images belonging to three sets, and then just use k-NN to predict which set a "new" image falls into.
v2 actually fine-tunes the weights of a pretrained network. At the time, it was a nice showcase of how fast JS ML libraries were evolving.
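The v1 approach described above (nearest-neighbor classification on top of fixed pretrained embeddings) can be sketched in a few lines of pure Python. The 4-dim vectors below are toy stand-ins for the network's 1000-dim outputs, and all names here are illustrative, not Teachable Machine's actual code:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training embeddings."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy "embeddings" for three classes, mirroring the original 3-class demo.
train = [
    ((0.9, 0.1, 0.0, 0.0), "cat"),
    ((0.8, 0.2, 0.1, 0.0), "cat"),
    ((0.1, 0.9, 0.1, 0.0), "dog"),
    ((0.0, 0.8, 0.2, 0.1), "dog"),
    ((0.0, 0.1, 0.9, 0.2), "bird"),
    ((0.1, 0.0, 0.8, 0.3), "bird"),
]

print(knn_predict(train, (0.85, 0.15, 0.05, 0.0)))  # -> cat
```

No weights are updated at any point, which is why this counts as the "very limited" form of transfer learning: all the learning lives in the frozen embedding network.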
andreaschandra
I found old videos from 6 years ago about Teachable Machine: https://www.youtube.com/watch?v=3BhkeY974Rg&ab_channel=Googl...
and in 2019, Google released v2: https://blog.google/technology/ai/teachable-machine/
The tasks are limited, but it's good as a kick-starter. The platform doesn't seem to be developing very fast, though (?)
timuthang
This is rad. Perfect snow day activity for the kids.
dwrodri
I've done my share of research on MediaPipe[1], but had never heard of Teachable Machine. I'm curious whether these efforts are related, as these products look like they were almost intended to be used together.
I am definitely excited to see that Google is investing into more "ML at the edge" use cases, especially in the browser. If you've never heard of MediaPipe before, but this caught your eye, definitely check it out. It has seen large uptake in the VTubing community especially as it has a very performant implementation of body + face + hand pose tracking driven by BlazePose.
thorum
FYI, this is not a new project. Here’s an HN discussion from 6 years ago: https://news.ycombinator.com/item?id=15399132
Pikamander2
The new link mentions "the first version from 2017", so I'm assuming this release is what Google considers version 2.
nmstoker
Yes I agree - the (2017) tag in the title doesn't seem right here given the update.
undefined
dang
Discussed at the time:
Teachable Machine: Teach a machine using your camera, live in the browser - https://news.ycombinator.com/item?id=15399132 - Oct 2017 (90 comments)
krm01
This was a fun redesign attempt from years ago
tsunamifury
Isn’t this basically what a multimodal LLM does as well? It can handle anything on the fly that it can understand.
What’s different here?
navanchauhan
These are smaller-scale models that you can export and run anywhere.
Even the smallest multimodal LLM would be way, way bigger than a model exported from this.
jatins
How small are these models? Can I export a model here and embed it in an Android/iOS app?
jzombie
The website says they can be embedded in a web app, and export to a format called TensorFlow Lite. I am sure you could embed it.
vineyardmike
You can supposedly embed them for Arduinos, so an app should be no problem.
dbish
The teaching part is what matters, it’s training (tuning in this case) a model, not just using a model already trained for inference (which is what I assume you mean). You’re providing new data that is used to update the model. Inference across an existing multimodal model doesn’t change how it classifies in any way.
jzombie
I think this is more like fine tuning an existing model to recognize features you specifically intend it to, and be light enough to run locally in a browser.
jerbear4328
It's not even fine-tuning, it's creating a model from scratch. This isn't like our modern huge models either; these tiny single-purpose models have been around for ages and are quite versatile. They're so small that you can not only run them easily in the browser, but also train them there effectively, which is what this project lets you play around with! Super cool stuff.
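To give a concrete (and entirely hypothetical, unrelated to Teachable Machine's internals) sense of what "a tiny model trained from scratch" means, here is a two-class logistic-regression classifier trained with plain gradient descent in pure Python:

```python
import math

def train_tiny_classifier(data, epochs=500, lr=0.5):
    """Train a tiny logistic-regression model from scratch with gradient descent."""
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Two toy classes, separable along the first feature.
data = [((1.0, 0.2), 1), ((0.9, 0.1), 1), ((0.1, 0.9), 0), ((0.2, 1.0), 0)]
model = train_tiny_classifier(data)
print(predict(model, (0.95, 0.15)))  # -> 1
```

A model like this is just a handful of floats, which is why this class of models trains in seconds and runs comfortably in a browser tab.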
tsunamifury
Thanks!
ahsmha_
do we have any other self hostable, open source alternatives to this?
panarky
It is self-hostable.
It runs locally in your browser, without sending your training data to any servers.
Unless you choose to save it to Google Drive.
If you choose to host the model with Google, they get a copy of your weights, but they still don't see your training data.
Or you can host it yourself with tensorflow.js
And you can also download everything in a zip file, training data and weights, and Google never sees any of it.
If you want the source, it's here -> https://github.com/googlecreativelab/teachablemachine-commun...
pona-a
Note that the source code seems not to be the web UI itself, but rather a collection of samples/helpers to use exported models.
Wowfunhappy
It looks like the first version of Teachable Machine really is fully open source, but maybe not the new one?
throwaway14356
[flagged]
juujian
[flagged]
coder543
Teachable Machine has existed for years: https://www.theverge.com/tldr/2017/10/9/16447006/google-teac...
Its last real update (AFAIK) was in 2019: https://www.theverge.com/2019/11/7/20953095/google-ai-traine...
Like a number of Google projects, this one lives on without any clear direction. It probably will get axed some day, but the technology in Teachable Machine today is so “old school” already that I don’t think it would be that hard for someone to recreate or improve upon.
jatins
Oh wow, I thought this was new. Given that this has received no attention in 3 years, I'd assume it is largely abandoned internally inside the company.
endisneigh
Just act like it doesn’t exist to begin with, so you’re not disappointed.
anarticle
Who owns the model? Does google get to reuse them?
Wow I actually have a perfect use case for this in a hobby project. Great timing.
I considered the older version but it's very limited:
> The original Teachable Machine only let you train 3 classes, whereas now you can add as many classes as you like.
I'm curious to see how far this scales, for example can I have a few hundred thousand classes? If so, what are the consequences, if any?