What an unexpectedly cool post. I clicked the link thinking it would be "typical dumb", but it ended up being atypically dumb in the greatest way! Fascinating. The author overcame many challenges and wrote about them in a style suggesting he solved the hardest parts with only a little fiddling. Maybe he's already seasoned in the ML and robotics domains? So much fun to read.
Regarding the Video Object Detection:
Why does inference need to be done via Roboflow SaaS?
Is it because the Pi is too underpowered to run a fully on-device solution such as Frigate [0] or DOODS [1]? And presumably a Coral TPU wasn't considered because the author mostly used stuff he happened to have lying around.
Can anyone share a contrasting experience with Roboflow? Does it perform better than Frigate and DOODS?
Asking for a friend. I totally don't have announcement speakers throughout my house that I want to have say "Mom approaching the property", "Package delivered", "Dog spotted on a walk", "Dog owner spotted not picking up after their beast", and so on. That last one will be tricky to pull off. Ah well :)
[0] https://github.com/blakeblackshear/frigate/pkgs/container/fr...
[1] https://github.com/snowzach/doods2
You are hereby put on notice that the undersigned intends to and henceforth will appropriate for his own further use without attribution to you the phrase “atypically dumb in the greatest way,” and furthermore that the undersigned may modify said phrase by replacing “greatest” with “best.” Any objection by you to said appropriation and/or modification by said undersigned will be and thereby is deemed waived by you, provided you do not respond to this notice within 48 hours. Please redirect your reply, if any, to /dev/null. Thank you.
FWIW you can use Roboflow models on-device as well. detect.roboflow.com is just a hosted version of our inference server (if you run the Docker container somewhere, you can swap out that URL for localhost or wherever your self-hosted one is running). Behind the scenes it's an HTTP interface for our inference[1] Python package, which you can run natively if your app is in Python as well.
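Concretely, the swap is just the base URL. A minimal sketch (the API key and model ID are placeholders, and it assumes the published CPU Docker image):

    # Start a self-hosted inference server first (CPU image; there is
    # also a -gpu variant for CUDA machines), e.g.:
    #   docker run -p 9001:9001 roboflow/roboflow-inference-server-cpu
    from inference_sdk import InferenceHTTPClient

    client = InferenceHTTPClient(
        api_url="http://localhost:9001",  # was: https://detect.roboflow.com
        api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder
    )

    # "your-project/1" is a placeholder model ID (project/version).
    result = client.infer("frame.jpg", model_id="your-project/1")
    print(result)

The request shape is the same either way, so nothing else in the app has to change.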
Pi inference is pretty slow (probably ~1 fps without an accelerator). Usually folks are using CUDA acceleration with a Jetson for these types of projects if they want to run faster locally.
Some benefits: there are over 100k pre-trained models others have already published to Roboflow Universe[2] that you can start from, support for many of the latest SOTA models (with an extensive library[3] of custom training notebooks), tight integration with the dataset/annotation tools at the core of Roboflow for creating custom models, and good support for common downstream tasks via supervision[4].
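For the announcement use case above, the downstream logic is only a few lines with supervision. A rough sketch, reusing `result` from the snippet earlier; the "person" class, the 0.5 threshold, and the print-as-announcement are all stand-ins:

    # Turn the raw inference result into filtered detections.
    import supervision as sv

    detections = sv.Detections.from_inference(result)
    detections = detections[detections.confidence > 0.5]  # drop low-confidence hits

    # from_inference() fills in class names under the "class_name" key.
    for name in detections.data["class_name"]:
        if name == "person":
            print("Someone approaching the property")  # swap in your TTS/speaker call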