"worth the money" is hard to say, especially for devices like these where the value is not really so much on the plain features as much as a more subjective factors like the design and the UI. I would say that purely based on features - probably not, especially with post-covid pricing. There are more powerful iphone or android apps for much less. Behringer, and to some degree Korg and Roland offer lower-end devices for not much more that ultimately might be more useful and usable. But, I do own a couple of these little guys and they're fun. I wouldn't call them "jokes", but calling them "toys" - in the good and bad sense - would probably not be a stretch, even if you can get some nice sounds out of them. I used to keep a couple on my desk and just jam a little with them as a distraction.
GarageBand was never even half as much fun as three POs plugged together - at least for me. Same way all the guitar effect models running on my iPad aren't nearly as much fun as stomping on the Rat distortion or Boss chorus pedals I bought 35 years ago.
I don't know if people younger than me, who grew up with touchscreen devices, have that same affinity for physical controls over touchscreens.
There's a fun podcast by Arman Bohn/Distropolis (who himself has made some cool small-batch hardware synths) where he interviews makers of small hardware synths, https://open.spotify.com/show/30USGHPeGQ9ZyWQDyRnfcv might be of interest.
Were you using single-cycle waveforms or longer samples? In the former case, I guess there's not much to it - you just cycle through the waveform (in which case the waveform you choose would usually start and end at a zero crossing by construction, like sines, triangles, etc. - and if it doesn't, that will just create extra harmonics that may or may not be desirable).
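For what it's worth, the single-cycle case really is just a phase accumulator stepping through a small table - here's a minimal sketch in plain Python (all names here are made up for illustration, not from any particular sampler):

```python
import math

def make_sine_cycle(n=256):
    """One cycle of a sine; starts and ends at a zero crossing by construction."""
    return [math.sin(2 * math.pi * i / n) for i in range(n)]

def play_wavetable(table, freq_hz, sample_rate, num_samples):
    """Cycle through a single-cycle table with a phase accumulator.

    The pitch is set by how fast we step through the table;
    linear interpolation smooths the fractional positions.
    """
    out = []
    phase = 0.0
    n = len(table)
    step = freq_hz * n / sample_rate  # table positions advanced per output sample
    for _ in range(num_samples):
        i = int(phase)
        frac = phase - i
        a, b = table[i], table[(i + 1) % n]
        out.append(a + (b - a) * frac)  # linear interpolation between neighbors
        phase = (phase + step) % n      # wrapping the phase = looping the cycle
    return out
```

Because the table is one exact cycle, the wrap point is seamless; if you loaded a cycle that didn't end where it started, the wrap itself would act like a discontinuity and add harmonics, as above.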
For the second case, it's more subtle. Most samplers that implement this feature assume correct loop points (usually, but not necessarily, at a zero crossing) are chosen manually by the user. Some of them implement cross-fading at the loop point to make that more forgiving, but that can be CPU/RAM intensive on some devices. If you're referring to the small clicks you can get at the start and stop of sample playback, it's fairly common to use a very short (a millisecond or less) fade-in/fade-out to avoid that. There are a lot of books out there, but the main one I've read and enjoyed is this one, which happens to be free: https://cs.gmu.edu/~sean/book/synthesis/. It's more of a textbook than a cookbook.
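The fade-in/fade-out trick is only a few lines - roughly something like this (a sketch, assuming a linear ramp and a buffer of floats; real samplers vary in ramp shape and length):

```python
def declick(samples, sample_rate, fade_ms=1.0):
    """Apply a short linear fade-in/fade-out to avoid start/stop clicks.

    A ~1 ms ramp is inaudible as a fade but removes the step
    discontinuity that the ear hears as a click.
    """
    fade_len = min(int(sample_rate * fade_ms / 1000.0), len(samples) // 2)
    out = list(samples)
    for i in range(fade_len):
        gain = i / fade_len      # ramps 0.0 -> 1.0
        out[i] *= gain           # fade in at the start
        out[-1 - i] *= gain      # mirrored fade out at the end
    return out
```

Cross-fading a loop point is the same idea applied at the seam: you overlap the tail of the loop with the region before the loop start and ramp one down while ramping the other up.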
Thanks! Reading the code again, it looks like I was filling up a buffer of 256 x 16-bit samples.
I think the issue with looping at arbitrary points was word alignment. You need to give it a whole buffer. So you'd have to do some nasty bit-shifting.
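If the hardware really only accepts whole buffers, one cheap workaround (just a sketch, assuming the 256-sample buffers mentioned above; the constant and function names are made up) is to snap the requested loop points to buffer boundaries instead of bit-shifting partial buffers around:

```python
BUFFER_SAMPLES = 256  # one hardware buffer: 256 x 16-bit samples

def snap_loop_points(start, end, buf=BUFFER_SAMPLES):
    """Round a desired loop region outward to whole-buffer boundaries.

    start is rounded down and end is rounded up, so the snapped
    region always contains the region the user asked for.
    """
    snapped_start = (start // buf) * buf          # round down to a buffer edge
    snapped_end = ((end + buf - 1) // buf) * buf  # round up to a buffer edge
    return snapped_start, snapped_end
```

You lose sample-accurate loop points, of course, but you avoid any unaligned access; combined with a short crossfade at the seam it can still sound acceptable.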
Per the other reply, I think doing it live is probably easiest!
Thanks for the recommendation. If I ever get back into this I'll take a look.
Whenever I feel like doing that I just use "uv pip" and pretty much do the same things I'd do when using pip to messily install things in a typical virtual environment.
There's definitely a "rich get richer" effect for academic papers. A highly cited paper becomes a "landmark paper" that people are more likely to read, and hence cite - but at a certain point it can also become the "safe" default paper to cite in a literature review for a certain topic or technique, so out of expediency people may cite it just to cover that base, even if there's a more relevant related paper out there. This applies especially in cases where researchers don't know an area very well, so it's easy to assume a highly cited paper is a relevant one. At least for conferences, there's a deadline and researchers might just copy-paste what they have in their bibtex file - unfortunately, the literature review is often an afterthought, at least from my experience in CV/ML.
Another related "rich get richer" effect is also that a famous author or institution is a noisy but easy "quality" signal. If a researcher doesn't know much about a certain area and is not well equipped to judge a paper on its own merits, then they might heuristically assume the paper is relevant or interesting due to the notoriety of the author/institution. You can see this easily at conferences - posters from well known authors or institutions will pretty much automatically attract a lot more visitors, even if they have no idea what they're looking at.
Hi, I am currently considering a Lekiwi build but I am intrigued by Mars. Outside of the need for external compute, what issues did you find with SO101 and Kiwi?
Also I am curious about a couple of the parts, if you don't mind sharing - are those wheels the direct drive wheels from waveshare? And what is the RGBD camera? (Fwiw, even if it's hefty the MARS price tag seems fair to me).
There are several things, but for example: there is no LiDAR on it, nor even a good place to put one. If you're going to navigate around without a LiDAR or good compute for VSLAM (which is very hard to set up and VERY computationally demanding), you will very quickly get lost. At this point the Kiwi is only good for very local navigation (and you will still have IMU drift).
There's also a risk of tipping the base over if the arm is fully extended. And the SO-101 has quite poor repeatability.
The base is also slow to move, and depending on which surface you're on, the omniwheels can pick up dirt quickly.
Finally, external compute means in particular that you have to teleoperate from your computer, so you end up far from the robot and not necessarily in the same orientation as it, which is very, very uncomfortable. The app system we made is one of the things people love the most about MARS.
Ah, and RGBD really does matter for navigation AND for learning (augmenting ACT with depth yields better results).
The wheels are indeed those ones, and the camera in the video is a Luxonis OAK-D Wide - pretty expensive, but comfortable to work with. However, the version we're shipping includes a much cheaper stereo-depth camera that we calibrate ourselves. I can't get you the reference right now because it's late at night, but feel free to reach out on Discord.
Ah, so that's why the camera seemed familiar, I have a couple of the luxonis cameras around the office :). Re: kiwi, those are good points. Thank you for the answer!
Seems like a corporate version of the "buy vs build" question. If it's true that the goal is to become more approachable to students and hobbyists (which personally I think would be a good idea) - then Qualcomm must've evaluated both options and decided "buy".
I feel like the Raspberry Pi Pico is more of a competitor to the Arduino than the Raspberry Pi - there are quite a few applications where having a whole Linux operating system is a hindrance compared to running on bare metal, especially anything that needs real-time control of signals. (Although you can get around this on the Pi by connecting peripherals via USB/serial/I2C, which themselves might use MCUs.)
Then again, one of the more accessible (IMO) ways of using Pi Picos is with the Arduino environment, or its cousin PlatformIO. I do think that even if the Arduino abstractions can be limiting in some ways, in practice they're often a big timesaver for more casual (and not so casual) applications, and they give you easy access to a large ecosystem of libraries across a lot of hardware platforms.
This is a great book - learned a lot from the first edition back in the day, and got the second edition as soon as it came out. It's always fun to just leaf through a random chapter.
Depends on who is doing the "careers valuing" and how closely they're looking. At a coarse level, especially for jobs in industry, venue is a pretty simple (but obviously imperfect) indicator of quality. If you've managed to publish one or more papers at the most selective venues (esp. as main author), then I would assume there's a decent chance you are good at research, even if I don't know anything about the subfield you work in. As a further indicator, the number of citations is also a noisy but easy-to-check proxy for "impact".
But for academic or other high-level research jobs, whoever is doing the valuing is going to look at a lot more than just the venue.
> But for academic or other high-level research jobs, whoever is doing the valuing is going to look at a lot more than just the venue.
Depends on where. In some countries (e.g. mine, Spain), the notion that evaluation should be "objective" leads to it degenerating into a pure bean-counting exercise: a first-quartile JCR-indexed journal paper is worth 10 points, a top-tier (according to a specific ranking) conference paper is worth 8 points, etc. In some calls/contexts there is some leeway for evaluators to actually look at the content and, e.g., subtract points for salami slicing or for publishing in journals that are known to be crap in spite of a good quartile, but in others that's not even allowed (you would face an appeal for not following the official scoring scale).
Yeah, that's a good point. I was thinking in the US context, but I also have some experience with the academic evaluation process in Chile, and there were similar issues to what you describe. The "bean counting" part was an issue for academics in CS, because the rules were the same across departments, even where that didn't really make sense. So, for example, CS profs got no credit (toward promotion) for publishing in conferences, even highly selective ones like ICCV or N(eur)IPS.
Yes, the issue you mention is also typical in many countries with subpar systems. In Spain it used to be exactly the same. Lately, top conferences are getting recognition in some contexts, but there are still some calls where they don't count and it's better to have a crappy journal paper. I mostly publish in conferences but always have to make sure to have enough indexed journal papers per 6-year period to feed the system.