As a kid in high school I built an FM transmitter, including etching the PCB at home and everything. Even though it was not high-powered (you're not supposed to build a high-power FM transmitter, especially back then when radio was not digital), it was super fun to be able to speak into this little thing I built and hear my voice come out of the radio.
As an adult, I built a clone of a Fender Champ guitar amp, completely handwired with an eyelet board like they did back in the day. It sounds amazing for such a simple circuit and is actually an object I enjoy using as opposed to a toy like the FM transmitter was.
Maybe do it as Roll20 does: use labels for known systems, but allow the field to be free-form if the entry does not match any of the labels. It makes search a bit more complicated, but nothing crazy.
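A minimal sketch of that pattern in Python, assuming a hand-rolled label list and simple case-insensitive matching (the system names and the `classify_system` helper are made up for illustration; Roll20's actual matching rules are not public here):

```python
# Match a user's entry against known game-system labels, falling back
# to free-form text when nothing matches.

KNOWN_SYSTEMS = ["D&D 5e", "Pathfinder 2e", "Call of Cthulhu", "GURPS"]

def classify_system(entry: str) -> dict:
    """Return the canonical label on a match, else keep the raw entry."""
    normalized = entry.strip().casefold()
    for label in KNOWN_SYSTEMS:
        if normalized == label.casefold():
            return {"system": label, "free_form": False}
    # No match: store the raw entry. Search then has to scan both the
    # label column and these free-form values, hence the extra complexity.
    return {"system": entry.strip(), "free_form": True}

print(classify_system("d&d 5e"))       # canonicalized to a known label
print(classify_system("My Homebrew"))  # kept as free text
```

The trade-off is exactly the one mentioned above: queries can no longer filter on a closed set of labels alone and must also match against the free-form values.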
For articles on open-source software, I would recommend JOSS [1]. The organization that publishes JOSS has other journals [2], but nothing along the lines of your study.
I've used a Chrome OS device for a long time as my primary device. I'm a researcher and never do any "real" development beyond some web work and writing code locally (most of my code runs on clusters). Even so, web-only tools like Cloud9 were not for me, so one of the first things I did with my Chromebook was install crouton.
If you need to run anything specific locally, get a Chromebook with an Intel CPU, not an ARM one. Some software (especially proprietary binaries) is not available for ARM. If you get a device with a touchscreen, Android apps will sometimes be good enough for things that do not work well on Linux (e.g. Skype).
Independent Subspace ICA models can be applied to problems with more signals than sources. It's also possible to use different decomposition methods, or to subtract already-detected signals.
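For the plain over-determined case (more observed signals than sources), here is a hedged sketch using scikit-learn's FastICA; the two toy sources and the 3×2 mixing matrix are invented for illustration, not taken from any particular application:

```python
# Recover 2 independent sources from 3 mixed observations with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent toy sources: a sinusoid and a square wave, lightly noised.
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

# Observed through three "sensors": more signals (3) than sources (2).
A = np.array([[1.0, 0.5],
              [0.5, 2.0],
              [1.5, 1.0]])
X = S @ A.T  # observations, shape (2000, 3)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # estimated sources, shape (2000, 2)
print(S_est.shape)
```

Recovery is up to permutation and scale, which is why any check against the true sources has to look at absolute correlations rather than raw values.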
Hi, I'm one of the authors. In broad strokes, we pretrained one model (the "Reader") to read text and output vocoder variables, and another model (SampleRNN) to go from those vocoder variables to an audio waveform. Then we finetuned both models together to go from text to speech, end-to-end. The end product is a text-to-speech system, but without having to extract tons of hand-engineered features from the text in order to generate speech. We also expect that with more training this will be able to overcome the usual "unnaturalness" issues of vocoder speech.
I think the model just got tired of reading text and decided to mock us :) Just kidding. The attention mechanism got stuck somehow for this sample, though this does not happen very often. It's important to note that the samples we posted were not cherry-picked: they are just the first 10 sentences from our test set.
Regarding the truncation at the end, that was a bug in our sampling code that we just fixed. We will update the samples soon!
Is there any way to artificially induce that failure? I'm an artist and I've been trying to get a handle on ML stuff, and being able to feed speech through this to give it the flat affect of the phoneme-mode samples, or insert attention failures at specific points, would be extremely useful for a number of projects I have in mind.
I think there is no PCIe passthrough for Windows hosts on VirtualBox, and that's required for using CUDA in a VM. You can get graphics acceleration in a VM under any kind of host, but it goes through a virtual graphics card provided by VirtualBox instead of your actual card, so it does not work with CUDA.
As things stand, AMD is definitely not a viable route, but that might change in the future with the "Boltzmann Initiative" [1]. Performance with OpenCL is not comparable to CUDA on NVIDIA GPUs at the moment, and support is lacking in most deep learning frameworks.
Thanks, your point about performance is very helpful. Basically, even if my tech stack supported OpenCL, I would still be better off with a CUDA-compatible card.
SciRuby is not on par with alternatives such as NumPy/SciPy and Octave, but NMatrix (its linear algebra lib) seems to be nice: https://github.com/SciRuby/nmatrix