Hacker News

The remark that 'rests are so common, we need to remove them or the algorithm would just predict rests all the time' exposes the flaw in this approach.

If there is some pattern in your data, and your algorithm, rather than replicating something similar to that pattern, just outputs the most likely value at each point in time, then it is never going to work as you hope. Rests are a symptom of this, and removing them doesn't fix the underlying issue.

There are a bunch of solutions to this, but adversarial models do a good job of approximating a probability distribution like this.
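To make the collapse concrete, here's a toy sketch (hypothetical note symbols and counts, not from the article): decoding by always taking the argmax of a distribution dominated by rests produces nothing but rests, while sampling from the same distribution preserves the mix.

```python
import random
from collections import Counter

# Hypothetical toy corpus: "R" = rest, others are pitches; rests dominate.
data = ["R"] * 70 + ["C"] * 15 + ["E"] * 10 + ["G"] * 5
dist = {sym: n / len(data) for sym, n in Counter(data).items()}

# Argmax decoding: emit the single most likely symbol at every step.
argmax_melody = [max(dist, key=dist.get) for _ in range(16)]
print(argmax_melody)  # every step is "R": pure rests

# Sampling from the distribution instead keeps pitches in the output.
random.seed(0)
sampled_melody = random.choices(list(dist), weights=list(dist.values()), k=16)
print(sampled_melody)
```

The same effect shows up with a learned model: if it is only ever asked for the single most probable next symbol, the majority class wins every time.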



> There are a bunch of solutions to this, but adversarial models do a good job of approximating a probability distribution like this.

The problem is that GANs on sequence data still stink compared to maximum likelihood: they train far more slowly and less stably, and they still don't generate sequences as good as a char-rnn with a bit of temperature tuning and beam search. They should be better for precisely the reason you give, but they aren't.
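For reference, the temperature tuning mentioned above amounts to rescaling the model's logits before sampling; a minimal sketch (hypothetical function and logits, not tied to any particular char-rnn implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from softmax(logits / temperature).

    temperature < 1 sharpens the distribution (closer to argmax);
    temperature > 1 flattens it (more diverse, riskier output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-symbol logits from a char-rnn step.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.5))
```

Low temperature recovers near-argmax behavior (and the rest-collapse problem); a moderate temperature is usually where char-rnn output sounds best.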



