Re: the sonnet problem: we could use GPT-2 to generate 10k sonnets, then choose the best one (say by popular vote, expert opinion, etc.); it's quite likely to be "compelling", or at least on par with an average published sonnet. Do you agree? If so, then with further deep learning research, more training data, and bigger models, we could probably shrink the candidate pool to 1k, then 100, and eventually maybe just 10 sonnets to choose from, while keeping similar quality. Would that be considered "progress for that problem" in your opinion? A sketch of the generate-and-filter loop is below.
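
For concreteness, here is a minimal sketch of that generate-and-filter loop, assuming the HuggingFace transformers GPT-2 API. The score_sonnet function is a hypothetical stand-in for the popular-vote / expert-judging step (any learned reward model could slot in there), and n is kept tiny so the sketch runs quickly:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Shall I compare thee to a summer's day?\n"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample n candidate continuations; the question above is whether
    # shrinking n (10k -> 1k -> ... -> 10) at fixed quality counts as progress.
    n = 10  # stand-in for 10k, kept small so the sketch runs quickly
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_k=50,
            max_length=120,
            num_return_sequences=n,
            pad_token_id=tokenizer.eos_token_id,
        )
    candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

    def score_sonnet(text):
        # Hypothetical scorer: in the comment this is popular vote or
        # expert opinion; here a toy proxy (lexical variety) stands in.
        return len(set(text.lower().split()))

    print(max(candidates, key=score_sonnet))

The point of the sketch is only the structure: sample many, select one; the open question is how far the sampling budget can shrink before quality drops.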

