
As I mentioned, I don't think any single prompt can demonstrate the presence of true reasoning. If a prompt isn't shown to generalize broadly, the model might just be doing a text match against something said before in the depths of the internet. You can see this in the next section: Kevin Lacker gets GPT-3 to answer some basic trivia questions correctly, but it will just as readily "answer" anything with the same textual structure as a basic trivia question, even if the prompt is nonsense. This strongly suggests that it's parsing out keywords and doing a lookup on them rather than consulting a consistent internal model.
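For illustration, here's the kind of exchange Lacker documents in his post (paraphrased from memory, so the exact wording may differ slightly):

    Q: How many eyes does my foot have?
    A: Your foot has two eyes.

A nonsense question gets a confident, grammatically well-formed answer because it pattern-matches the "How many X does Y have?" template, which is exactly what you'd expect from structural lookup rather than a world model.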


