A lot of people don't really get that our brains are a bunch of specialized subcomponents that work in concert (your pre-frontal cortex just cannot beat your heart, no matter how optimized it gets). This is unsurprising, as our brains are one of the most complex and hardest-to-monitor things on earth.

When an artificial tool that is really a point solution "tricks" us into thinking it has replicated a task that requires complex multi-component functioning within our brain, we assume the tool works the way our brain works.

The joke, of course, is that if you maliciously edited GPT's index for translating vectors to words, it would produce gibberish and we wouldn't care (despite it being the exact same core model).
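
To make that concrete, here's a toy sketch (hypothetical vocab and ids, not GPT's real tokenizer): the core model's output is just a sequence of token ids, and whether those ids read as language or gibberish depends entirely on the id-to-word lookup table bolted on at the end.

    import random

    # Pretend these ids came out of the core model; they never change below.
    vocab = ["the", "cat", "sat", "on", "mat"]   # id -> word table
    ids = [0, 1, 2, 3, 0, 4]

    def detokenize(ids, table):
        return " ".join(table[i] for i in ids)

    print(detokenize(ids, vocab))        # "the cat sat on the mat"

    # "Maliciously edit" only the lookup table; the model output is untouched.
    scrambled = vocab[:]
    random.shuffle(scrambled)
    print(detokenize(ids, scrambled))    # same ids, now (almost certainly) gibberish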

We are only impressed by the complex sequence-to-sequence strings it produces because the tokens happen to be words (arguably the most important things in our lives).

EDIT: a great historical analogy for this is how we thought about 'computer vision' and CNNs. They do great at identifying things in images, but notice that we still use image-based CAPTCHAs (even on OpenAI sites, no less)?

That's because it turns out optical illusions and context-heavy images are things that CNNs really struggle with (since the problem space is bigger than 'how are these pixels arranged').


