Jim Fan from NVIDIA explained how flawed the methodology of this benchmark is. I wouldn't put any weight on it.

https://www.linkedin.com/posts/drjimfan_please-see-update-be...


There's a fact in the article discussing this exact point around relative rates of antimicrobial use by country:

> Australia ranks seventh-highest in the developed world for antimicrobial community prescribing rates. Australia’s hospital antimicrobial use is estimated to be nearly three times that of the European country with the lowest use, the Netherlands.

So we're in the top 10, but not the single worst offender, at least by this metric.


I'd say that Geoffrey Hinton and Yoshua Bengio, two outspoken proponents of existential AI safety, are pretty familiar with how AI works.


Super cool resource for middle and high school students learning to code! I wish this existed when I was in school. Will definitely be sharing it with kids and even older folks wanting to learn the basics of code :)


In my own experiments with OpenAI's GPT-4 API with temperature set to zero, I was still not getting deterministic outputs; there were small variations between completions. Not sure why, and I haven't had a chance to dig further or ask their team how this happens.
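
For anyone who wants to reproduce this, here's a minimal sketch of the kind of check I ran. It assumes the openai Python package (the pre-1.0 ChatCompletion interface) and a made-up prompt; the point is just to send the same request several times at temperature zero and count distinct outputs:

    import openai  # pip install "openai<1.0"; assumes the old-style API

    openai.api_key = "sk-..."  # your key here

    def complete(prompt):
        # temperature=0 should make decoding greedy (argmax at each step),
        # so in theory every call with the same prompt returns the same text
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

    outputs = {complete("Name three uses for a paperclip.") for _ in range(5)}
    print(len(outputs))  # anything > 1 means the output is not deterministic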


Non-determinism in GPT-4 is caused by Sparse MoE [0]: expert routing happens at the batch level, so which expert a token lands on can depend on the other requests it happens to be batched with.

[0] https://news.ycombinator.com/item?id=37006224
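
To make the mechanism concrete, here's a toy sketch (my own illustration, not GPT-4's actual routing): a top-1 router with per-expert capacity limits, where the expert a token gets can depend on which other tokens share its batch:

    import numpy as np

    # Toy top-1 MoE router with per-expert capacity limits. This is NOT
    # GPT-4's real routing code -- just an illustration of why expert
    # choice can depend on which other tokens share the batch.
    rng = np.random.default_rng(0)
    N_EXPERTS, DIM, CAPACITY = 4, 8, 2  # each expert accepts at most 2 tokens
    gate = rng.normal(size=(DIM, N_EXPERTS))  # fixed gating weights

    def route(batch):
        # Assign each token to its best available expert, in batch order.
        scores = batch @ gate  # (tokens, experts)
        load = np.zeros(N_EXPERTS, dtype=int)
        assignment = []
        for row in scores:
            for e in np.argsort(row)[::-1]:  # best expert first
                if load[e] < CAPACITY:       # overflow -> next-best expert
                    load[e] += 1
                    assignment.append(int(e))
                    break
        return assignment

    token = rng.normal(size=(1, DIM))
    filler_a = rng.normal(size=(3, DIM))
    filler_b = rng.normal(size=(3, DIM))

    # Same token, different batch-mates -> possibly a different expert,
    # hence slightly different activations and, eventually, different text.
    print(route(np.vstack([filler_a, token]))[-1])
    print(route(np.vstack([filler_b, token]))[-1])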


Amazing concrete, thorough advice -- thank you so much for taking the time to share it with me and others in the same boat!


Super helpful take, thanks!


There are a lot of critiques here and elsewhere of the statement and the motivations of its signatories. I don't think they're right, and I think they distract from the very serious existential risks we face. I've written up my detailed views; see in particular "Signing the statement purely for personal benefit":

https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction...


There's very little here by way of argument for why these chatbots are or aren't sentient. The article has an assumed point of view (current AI bots aren't sentient) and then describes and judges users' reactions to chatbots in light of it. I don't think it adds much new to the societal conversation.


Agreed.

I generally agree with the "stochastic parrot" classification, but I also think we'll continue to use that argument well past the point where it's correct.

I'd rather be the person who overempathizes than the person who ridicules other people's empathy. I tried to write a bit about this here: https://superbowl.substack.com/p/who-will-be-first-ai-to-ear...


"Stochastic parrot" just strikes me a typical motte-and-bailey. The narrow interpretation ("LLMs only learns simple statistical patterns") is obviously wrong given the capabilities of ChatGPT, while the broad interpretation ("LLMs are text predictors") is trivial and says nothing of worth.


Congratulations both on being the rare example of someone anxious to err on the side of decency, and on gaining a subscriber.


> but I also think we'll continue to use that argument well past the point where it's correct

Well, it's pretty clear from those AIs' construction that they are parrots, and there are other designs that pretty obviously aren't (but don't get any impressive results like that). The border is not perfectly binary, but I don't think any knowledgeable person will overdo that argument.

But yeah, eventually a lot of people will (ironically) parrot the argument even when it obviously doesn't apply. That's normal.

