There's a fact in the article discussing this exact point around relative rates of antimicrobial use by country:
> Australia ranks seventh-highest in the developed world for antimicrobial community prescribing rates. Australia’s hospital antimicrobial use is estimated to be nearly three times that of the European country with the lowest use, the Netherlands.
So we are top 10 but not the single worst offender, at least by this metric.
Super cool resource for middle and high school students learning to code! I wish this existed when I was in school. Will definitely be sharing it with kids and even older folks wanting to learn the basics of code :)
In my own experiments with OpenAI's GPT-4 API with temperature set to zero, I was still not getting deterministic outputs: there were small variations between completions. Not sure why; I haven't had a chance to dig further or ask their team how this happens.
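A minimal sketch of the kind of determinism check described above, assuming the `openai` v1 Python client; the model name, prompt, and sample count are illustrative, and the API call only runs if an API key is present:

```python
# Hypothetical determinism check: request the same completion several
# times at temperature=0 and compare the outputs.
import os

def all_identical(outputs):
    """True if every completion text is byte-for-byte identical."""
    return len(set(outputs)) <= 1

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # assumes the openai v1 client is installed
    client = OpenAI()
    outputs = []
    for _ in range(5):
        resp = client.chat.completions.create(
            model="gpt-4",                       # illustrative model name
            messages=[{"role": "user", "content": "Name three prime numbers."}],
            temperature=0,
        )
        outputs.append(resp.choices[0].message.content)
    # Per the comment above, this can still print False at temperature=0.
    print("deterministic:", all_identical(outputs))
```

Even at temperature zero, sources of nondeterminism such as non-associative floating-point reductions on GPUs can plausibly produce slightly different logits between runs, so identical outputs are not guaranteed.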
There are a lot of critiques here and elsewhere of the statement and the motivations of its signatories. I don't think they are right and I think they take away from the very serious existential risks we face. I've written up my detailed views, see specifically "Signing the statement purely for personal benefit":
Very little by way of arguments for why these chat bots are or aren't sentient. This article has an assumed point of view (current AI bots aren't sentient) and then describes & judges users' reactions to chat bots in light of that. I don't think it adds very much new to the societal conversation.
I generally agree with the "stochastic parrot" classification, but I also think we'll continue to use that argument well past the point where it's correct.
"Stochastic parrot" just strikes me as a typical motte-and-bailey. The narrow interpretation ("LLMs only learn simple statistical patterns") is obviously wrong given the capabilities of ChatGPT, while the broad interpretation ("LLMs are text predictors") is trivial and says nothing of worth.
> but I also think we'll continue to use that argument well past the point where it's correct
Well, it's pretty clear from those AIs' construction that they are parrots, and there are other designs that pretty obviously aren't (but don't get any impressive results like that). The border is not perfectly binary, but I don't think any knowledgeable person will overdo that argument.
But yeah, eventually a lot of people will (ironically) parrot the argument even when it obviously doesn't apply. That's normal.
https://www.linkedin.com/posts/drjimfan_please-see-update-be...