This is frankly a bad and cavalier take on an extremely important subject. Many on the list are academics outside AI/ML and/or leaders of AI orgs at the very top of the field who have no need to catch up to, or slow down, OpenAI to benefit themselves. Risks from AI are very real, and Sam Altman himself has said so numerous times. In fact, he advocated for slowing down AI progress on Lex Fridman's podcast this month.
How do we reconcile Sam Altman's position as CEO of OpenAI with his repeated calls to slow down AI progress? Is the expectation that his conscience, sense of ethics, and concern for his own company's impact on society will temper the opposing pressure to maintain OpenAI's lead in the AI market?
I'm generally not a big fan of Altman or OpenAI, but their corporate structure ensures limited financial upside for Altman and the employees. So, aside from recognition and fame (which, as the head of YC for many years, Altman already had plenty of), there isn't a huge incentive for them to maintain their lead.
Short of something like a binding UN resolution, we don't have a sliver of hope of slowing down global AI progress, and that is a major factor in the doomer argument.