Hacker News | KyleLewis's comments

Can't wait for the "multimodal" version that can take a written description and generate meshes.


When a model "reasons through" a problem, it's just outputting text that is statistically likely to appear in the context of "reasoning through" things. There is no intent, no consideration of the options available, the implications, or the possible outcomes.

However, the result often looks the same, which is neat.


If you find yourself enjoying cable management or software optimization at any level, you should be careful before getting sucked into this. It's a great game.


Really happy this is happening! There's no reason we shouldn't be able to freely read research funded by NIH, NSF, etc., and there's a ton of high impact work there.


NIH research, at least, already had a public access requirement.


But that was after an embargo period of 12 months, during which a journal could paywall it. This forces immediate availability, which is a good thing.


I could be wrong, but I think they might have been talking about hypothetical, arbitrarily complex software. As a limiting case, if software were simulating a mind down to the quarks, it becomes unclear what the difference would be.

I agree with your point, of course; what we have today is certainly not like a human mind.


The problem with the hypothetical arbitrarily complex software is that there is no particular reason to believe it could exist, never mind that it will (at least not for meaningful definitions of "software"). A computer so powerful, and a programmer so smart, that they can represent the behaviour of the constituent parts of a human brain at the subatomic level as a state machine programmable to achieve different thought processes is at least as much of an imaginary construct as the metaphysical dualist mind it's supposed to be a counterargument to.

And you don't need to think that your brain is anything other than an immensely complex state machine to believe that some of the core parts of what we consider to be self-awareness (emotions, or chemical responses to certain stimuli which have, over billions of years, helped the brain parts of biological organisms make more optimal eating, fighting, and mating decisions for the survival of their genetic code) are an altogether different level of problem to train an AI on than solving maths problems or generating text. Not least because if you want AIs to write love letters to each other, you can get very pleasing results quickly with a Chinese room, without the inconvenience of having to simulate all the intractable chemistry of desire.


I hope more disciplines can shift their publishing culture towards the norm found in physics, where arXiv is the go-to venue. I'm not sure why that is the case, but it's pretty great.

