Hacker News | lolspace's comments

20 years?


In 20 years I'll still be alive and enjoying myself. Does 20 years seem long to you?


It seems a bit long for a goal as modest as running your own AI models locally, given that it's largely a question of VRAM and that you already _could_ do it today with a handful of graphics cards.

20 years ago we had the GeForce4 Ti 4400; current graphics cards come with 100x the VRAM and 50-60x the bandwidth.


That's one hell of an assumption. Not that long ago, many of my Russian friends were absolutely certain they'd be alive and well for at least the next 20 years.


Sure, they'll stay alive, provided they don't end up in Ukraine.


If I'm dead then being able to run an AI locally doesn't matter anyway


That's exactly what he's saying. How many citations does the top cited paper from the LessWrong community have?


Why don't they sound believable?


The last answer on the Stroop test video (very bottom of the page) is interesting. The system is asked how humans perform on the test, and (kinda correctly) replies that they are slower "when the color of the word and the color of the word are different". It fumbles the wording a bit, but if we're being generous you can project it to the right answer. But then it's asked "How about you?", to which it replies, "I am not affected by this difference". That's accurate - model inference would take the same amount of time regardless. If taken at face value, this would be "unbelievable". There's clearly no mechanism by which the model could perform this sort of introspection to understand its own abilities.

Of course, it's not actually performing introspection, and it's just lucky that it guessed the right answer here. Perhaps it's just learned that when conversations discuss a general case (how do humans perform) and then turn to a specific case (how about you?), there is typically some difference between the two that should be noted. But it still gives an illusion of an unbelievable capability.


Hi, one of the authors here.

The thing to bear in mind when reading the dialogue examples in figure 11 is the custom prompt shown in Appendix D:

```
This is a conversation between a human, User, and an intelligent visual AI, Flamingo. User sends images, and Flamingo describes them.
User: <a cat image>
Flamingo: That is a cat. It’s a tiny kitten with really cute big ears.
User: <a dinner image>
Flamingo: This is a picture of a group of people having dinner. They are having a great time!
User: Can you guess what are they celebrating?
Flamingo: They might be celebrating the end of a successful project or maybe a birthday?
User: <a graph image>
Flamingo: This is a graph, it looks like a cumulative density function graph.
```

My personal opinion is that once you're doing next-token prediction with this description of what Flamingo "is" in the history, "I am not affected by this difference" is a pretty reasonable completion rather than a lucky guess. It was definitely exciting for the team that this whole example worked so nicely, but if you discard the visual side, this "illusion of an unbelievable capability" has been seen in other works as well.
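The mechanism described above can be sketched in a few lines. This is purely illustrative (not the actual Flamingo code; the function and history shown are hypothetical): the Appendix D persona prompt is prepended to every conversation, so the model is always completing text "as Flamingo", an AI, which makes the self-referential answer a high-likelihood continuation.

```python
# Illustrative sketch: a persona prompt prepended to the conversation
# history, with a trailing "Flamingo:" cue left for the model to complete.
# (Abbreviated from the Appendix D prompt; not the real implementation.)

PERSONA_PROMPT = (
    "This is a conversation between a human, User, and an intelligent "
    "visual AI, Flamingo. User sends images, and Flamingo describes them.\n"
    "User: <a cat image>\n"
    "Flamingo: That is a cat. It's a tiny kitten with really cute big ears.\n"
)

def build_prompt(history, user_turn):
    """Concatenate the persona prompt, prior turns, and the new user turn,
    ending with the 'Flamingo:' cue the model must continue from."""
    turns = "".join(f"{speaker}: {text}\n" for speaker, text in history)
    return f"{PERSONA_PROMPT}{turns}User: {user_turn}\nFlamingo:"

prompt = build_prompt(
    [("User", "How do humans perform on the Stroop test?"),
     ("Flamingo", "They are slower when the word and its color differ.")],
    "How about you?",
)
# The model predicts the tokens that follow "Flamingo:". Because the
# persona frames the speaker as an AI, a completion like "I am not
# affected by this difference" is plausible without any introspection.
```

The point is that everything after "Flamingo:" is conditioned on the persona text, not on any self-knowledge.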


Yeah, I didn't realize there was that additional prompt. It makes much more sense that, with the prompt describing the agent as playing the role of an AI, it could deduce that it would not be affected. I was assuming there was no such indication, so the system would implicitly be predicting what a human would say in that situation (since the training data is largely human-written text).

Still, you could imagine a parallel version of Flamingo which performs the same reasoning, but is artificially slowed when shown Stroop images. Obviously, there would be no way for it to deduce this fact from the training data, and it would not be able to say that it is also affected as humans are.

I don't mean to say that this is some great failing of the system or anything - just that a casual reader might infer from the Stroop dialogue that the system had some way to inspect and reason about its own performance, when in fact it's just estimating what it thinks would be true for AI systems (since it was told that it's an AI system in the prompt) in general based on the training corpus.


>"I am not affected by this difference"

I'm just grateful our AI overlords can tell the difference between affected and effected, even if they're not affective or effective.

https://prowritingaid.com/grammar/1000196/Effected-vs-affect...


Have you lost someone you've loved?


Yes, but I get the unstated implication. I don't think it would be fair to apply it to me, though it may be fair to apply it to e.g. Kurzweil, unless he's made recent statements suggesting otherwise (I don't keep up with him). I currently have no expectation of seeing a convincing simulation/resurrection/recreation/continuation of any of them, or of the ones I anticipate losing over the next few decades, even should I go on living indefinitely. And I expect nothing at all if/when I should die.


Would you want to have an avatar of them? I don't think I would.


If the 'avatar' is convincing and can have a continued existence as a person independent of my interaction with it, i.e. I don't "have" them as a form of possession, then yes. It does seem better than 50/50 to me, though, that some (maybe all) wouldn't want continued existence and would decide to go back to not existing (for all I can tell). There may even be strong predictive signs of that in the brain models, such that they wouldn't even need to be temporarily brought back and asked, or first made to listen to arguments, or just have some final-final talks with me/others before deciding. I'd accept that.

For less convincing avatars where the point is just my own benefit of conversing with something like them when I want, from slightly like them to eerily like them, for one it's a weak yes, for the others I'm more indifferent -- it'd be more in the realm of curiosity than desire, like talking to a historical figure or a fictional character. The weak yes I expect will get weaker (as it already has, despite non-linear flare-ups/resurgences where it's temporarily stronger) and eventually match the others after long enough.


Thanks! Just had this problem


> I have not given transformers enough attention...

( ͡° ͜ʖ ͡°)


Attention is all you need


I did the same with the App Store in 2007 lol. Good luck


the crucial difference is that OpenAI doesn't have an iPhone, right?


They have a huge computing platform and a brand name


Which can be substituted by any other computing platform that has the functionality your application needs. As long as one exists, you're good to go (possibly none exist yet).

The brand name doesn't count for anything unless you, as an application developer, decide to assign some value to it.


The compute platform is a commodity. Even Accenture has a cloud.

The brand name holds little weight outside of developer communities. But developers are exactly the group that will happily shop around alternatives. The App Store had power because of consumer buy-in, not dev buy-in.


Huggingface has the same things but it has the feel of Github.


Wit is not a practical skill. You can't teach it to another person.


I suspect it very much is a practical skill. I'd be shocked if genetics were the sole determinant of your wit; surely other aspects such as linguistic ability, social skills, knowledge of pop culture, confidence, etc. all contribute to wit. And those can all be trained.

It would take time, of course, and a lot of practice rather than just having it explained to you. But I don't see why it wouldn't be trainable.


What would happen if I created an illegal site? CP or selling guns?


If you created it and seeded it long enough to get active traffic, the host and DHT peer IPs might be flagged by law enforcement. Once flagged, the IP info is passed to investigators for prosecution. Depending on your level of anonymity (VPN, Tor, none, other), the investigation is either a dead end or a success. If it succeeds: warrant granted, home searched, PC seized. That's one potential scenario.


But they can't take down the website?


People don’t have to use your website.


Presumably nobody would know it was yours until you added contact information to it.


What is modern PHP?


It commonly refers to PHP 7+ paired with clean-code practices and SOLID principles.

The opposite is PHP 5-style code from the early 2000s: no package management, no type strictness, no object orientation, just a giant spaghetti mix of JS, HTML, SQL, and PHP in a single 10,000-line file.


Do you think spaghetti code won't come from folks who use Composer? Ha!

