
This seems... dangerously careless. What if it uses the internet to seek out zero-day vulnerabilities and exploit them routinely? Sure, humans also do this, but we're talking about a new level of 0-day exploitation carried out at scale. Sure, maybe it won't, but do you trust a hallucinating, left-brained, internet-trained intelligence to be strictly logical and mindful in all the actions it takes autonomously (as this project aims for it to do)?


As long as AI is available to the general public, I can guarantee you that there are hundreds of developers trying to let GPT take over the world by providing it all the APIs it asks for.

I'd work on it myself if I knew enough about it and had enough free time.

Asking millions of humans to be responsible is like asking water to be dry.

It should either be regulated to hell, or we should accept that it'll escape, if that's even possible with current technology.


Regulations simply don't work whenever there's an economic (human) interest. (See: drugs.) The cat is out of the bag; we just have to think about how to face the new scenario.


Should we regulate everything you don't "know enough about" and don't "have enough free time" to learn about?

Isn't it silly to jump to these conclusions when you yourself admit you really don't know anything about the tech?


No, it's a legit concern. Both things will happen - there will be abuses, and there will be good uses. It will be a complicated mess, like viruses interacting with anti-viruses: continuous war. My hope is that AGI learns to balance out across many interacting agents.


Sure, there will be abuses, but not in this way, I don't think. GPT at this point isn't capable of creating novel computer viruses.

If you want to scrape random websites and have GPT hammer at them looking for old vulnerabilities, I think you could get that to work, but to what gain? You'd be spending a crap ton of cash on API requests and compute, and people obviously already do this without GPT. The cost probably isn't worth it here for attackers.
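A rough back-of-envelope (every number here is an assumption - GPT-4's March 2023 list prices plus a made-up scan size - not a measurement):

    # Back-of-envelope: what a GPT-assisted scan for old vulnerabilities might cost.
    # All figures below are assumptions, not measurements.
    prompt_price_per_1k = 0.03      # assumed GPT-4 prompt pricing, USD (March 2023)
    completion_price_per_1k = 0.06  # assumed GPT-4 completion pricing, USD
    tokens_per_page = 2_000         # assumed: one scraped page fed in as context
    tokens_per_reply = 500          # assumed: the model's analysis of that page
    sites = 100_000                 # assumed scan size

    cost = sites * (tokens_per_page / 1000 * prompt_price_per_1k
                    + tokens_per_reply / 1000 * completion_price_per_1k)
    print(f"~${cost:,.0f} per pass")  # ~$9,000 - a classic scanner does this for near-free

Even with generous assumptions, that's thousands of dollars per pass for something conventional scanning tools do for roughly the cost of bandwidth.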

Then, I'd hope OpenAI would have some way to detect this and shut these people down. I doubt they do right now, but that'd be my hope ...


We should regulate everything which could cause mass havoc. Like some chemicals, like radioactive isotopes, like nuclear research, like creating viruses.

Is AI in the same category? Some respectable people think so.


“Regulation” is pointless here. It’s bits and bytes; it makes about as much sense as the attempts to regulate encryption algorithms did.


It seems to query the internet, get a response, send it to OpenAI, wait, get the response back from OpenAI, and repeat. Any old-school security scanner is 1000x more efficient, not to mention that API requests to OpenAI are quite limited.

What's actually dangerous is that it may execute scripts on your own machine, so if it were to do some funky things, the danger would be mostly to yourself.
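The whole thing boils down to a loop like this (a minimal sketch of my reading of it, not Auto-GPT's actual code; `execute` is a hypothetical stand-in for its web-search / run-a-script commands):

    # Minimal sketch of the agent loop described above. Not Auto-GPT's real code;
    # `execute` is a hypothetical dispatcher for its web/shell commands.
    import openai  # assumes the 2023-era openai==0.27 client

    def execute(command: str) -> str:
        """Hypothetical: search the web, run a local script, etc."""
        return f"(pretend we ran: {command})"

    def run_agent(goal: str, steps: int = 10) -> None:
        history = [{"role": "system",
                    "content": f"You are an autonomous agent. Goal: {goal}"}]
        for _ in range(steps):
            reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
            command = reply["choices"][0]["message"]["content"]
            history.append({"role": "assistant", "content": command})
            history.append({"role": "user",
                            "content": f"Result: {execute(command)}"})

One full network round-trip to OpenAI per step is exactly why any old-school scanner runs circles around it.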


> What if it uses the internet to seek out zero day vulnerabilities and exploit them routinely?

How would GPT-4 make this more likely or scalable?


There is the possibility of API upgrades at OpenAI flipping this from "not dangerous" to "dangerous". If the AI just needs some amount of intelligence to become dangerous, then it may cross that threshold suddenly - with OpenAI's developers unaware that their next round of changes will be automatically deployed to an autonomous agent, and with the Auto-GPT runners unaware that those changes may substantially increase the power of the model.


I'm not really sure why we are assuming that these language models can ever have any form of intelligence.

To me, this is just like saying "we don't know if the latest CPU released by Intel will enable Linux to become intelligent."


Language models obviously have some form of intelligence right now. You can have GPT-4 take SAT tests, play chess, write poetry, predict what will happen in different social scenarios, answer theory-of-mind questions, ask questions, solve programming puzzles, etc. There are some measures on which GPTs are clearly below human level, some where they are far beyond it, and some where they are within the human range. The question of whether language models have any form of intelligence has been definitively answered - yes, they can and do - by existence proof.

What definition or description of intelligence do you use such that you doubt that language models could have it? Would you have had this same definition in the year 2010?


> Language models obviously have some form of intelligence right now.

This is not "obvious" in any sense of the word. At best, it's highly debatable.


I take intelligence to be a general problem solving ability. I think that's close to what most people mean by the term. By that definition it's clear that LLMs do have some level of intelligence - in some dimensions greater, lesser, or within the range of human intelligence.

What definition do you have for intelligence and how do LLMs fail to meet it?


It is not clear LLMs have a "general problem solving capability" at all. That's the entire point. That's a high bar!


What do you call being able to play chess and any other well-known game, do well on a battery of standardized tests, write code in a variety of languages for a variety of problems, ask questions, write fiction or poetry, and generally just take a shot at anything you happen to ask?

I just can't take the idea that there is ambiguity as to whether these things have general problem solving skills seriously. They obviously do.

As I asked up-thread, if I had a chat window open with you, what's something you would be able to say or do that an unrestricted ChatGPT wouldn't?


I would be able to make a long list of things while maintaining logical consistency with things earlier in the list. For instance, I asked ChatGPT-4 to create a schedule for a class, and it started off okay, but by the time it got to the end of the schedule, it started listing topics it had already covered. Really shows how it's just going off of statistics.


This is an example of ChatGPT performing poorly, but not of it being unable to do the thing. Nobody would say ChatGPT has human-level intelligence across all domains - just that it has general problem-solving ability. In other words, I'm saying it has an IQ, not that it has the highest possible IQ.

And, of course, there are domains where ChatGPT will do better than you. Since I don't know your skill set I don't know what those domains are, but I assume you'd agree. Just as ChatGPT producing a bad schedule doesn't disprove its intelligence, you not being able to come up with acrostics or pangrams easily (or whatever) doesn't disprove yours.


You're just moving the goalposts.

GPT being bad this way, and being bad at "substitute words in all your responses", means it is leaking the abstraction to us. It's because of how it's built and how it works. It means it isn't a general problem-solving thing: it's a text-prediction thing.

GPT is super impressive, I don't know how many times I need to say that, but it isn't intelligent, it doesn't understand the problem, and it doesn't seem like it ever will get there.


That's not moving the goalposts - it's exactly what I've said throughout this thread. GPT is better, worse, and within human ranges at different tasks - but it can do a wide range of tasks.

That GPT can solve a wide variety of problems, including problems it's never seen before, is literally the definition of intelligence, and pointing out results where it underperformed doesn't even attempt to rebut that.


Sure, I would agree with that. I do not agree that it is doing anything more than predicting text. But it does it really well!

> including problems it's never seen before

Can you demonstrate this?

> is literally the definition of intelligence

I wish it was this easy! Unfortunately, it is not. GPT says the definition of intelligence is:

Intelligence is a complex and multifaceted concept that is difficult to define precisely. Broadly speaking, intelligence refers to the ability to learn, understand, reason, plan, solve problems, think abstractly, comprehend complex ideas, adapt to new situations, and learn from experience. It encompasses a range of cognitive abilities, including verbal and spatial reasoning, memory, perception, and creativity. However, there is ongoing debate among researchers and scholars about the nature of intelligence and how to measure it, and no single definition or theory of intelligence has gained widespread acceptance.

Which is pretty good!


It’s not that it performs poorly; it’s that it performs poorly in a particularly leaky way. The error reveals its true nature.


Well, I dunno. Similar to Stockfish, Wolfram Alpha, etc., I suppose! (Though it seems it's much worse at specific problems than those tools are at their own.)

I'm not saying it isn't impressive! Just that it very much seems to be really good at figuring out what text should come next. I don't think that's general problem solving!

Giving it a SQL schema and getting valid queries out of it is super impressive, but I have no idea what it was trained on.

> I just can't take the idea that there is ambiguity as to whether these things have general problem solving skills seriously. They obviously do.

It is not obvious to me this is the case! Often I will get totally wrong answers, and I won't be able to get the correct answer out of it no matter how hard I try.

> what's something you would be able to say or do that an unrestricted ChatGPT wouldn't?

Well, I'd ask you clarifying questions, for one! GPT doesn't do this type of stuff without being forced to, and even then it fails at it.

Also, if you asked me to do something like "replace the word 'a' with the word 'eleven' in all your replies to me", I wouldn't do weird garbage stuff, like replying with:

"ok11y I will repl11ce all words with the word eleven when using the letter 'a'"

lol


>> This is not "obvious" in any sense of the word. At best, it's highly debatable.

Does a dog or cat have intelligence?

If you answered no, then I would ask whether you believe that, by some measure, a dog or cat has more intelligence than a rock.

And as a follow-on I would ask if you think GPT demonstrates more intelligence than a dog or a cat.

But perhaps you believe that in every one of these examples there is not a single case where it "obviously has some form of intelligence."

(I am really trying to highlight the semantic ambiguities)


It probably doesn't have more intelligence than a dog or a cat...

Just like chatbots 20 years ago didn't, even though they could talk, too.


Would you settle for "behave exactly as if they had some form of intelligence"?

Because from where I sit it's a distinction without a meaningful difference.


> Would you settle for "behave exactly as if they had some form of intelligence"?

Sure, it behaves as if it has some form of intelligence in the sense that it can take external input, perform actions in reaction to this input, and produce outputs dependent on the input.

This historically has been known as a computer program.


You're a computer program?


It never fails: when a techbro is told that LLMs aren't what he thinks, he falls back to a field he certainly has more authority on: the human brain/intelligence.


The issue here is that the "LLMs have intelligence" side of the argument can lay out a simple mainstream conception of intelligence (general problem solving) and explain directly how LLMs meet this definition. The other side of the argument, at least here in this thread, seems to be an empty insult or two and... Nothing else?

Again, just say what you think intelligence is and why you think LLMs don't have it. If you can't do that then you have no business expressing an opinion on the subject. You really aren't expressing an opinion at all.


Brother, if I could get the people who believe ChatGPT is intelligent to post something more than "oh, and aren't you just an autocomplete", I would be so god damn happy.

This fantasy land you live in, where people with no formal training in the matter are making high-brow, elegantly reasoned arguments, doesn't exist. And the reason you think the "other side of the argument" is just being insulting is that the burden of proof is not on us.

It doesn't help that half the time you guys post, you directly contradict the main researchers' own assertions.


It's odd that you are either unwilling or unable to express your position but still comment.


People are claiming GPT has general problem solving capability and it isn't true. They don't know what they are talking about.


This just isn't true at all, though. Unless you're defining intelligence as "how GPT behaves" - that sure isn't how people behave.

Even with GPT-4, it is easy to get it into a loop where it's just swapping one wrong answer for another. It doesn't act like it is intelligent - it acts like it is trying to predict the next text to display! Because that is what it is doing!


Is stockfish intelligent?


Stockfish is not general. LLMs are able to not only play chess but also do a wide variety of tasks similar to humans. If I had a chat window open with you, what would you be able to do or say that an unrestricted ChatGPT could not?


This is not a hard problem to answer, if you try to. Here's a good example:

> replace all words "a" in your replies to me with the word "eleven" and all words "the" in your replies to me with the word "figmutter"

> Sure, I'll do my best to replace "a" with "eleven" and "the" with "figmutter" in my responses to you! How can I assist you today?

> Tell me a brief story about yourself!

> Sure, here's a brief story about Eleven:

> Eleven is a language model developed by OpenAI based on the GPT-3.5 architecture. Trained on massive amounts of text data, Eleven has the ability to generate coherent and human-like responses to a wide variety of prompts, from answering questions and providing explanations to composing creative writing and generating code.

> Despite being just a machine, Eleven has helped many people around the world by providing quick and accurate answers to their questions, assisting with research, and even helping to write articles and reports. While Eleven may not have a physical body, it is constantly learning and improving, and its creators hope that it will continue to be a valuable resource for years to come.

lol? Very intelligent ...


>> Is stockfish intelligent?

It isn't general intelligence but I would argue that it is more intelligent than a new-born human being.


I think it's hard to define intelligence, and I wouldn't say (generally) that computer programs are intelligent.

If a building was on fire and you had to save a running instance of stockfish or a newborn, you'd probably pick the newborn.

But! If you do say stockfish is intelligent, sure! GPT is too!


Have you seen what they can do? /s


Yeah next time someone says this to me in earnest I'm going to lose my mind.


It doesn't. It makes the consequences more dramatic, if it (accidentally, even!) works out how to create its own successor, because at that point the genie is out of the bottle and you won't get it back in.


It would need a big honking datacenter or billions of compute credits, and 6-12 months of safety.


Doesn't Alpaca seem to suggest that assumption is no longer true?


Alpaca rides on LLaMA, and LLaMA was trained on 1T tokens over a long time. The fine-tuning takes an hour with low-rank adaptation (LoRA), but pre-training a new model takes months.
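For a sense of how small the LoRA step is, here's a minimal sketch with Hugging Face's peft library (the model id and hyperparameters are illustrative assumptions, not Alpaca's exact recipe):

    # Minimal sketch of low-rank adaptation (LoRA) on a LLaMA-style base model.
    # Model id and hyperparameters are illustrative, not Alpaca's actual setup.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
    config = LoraConfig(
        r=8,                                  # rank of the low-rank update matrices
        lora_alpha=16,                        # scaling factor for the updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # a fraction of a percent of the 7B weights

Only those injected low-rank matrices get gradients, which is why the fine-tune fits in an hour while the 1T-token pre-training of the base model took months.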


Yeah, maybe there is that possibility, but there is the possibility of a person doing that too. GPT-4 is probably less likely to be able to do it than a person.


I'm not really picking on GPT-4; my point applies to any LLM running in an autonomous, internet-connected mode.


Yeah, I don't mean to be specific about GPT-4 either, it's just the most powerful model so often convenient to use as an example.


Not sure if OpenAI has shut down any 'experiments' before, but this might be a candidate.


Wait for somebody to mix the concepts of "virus" and "AI". And we had better begin preparing some kind of AI sentinels, because the question is not "if" but "when".


According to Agent Smith, virus + intelligence = humanity.


I don't remember Mr. Smith mentioning anything about intelligence :-)


According to Bill Hicks, virus + shoes = humanity.


Given that ChatGPT lowers the barrier to entry for programming, I'd be worried about teenagers writing malware sooner than about AI supremacy.


I honestly appreciate the new woke meta of fearmongering against sentient AIs. It's a welcome break from the anti-capitalism woke meta.


Breaking news from HN user ActorNightly: Anti-AI Visionary Elon Musk is a woke soyboy now. Will antifa's reign of terror ever end?


This reminds me very much of the time someone from NASA came out with an article about working on a hypothetical warp drive, and everyone literally believed we were going to have an FTL warp drive, because people from NASA were working on it.


Same screeching and pearl-clutching as always. Covid has wound down, Trump is out of office, Twitter is still there (unfortunately). People have to have something to get hysterical about, and this is it for now.




