Buffout's comments

This "kid level" argument fooled even Albert Einstein, Roosevelt, and many others. into developing atomic bomb. "If we don't build it Nazis will"

Using it was justified by a different kid-level argument: "It will prevent even more death and suffering."


The difference is that a lot more thought went into it. Not only that, the president's domain of expertise and distinction/authority covers exactly these issues. You weigh your options, you navigate a complex issue with many factors involved. Preventing more death and suffering was indeed one of the most significant factors in Roosevelt's decisions (including the decision to use it, as a direct assault on Japan would have cost far more in lives and suffering for both Japan and the US).

The difference is of course a well-thought-out weighing of options: to feel into the issue and make your choice. Taking my immediate surroundings, a lot of people don't engage in that level of philosophy. Knee-jerk reactions are still very common in our species.


Absolutely correct. But that fact doesn't make these arguments acceptable.


Scientists have a moral responsibility not to build weapons of mass destruction, for example. Too bad even Einstein couldn't keep to your advice. Scientists are sometimes not so smart. They can be fooled like anybody else.


We're not talking about accidentally inventing a machine gun; we're talking about being cautioned about inventing a machine gun and not caring:

"Hey you're inventing a machine gun!" "No I'm not, and even if I was, machine guns can be used for many things other than violence!"

Edit: Also, Einstein in this analogy is the one cautioning about someone being able to invent a machine gun.


"Hey, maybe I can use it for starting 10 remote jobs", said software engineer.


- some guy uses ChatGPT to generate a food recipe, doesn't check it. He poisons himself.

- some guy uses ChatGPT to generate an electronic circuit, doesn't check it. He electrocutes himself.


Those two seem about a week shy of drowning in a puddle of their own accord.


So, what prevents ChatGPT from using a detection tool and fine-tuning its response accordingly?


Nothing, this is how adversarial training works, but it also works both ways.


It works both ways, but generation is advantaged in the long run. There has to actually be a statistical difference to detect, and AI outputs without statistical differences from human output are obviously possible, since humans make them all the time.
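
As a toy illustration of that coin-flip endpoint (my own construction, not any real detector): train a detector on a single proxy feature while the generator keeps nudging its statistics toward the human distribution. Detector accuracy decays toward 0.5 as the distributions converge:

    # Toy arms race on one proxy feature (e.g. average sentence length).
    # All numbers are illustrative; nothing here is a real detector.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    human = rng.normal(loc=5.0, scale=2.0, size=(1000, 1))

    gen_loc, gen_scale = 2.0, 0.5  # generator starts clearly distinguishable
    for rnd in range(8):
        fake = rng.normal(gen_loc, gen_scale, size=(1000, 1))
        X = np.vstack([human, fake])
        y = np.array([1] * 1000 + [0] * 1000)  # 1 = human, 0 = AI
        detector = LogisticRegression().fit(X, y)
        print(f"round {rnd}: detector accuracy {detector.score(X, y):.2f}")
        # Generator update: move its statistics toward the human
        # distribution, i.e. toward whatever the detector keys on.
        gen_loc += 0.5 * (float(human.mean()) - gen_loc)
        gen_scale += 0.5 * (float(human.std()) - gen_scale)
    # Accuracy decays toward 0.5 (a coin flip) as distributions converge.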


Sort of. I'm aware that in principle the generator has an advantage and eventually the detector will average out to a coin flip at best.

However, some advantages can disappear when you put constraints on the output, such as quality and correctness.

So whilst the end result might be statistically harder to attribute to a human or an AI, it can also be less useful to the end user overall.


"However some advantages can disappear when you put constraints on the output such as quality and correctness."

Only if you suppose that the ideal output is superhuman. In the case of OpenAI et al, that's arguably the case, but those aren't the players that are going to get into an arms race with detection anyway. They want it to be relatively easy to detect AI generated content, because they're not in the plagiarism business, and anti-plagiarism measures will get the public and media off their backs. And nobody who is interested in targeting plagiarism has nearly the funding to build their own LLM on a level that matters.

So if there's an arms race in the near term, I expect it will be with postprocessors instead. These will be much smaller models (i.e. runs in your browser, or at least on a small backend machine) that take the output of ChatGPT and tweak it to fool detectors. They won't care about maximizing quality or accuracy, but will just care about preserving meaning while erasing statistical signs of AI generation.

I don't know if the business case for that will be there. It's there for selling papers, and almost certainly some people will try their hand at these models just for the challenge and/or to prove a point.
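
A postprocessor in that role wouldn't even need to be a model at first; even a crude rule-based rewriter shifts the surface statistics a detector keys on. A hypothetical toy sketch (the synonym table and swap rate are made up):

    # Hypothetical toy postprocessor: perturb the wording just enough to
    # shift surface statistics while (roughly) preserving meaning. A real
    # product would use a small paraphrase model, not a fixed synonym table.
    import random
    import re

    SYNONYMS = {  # tiny illustrative table, not a real resource
        "use": "employ",
        "however": "still",
        "important": "significant",
    }

    def postprocess(text: str, swap_rate: float = 0.5) -> str:
        def swap(match):
            word = match.group(0)
            repl = SYNONYMS.get(word.lower())
            if repl and random.random() < swap_rate:
                return repl.capitalize() if word[0].isupper() else repl
            return word
        return re.sub(r"[A-Za-z]+", swap, text)

    print(postprocess("However, it is important to use detectors carefully."))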


I'm not sure that's how it would actually work out.

Most humans can't write, say, an essay to save their lives.

And those who do write very well tend to have their own signature.

Whilst it's not 100% accurate, we've managed to fairly successfully attribute a lot of unknown works to specific authors based on their known works.

So if you create a generator that produces output equal to, say, the top 1% of human authors, I'm not entirely sure you can get one that doesn't have its own signature.

Because whilst, as you said, most humans produce output that is statistically indistinguishable from most other humans, the output that tends to survive selection bias and become known works is quite distinguishable by definition.

So you don't even need to get to superhuman capability; you just need a high enough output quality to limit the statistical search space from billions down to millions or even thousands.


This may be along the lines of what you’re suggesting, but what if you flipped this around: instead of trying to recognize AI, you recognize the student? You model each student’s quirks so you can tell if they wrote their essay, or if someone else did. Now you don’t care about AI specifically; you just care about whether they wrote what they submitted.

The main failure mode I see here is students dramatically improving and throwing the system off. If someone gets a tutor or goes to writing workshops, you don’t want to accuse them of plagiarism just because they got better. But there may be ways you could deal with that, like having the student submit new samples.


That could work, but that is changing the problem and moving the goalposts: a plagiarism detection system that is essentially trained on individual authors would be able to flag any time they skew too far from their rolling average.

I’m not even sure if ML is absolutely necessary for this or not.
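
A minimal sketch of what that could look like without heavyweight ML: build each student's profile from character n-grams (a classic stylometry feature) and flag submissions that drift too far. Everything here (samples, features, threshold) is illustrative, not a tested system:

    # Per-student authorship check: compare a submission against a profile
    # built from the student's prior writing, using character n-grams.
    # No AI detection involved at all.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    known_samples = [  # essays the student previously wrote in class
        "I reckon the experiment went fine, though the data was a bit noisy.",
        "I reckon we ought to rerun it; the noise in the data bothered me.",
    ]
    submission = "The results demonstrate a statistically robust effect."

    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
    profile = np.asarray(vec.fit_transform(known_samples).mean(axis=0))
    score = cosine_similarity(vec.transform([submission]), profile)[0, 0]

    if score < 0.6:  # threshold is illustrative; would need calibration
        print(f"similarity {score:.2f}: far from this student's profile")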


   > I have more success writing reusable libraries that do very, very fundamental stuff
Same here. Things like logical and mathematical operators. Memory access and stuff.


"generalized" means the most simple code you can come up with. This simple code is then used thousands times from very different programs, and it never changes.


See the XKCD comic :)

I would disagree that "generalized" equates to simple. Generalized means generic and usually abstract.

- https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

- https://blog.codinghorror.com/rule-of-three/ "It is three times as difficult to build reusable components as single use components"

I think implicit in the point is that most code does not go on to be used thousands of times (let alone dozens, or even more than once). Even if some code is used more than once, making it shared and identical has significant costs as well.


Never said it is easy to build simple reusable code. The code itself must be simple, but the process behind devising it may not be. If the code is not simple, it is more prone to error.
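
To make the kind of code in question concrete, here's a made-up but representative example: the surface is trivial, it gets called from everywhere, and it never needs to change once the interface is right, even though getting the interface right is the hard part:

    # A "fundamental" reusable primitive: simple, stable, called everywhere.
    def clamp(value, lo, hi):
        """Constrain value to the closed interval [lo, hi]."""
        if lo > hi:
            raise ValueError("lo must not exceed hi")
        return max(lo, min(value, hi))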


Is this such a big problem? Everybody must verify data anyway. Otherwise bad things may happen, like the evil AI apocalypse that scientists have warned us about.


The researcher in the article discovered that the oven pings Google, Yandex, and Baidu to check if internet is present.

I think whoever coded the oven firmware was being fair, because the oven's connectivity data will be shared equally with all major sides.

It's also resilient: if it were only Google, or only Baidu or Yandex, then they would have too much power to control the ovens. For example, Google might not respond, and the oven would never enter the "internet present" state.

... Or they could just code different firmware for each region.
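
The article doesn't show the firmware, but a resilient check like that usually just means "declare internet present as soon as any one probe host answers". A guess at its shape (hosts, ports, and timeout are assumptions, not the oven's actual code):

    # Sketch of an "is the internet present?" check that is resilient to
    # any single vendor being down: one reachable host out of three is enough.
    import socket

    PROBES = [("google.com", 443), ("yandex.ru", 443), ("baidu.com", 443)]

    def internet_present(timeout: float = 3.0) -> bool:
        for host, port in PROBES:
            try:
                socket.create_connection((host, port), timeout=timeout).close()
                return True  # any one vendor answering is enough
            except OSError:
                continue
        return False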


How will they contact us? We need a galactic mailbox. And you don't know how to build one unless you already have one.


Earth's galactic mailbox is located near our local planning department in Alpha Centauri, only four light years away.


We should probably check it out just in case they plan to build a hyperspatial express route through our star system.


" ‘I don’t know’ said the voice on the PA, ‘apathetic bloody planet, I’ve no sympathy at all.’ "

