Hacker News | not2b's comments

Conservapedia had to have a person create each article and didn't have the labor or interest. Grok can spew out any number of pages on any subject, and those topics that aren't ideologically important to Musk will just be the usual LLM verbiage that might be right or might not.

People were literally buying horse dewormer when their doctors wouldn't prescribe it for them. "Influencers" were selling it. So the media were being accurate. To the extent that this made people look dumb, the intent was mostly to shame them into trying something more effective.


> the intent was mostly to shame them

Yeah I don’t like news that does that, as opposed to giving the best information.


You dropped my words "into trying something more effective". Steering people into treatments found to be more effective is, precisely, giving people the best available information. Ivermectin is great if you have a parasitic infection. It doesn't help against viral infections.


Telling them which treatment is more effective is different than shaming, and also different than making up a misleading name like “horse dewormer”.


This is going to be a huge problem for conferences. While journals have a longer time to get things right, as a conference reviewer (for IEEE conferences) I was often asked to review 20+ papers in a short time to determine who gets a full paper, who gets to present just a poster, etc. There was normally a second round, but often these would just look at submissions near the cutoff margin in the rankings. Obvious slop can be quickly rejected, but it will be easier to sneak things in.


AI conferences are already fucked. Students who are doing their Master's degrees are reviewing those top-tier papers, since there are just too many submissions for existing reviewers.


I think on HN, people waste too much time arguing about the phrasing of the headline, whether it is clickbait, etc. and not enough discussing the actual substance of the article.


You're right, mostly, but the fact remains that the behavior we see is produced by training, and the training is driven by companies run by execs who like this kind of sycophancy. So it's certainly a factor. Humans are producing them, humans are deciding when the new model is good enough for release.


Do you honestly think an executive wanted a chat bot that confidently lies?


Do the lies look really good in a demo when you're pitching it to investors? Are they obscure enough that they aren't going to stand out? If so, no problem.


In practice, yes, though they wouldn't think of it that way because that's the kind of people they surround themselves with, so it's what they think human interaction is actually like.


"I want a chat bot that's just as reliable as Steve! Sure he doesn't get it right all the time and he cost us the Black+Decker contract, but he's so confident!"

You're right! This is exactly what an executive wants to base the future of their business on!


You say that like it’s untrue, but they measurably prefer a lying but confident salesman over one who doesn’t act with that kind of confidence.

This is very slightly more rational than it seems because repeating or acting on a lie gives you cover.


Yes, that is in fact their revealed preference.

Did you have a point?


You use unfalsifiable logic. And you seem to argue that, given the choice, CEOs would prefer not to maximize revenue in favor of... what, affection for an imaginary intern?


Cute straw man.

You must be a CEO.

I'm not arguing anything. I'm observing reality. You're the one who is desperate to rationalize it.


You are declaring your imagined logic as fact. Since I do not agree with the basis upon which you pin your argument, there is no further point in discussion.


You're hallucinating things I did not say.


Given the matrix 'competent/incompetent' / 'sycophant/critic' I would not take it as read that the 'incompetent/sycophant' quadrant would have no adherents, and I would not be surprised if it was the dominant one.


People with immense wealth, connections, influence, and power demonstrably struggle to not surround themselves with people who only say what the powerful person already wants to hear regardless of reality.

Putin didn't think Russia could take Ukraine in 3 days, with literal celebration by the populace, because he only works with honest folks, for example.

Rich people get disconnected from reality because people who insist on speaking truth and reality around them tend to stop getting invited to the influence peddling sessions.


They may say they don't want to be lied to, but the incentives they put in place often inevitably result in them being surrounded by lying yes-men. We've all worked for someone where we were warned to never give them bad news, or you're done for. So everyone just lies to them and tells them everything is on track. The Emperor's New Clothes[1].

1: https://en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes


No, but they like the sycophancy.


Agreed. I used to review lots of submissions for IEEE and similar conferences, and didn't consider it my job to verify every reference. No one did, unless the use of the reference triggered an "I can't believe it said that" reaction. Of course, back then, there wasn't a giant plagiarism machine known to fabricate references, so if tools can find fake references easily the tools should be used.


Cute. But please don't use this, because in addition to making your text useless for LLMs it makes it useless for blind and vision impaired people who depend on screen readers.


And, conversely, it (presumably) has no effect on VLMs using captive browsers and screenshotting to read webpages.


> making your text useless for LLMs

It arguably doesn't even do this. If this is adopted widely, it would only be for current LLMs; newer models could (and would) be trained to detect and ignore zero-width/non-printable characters.
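Stripping such characters in a preprocessing pass is trivial, which is why the trick only works against models that don't bother. A minimal sketch (the set of code points below is an assumption; real obfuscators may use other invisible characters, such as variation selectors):

```python
# Common zero-width code points used to invisibly break up text.
ZERO_WIDTH = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
}

def strip_zero_width(text: str) -> str:
    """Remove zero-width characters, restoring the visible text."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

obfuscated = "h\u200be\u200cl\u200dl\u2060o"
print(strip_zero_width(obfuscated))  # hello
```

Note that blanket stripping isn't lossless in general: ZERO WIDTH JOINER is meaningful in emoji sequences and some scripts, which is part of why this obfuscation also breaks screen readers.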


6,000+, and those machines served many others (back then there were tens of thousands of machines on the Internet, but probably 10x as many that were connected to these by relays that handled email or Usenet traffic).


Also worth remembering that, especially with Internet-connected computers, almost everything was multiuser. You did work on the Internet from a shell on a shared Unix server, not from a laptop.


Serverless remote workspaces as you might call them now.


Well, sort of. RTM underestimated the effect of exponential growth, and thought that he would in effect have an account on all of the connected systems, without permission. He evidently didn't intend to use this power for evil, just to see if it could be done.

He did do us all a service; people back then didn't seem to realize that buffer overflows were a security risk. The model people had then, including my old boss at one of my first jobs in the early 80s, was that if you fed a program invalid input and it crashed, this was your fault, because the program had a specification or documentation and you hadn't complied with it.


Interestingly, it took another 7 years for stack overflows to be taken seriously, despite a fairly complete, widely written-about proof of concept. For years, pretty much everybody slept on buffer overflows of all sorts; if you found an IFS expansion bug in an SUID, you'd only talk about it on hushed private mailing lists with vendor security contacts, but nobody gave a shit about overflows.

It was Thomas Lopatic and 8lgm that really lit a fire under this (though likely they were inspired by Morris' work). Lopatic wrote the first public modern stack overflow exploit, for HPUX NCSA httpd, in 1995. Later that year, 8lgm teased (but didn't publish --- which was a big departure for them) a remote stack overflow in Sendmail 8.6.12 (it's important to understand what a big deal Sendmail vectors were at the time).

That 8lgm tease was what set Dave Goldsmith, Elias Levy, San Mehat, and Pieter Zatko (and presumably a bunch of other people I just don't know) off POC'ing the first wave of public stack overflow vulnerabilities. In the 9-18 months surrounding that work, you could look at basically any piece of privileged code, be it a remote service or an SUID binary or a kernel driver, and instantly spot overflows. It was the popularization with model exploits and articles like "Smashing The Stack" that really raised the alarm people took seriously.

That 7 year gap is really wild when you think about it, because during that time period, during which people jealously guarded fairly dumb bugs, like an errant pipe filter input to the calendar manager service that ran by default on SunOS shelling out to commands, you could have owned literally any system on the Internet, so prevalent were the bugs. And people blew them off!

I wrote a thread about this on Twitter back in the day, and Neil Woods from 8lgm responded... with the 8.6.12 exploit!

https://x.com/tqbf/status/1328433106563588097


This was great to read. Related: Morris also discovered the predictable TCP sequence number bug and described it in his 1985 paper: http://nil.lcs.mit.edu/rtm/papers/117.pdf. Kevin Mitnick describes how he met some Israeli hackers with a working exploit only in 1994 (9 years later) in his book "Ghost in the Wires" (chapter 33). I tried to chronicle the events here (including Jon Postel's RFC, which did not specify how the sequence number should be chosen): https://akircanski.github.io/tcp-spoofing


Mitnick's use of the sequence number spoofing exploit was a super big deal at the time; it's half of the centerpiece of his weird dramatic struggle with Tsutomu Shimomura, whose server he broke into with that exploit (the other half was Shimomura helping use radio triangulation to find him).

Mitnick didn't write any of this tooling --- presumably someone in jsz's circle did --- but it also wasn't super easy to use; spoofing tools of that vintage were kind of a nightmare to set up.


"Your security technique will be defeated. Your technique is no good"


I remember hearing the audio at the time and thinking it was pretty funny back before I realized racism was bad.


So this would be the first stack overflow after Morris's fingerd one (well, the first one that was widely publicized):

https://seclists.org/bugtraq/1995/Feb/109

> we've installed the NCSA HTTPD 1.3 on our WWW server (HP9000/720, HP-UX 9.01) and I've found, that it can be tricked into executing shell commands. Actually, this bug is similar to the bug in fingerd exploited by the internet worm. The HTTPD reads a maximum of 8192 characters when accepting a request from port 80.


Agreed. Some of the big companies seem to be claiming that by going with ReallyBigCompany's AI you can do this safely, but you can't. Their models are harder to trick, but simply cannot be made safe.

