
> Now, it is a reality.

What are the details of this? I'm not playing dumb, and of course I've noticed the decline, but I thought it was a combination of losing the battle with SEO shite and leaning further and further into a 'give the user what you think they want, rather than what they actually asked for' philosophy.



As recently as 15 years ago, Google _explicitly_ stated in their employee handbook that they would NOT, as a matter of principle, include ads in the search results. (Source: worked there at that time.)

Now, they do their best to deprioritize and hide non-ad results...


Nobody can know, but I think intelligence is fairly clearly possible without signs of sentience that we would consider obvious and indisputable. The definition of 'intelligence' is bearing a lot of weight here, though, and some people seem to favour a definition that makes 'non-sentient intelligence' a contradiction.

As far as I know, and I'm no expert in the field, there is no known example of intelligence without sentience. Actual AI is basically algorithms and statistics simulating intelligence.

Definitely a definition / semantics thing. If I ask an LLM to sketch the requirements for life support for 46 people, mixed ages, for a 28-month space journey… it does pretty well, “simulated” or not.

If I ask a human to do that and they produce a similar response, does it mean the human is merely simulating intelligence? Or that their reasoning and outputs were similar but the human was aware of their surroundings and worrying about going to the dentist at the same time, so genuinely intelligent?

There is no formal definition to snap to, but I’d argue “intelligence” is the ability to synthesize information to draw valid conclusions. So, to me, LLMs can be intelligent. Though they certainly aren’t sentient.


Can you spell out your definition of 'intelligence'? (I'm not looking to be ultra pedantic and pick holes in it -- just to understand where you're coming from in a bit more detail.) The way I think of it, there's not really a hard line between true intelligence and a sufficiently good simulation of intelligence.

I would say that "true" intelligence will allow someone/something to build a tool that never existed before, while intelligence simulation will only allow someone/something to reproduce tools that are already known. I would draw a distinction between someone able to use all their knowledge to find a solution to a problem using tools they know of, and someone able to discover a new tool while solving the same problem. I'm not sure the latter exists without sentience.

I honestly don't think humans fit your definition of intelligent. Or at least not that much better than LLMs.

Look at the history of human technology... it is all people making minor tweaks on what other people did. Innovation isn't the result of individual humans so much as it is the result of the collective of humanity over history.

If humans were truly innovative, should we not have invented, for instance, at least a stable way of organizing society and economics by now? If anything surprises me about humans, it is how "stuck" we are in the mold of what other humans do.

Circulate all the knowledge we have over and over, throw in some chance, add reasoning skills of the kind LLMs demonstrate every day in coding, run millions of instances (most of which never innovate anything, but some do), and add a feedback mechanism -- that seems like the history of human innovation to me, and it doesn't seem to require anything LLMs clearly lack. Except, of course, being plugged into history and the world the way humans are.


The most serious tournaments are played in person, with measures in place to prevent (e.g.) a spectator with a chess engine on their phone communicating with a player. For online play, it's kind of like the situation for other online games; anti-cheat measures are very imperfect, but blatant cheaters tend to get caught and more subtle ones sometimes do. Big online tournaments can have exam-style proctoring, but outside of that it's pretty much impossible to prevent very light cheating -- e.g. consulting a computer for the standard moves in an opening is very hard to distinguish from just having memorized them. The sites can detect sloppy cheating, e.g. a player using the site's own analysis tools in a separate tab, but otherwise they have to rely on heuristics and probabilistic judgments.


> You could make the same argument about compilers: whatever code you wrote, your compiler may produce assembly instructions in a nondeterministic way.

Bit of a stretch, I think, because the compiler guarantees it will follow the language spec. The LLM will be influenced by your spec but there are no guarantees.


> the same place an increase in power comes from when you use a lever.

I don't understand the analogy. A lever doesn't give you an increase in power (which would be a free lunch); it gives you an increase in force, in exchange for a decrease in movement. What equivalent to this tradeoff are you pointing to?


> "Value" here is obviously subjective relative to the beholder

Agreed, but the confusing part is that you don't seem to be saying "to me, those services only provide X amount of value, and I'd rather have $Y than that" -- you seem to be saying that if 1password and Fastmail were more expensive, you might be willing to pay the asking price for some of those services you currently consider bad value.


You’re absolutely right! If Fastmail or 1password charged more for their services, the bar for what I consider good value would be different. Those two services are _absolutely_ worth more to me than they currently charge and I’d be happy to pay.

But they don’t, which is why I use them as my baseline. If my email provider and password manager - two services with damn near infinite vendor lock-in - can do it, no one has any excuse.


Which is your choice, obviously, but you can see why people would find it strange -- it seems like you're potentially just leaving value on the table for an arbitrary reason. The price of Fastmail doesn't affect the value you would get from some unrelated product, so surely that other product is either worth the asking price or not worth it regardless of how much Fastmail is charging.


What do you mean, infinite vendor lock-in? Just export your vault and import it into another password manager; switching email providers is not that hard either (assuming you use your own domain).


> I would also say that the Always Sunny gang really aren't sympathetic either, but it's a para-social trick of having spent so much time "together" with them over so many episodes.

I'd say they're charismatic and funny, but irredeemably bad people. It was refreshing that the show didn't shy away from that; in lots of comedies, the characters are basically psychopathic if taken literally, yet we're still supposed to like them and to see them as having hearts of gold if they make the occasional nice gesture. Always Sunny just leaned hard into portraying them as terrible people who were only 'likable' in the shallow sense needed to make the show fun to watch rather than an ordeal.

But I think the creators eventually lost sight of that -- I remember the big serious episode they did with Mac's dance, and I just find it baffling because in order to buy into the emotion we were evidently supposed to feel, we needed to take the characters seriously. And as soon as we take the characters seriously we are (or should be) overwhelmingly aware that we're watching people who have proven over the previous umpteen years to be irredeemable sociopaths, which kind of takes the edge off the heartwarming pride story.


Could it be useful as a first line of defence? A failed initial reproduction would not be seen as disqualifying, but it would bring the paper to the attention of more senior people who could try to reproduce it themselves. (Maybe they still wouldn't bother, but hopefully they'd at least be more likely to.)


Your max price should be the price such that you're indifferent between buying the item at that price and not buying it at all.

At a shop, usually you're paying less than the maximum you'd be willing to pay, because the shop's prices are fixed and it would be a big coincidence if the price they set happened to match your max price exactly.[1] So even if we model you as homo economicus, it's normal that you're almost always fine with paying $X + $0.01.

In the case where $X really is your max price (i.e. it's right at your threshold of indifference), the idea of rejecting $X + $0.01 seems less silly. You were already very close to deciding $X was too much, so you're probably feeling ambivalent about making the purchase, and the trivial nudge of an extra cent being added to the price might as well be what pushes you over the edge.

[1] There are exceptions, e.g. when you have a negligible preference between brands A and B, so you're defaulting to brand A because the prices are exactly the same, but you would buy B if it were marginally cheaper. But that doesn't affect the main point here.


I don't see why I would buy something if I'm indifferent to whether I buy the thing.


So it should be the highest price at which you're not indifferent -- where you marginally prefer to buy it. The point is that you're right at the threshold where it's just barely worth it to you.


The gap between total indifference and wanting something bad enough to bid on it is always going to be more than $.01.


I reckon that's empirically false. Shops set prices like $499.99 for a reason.

(And it has to be theoretically false, otherwise $X is equivalent to $X + $0.01 for all X, and so if you'd buy something at 1c you'd buy it for the contents of your bank account.)

If you still dispute this, you need to try to explain how a larger price difference can affect your decision. If you'd happily place a $1 bid, and you'd definitely not place a $100 bid, and a 1c difference could never deter you from placing a bid, then... well, how is that possible?
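To make the induction explicit, here's a toy sketch (Python; the "premise" encoded here is your claim that a 1c difference could never deter you -- purely illustrative, not a model of real psychology):

    # Premise under dispute: a $0.01 increase can never flip a "yes" into
    # a "no". Starting from a price you'd happily bid ($1.00) and stepping
    # up one cent at a time, that premise alone commits you to every
    # higher price, including $100.00.
    willing = True               # you'd happily place a $1.00 bid
    price_cents = 100
    while price_cents < 10_000:      # walk up to $100.00
        price_cents += 1
        willing = willing and True   # the extra cent never flips the decision
    print(willing)  # True: the premise says you'd place the $100.00 bid too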


Regarding the last part: it's simple -- $1.01 is less than $100.

This process doesn't work endlessly. You can't just add $.01 a billion times and I'd still pay it. But it works once or twice.

Shops set prices like $499.99 due to funny psychological effects: $499.99 is still a price "in the 400s" while $500 is "in the 500s". Nobody sits down and thinks logically about it and concludes that no, the $.01 difference between $499.99 and $500.00 crosses the line. But people see $499.99 and the brain initially goes "oh, it's only 400-something".


Are you:

- agreeing there must be some threshold such that if the price is $X then you will buy(/bid on) the item, but if the price is $X + $0.01 then you won't;

- but maintaining that in a case where you have already decided to buy/bid and the price then rises by $0.01, you will always go ahead and pay the extra cent (provided this hasn't already happened a bunch of times)?

If so, then I don't see the original problem. Do your best to estimate X (or, more specifically, the value of X you actually endorse as your 'true' valuation), and put that in as your maximum bid. If you get the item at $X you'll be marginally pleased; if you get it for less then you'll be more pleased; and if you miss out on it then you shouldn't mind, as you knew it was only going to be just barely worth it at $X.
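To put that strategy in concrete terms, here's a rough sketch (assuming an eBay-style proxy/second-price auction; the exact tie-breaking and bid-increment rules vary by site and are simplified away here):

    # Illustrative only: you submit max_bid once and walk away; the
    # outcome then depends only on the highest rival bid.
    def outcome(max_bid: float, best_rival: float) -> str:
        if best_rival < max_bid:
            return "win below your max -- more pleased"
        if best_rival == max_bid:
            return "right at your threshold -- fine either way"
        return "lose -- also fine, it went for more than it was worth to you"

    print(outcome(55.00, 42.50))  # win below your max -- more pleased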

If you're actually disagreeing with the first point, then you still need to explain how that can make sense. It's coherent to say that in practice, after making the decision to buy at a given price, you would always accept a 1c price rise, but at some point between the first 1c rise and the billionth you'd tell the guy to piss off. But that's not the same as saying the actual value of the item, separate from the emotions involved in the purchase process, is somehow indeterminate. If it's not worth it at $1, and it's worth it at $100, but 1c can never take it from "worth it" to "not worth it", then what?


> are you

> - agreeing there must be some threshold such that if the price is $X then you will buy(/bid on) the item, but if the price is $X + $0.01 then you won't;

No, I'm not. If I will buy an item for price $X, I will buy the item for the price $X + $.01. The decision to purchase something is more complex and cannot be encapsulated as one single dollar value.

I think something your model fails to account for is: there is friction associated with a purchase. I will not necessarily go through the process of buying something whose "value" is $0.10 even if its price is $0.09, because there is friction to making a purchase which that $0.01 profit doesn't cover.

As an example: I recently played a Pokemon ROM hack where there was an NPC selling a nugget for 4999. You can sell the nugget for 5000. That's 1 coin profit; objectively a good trade, right? But going through the process of purchasing something isn't free. So in spite of what your economic models may suggest, I did not stop everything I was doing and spend the rest of the game buying nuggets for 4999 and selling them for 5000, because that would've been boring and my time has value.

If I've already gone through a lot of the process to decide to buy something at a certain price (which includes doing research to find out that the thing suits my needs, researching how the market looks for that category of thing, then bringing the item to the cashier or engaging in the eBay auction or contacting a seller), then I've already spent some not-insignificant amount of resources on the purchasing process. A $0.01 price increase will never be enough to stop me from completing that purchase, because $0.01 is not worth going through the whole process again.

If I'm already at the point where I want to bid on an item at $X, then I have spent more than $0.01 in effort researching things to bid on, so I would also bid $X + $0.01.


> If I've already gone through a lot of the process to decide to buy something at a certain price [...] then I've already spent some not-insignificant amount of resources on the purchasing process.

Yes, that's part of what I was trying to account for with my second bullet point. But before you've made that initial decision, there must be some price that would cause you to make it a 'yes' and some marginally higher price that would cause you to make it a 'no'.

This value obviously won't be totally constant across time -- it will vary with your mental state. But at any given time (and for any given roll of the mental dice, if we're assuming there's some true indeterminism here), it must exist. So when we're translating from "what's the maximum I would pay" to "what should I bid", we can imagine that we're in our most rational and clear-thinking frame of mind, aren't seized by any strange impulses, and so on.

The time and effort of researching a different item also has a value that could be pinned down in a similar way. So it doesn't fundamentally change the arguments here; if product A would be worth $X in a vacuum, but you'd happily pay $Y to avoid going through the research process again, then you should bid $X+Y.
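With made-up numbers (the $50 and $5 below are just placeholders for your own estimates):

    # X: what the item is worth to you in a vacuum (hypothetical figure)
    item_value = 50.00
    # Y: what you'd pay to avoid redoing the research (also hypothetical)
    redo_research_cost = 5.00
    max_bid = item_value + redo_research_cost
    print(f"bid up to ${max_bid:.2f}")  # bid up to $55.00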


Before I have made that initial decision, and before I have invested resources into evaluating what I think the value of a product is, I do not have a price in mind. Deciding on a price I think is fair for a product takes effort. The more accurately I want to determine it, the more effort it is.

Could there exist some hypothetical subjective value? I mean maybe. But not one that I have knowledge of, so it's not something that can even hypothetically affect my behavior. The only time at which I could possibly be aware of my own subjective value judgement of a product necessarily has to be after I have invested time to evaluate it.


So what is the problem? You've done the research, and your best estimate for the value is $X. And if you had to put a dollar value on avoiding doing the research again, it would be $Y. You put in a maximum bid of $X+Y, walk away from the auction, and come back to see that you won at a lower price (great!), won at your max price (fine), or lost (also fine; $X+Y was right at the threshold of what you considered worth paying, even accounting for the extra research you'll now have to do. Maybe if you look at the final price and see that you lost by 1c, you'll feel annoyed... but if that's anything more than an irrational emotional response, then why didn't you bid 1c more in the first place? You were free to enter any number you wanted, and you knew in advance that this might happen. If it is just an irrational emotional response, you can avoid that next time by not looking at the final price unless you win.)


Neither $X nor $Y are going to be hard dollar values. If I semi-arbitrarily pick some $X and some $Y, put in $X+$Y as my max bid, and lost the item due to $0.01, I would be annoyed not due to some irrationality but because $X and $Y were never cent-accurate in the first place.


They'll never be cent-accurate, but if you've done a decent job then they should be in your zone of rough indifference. Then you can simply avoid that annoyance by not looking at the final price, safe in the knowledge that at worst you may have missed out on a marginally worthwhile purchase by marginally underestimating its value. If that's not the case, you didn't bid enough in the first place.

(But also, how is the annoyance not irrational? Your estimates weren't cent-accurate, but they were just as likely to be slightly too high as slightly too low. And you haven't learned anything new about the true values -- unless you take your emotional reaction to be new evidence. For your emotional reaction to be new evidence, it has to be somewhat unpredictable, otherwise you could have fully factored it in in advance. But you seem to be saying that you're predictably going to be annoyed by a 1c loss.)


Adding a single grain of sand to a small pile of sand never turns it into a big pile of sand, yet big piles of sand exist... well, how is that possible? https://en.wikipedia.org/wiki/Sorites_paradox


Yes of course I know the Sorites paradox (and I can give my take on it if you are interested), but what point are you making in the context of this discussion?


That's not how paraphrasing works. They probably intentionally held back from guaranteeing an interview, for various reasons. One that seems obvious to me is that with the bar set at "Claude Opus 4.5's best performance at launch", it's plausible that someone could meet it by feeding the problem into an LLM. If a bunch of people do that, they won't want to waste time interviewing them all.

