TideAd's comments

GPT-5 Codex variants with xhigh reasoning make great code reviewers.


5.2 Codex is excellent at reviewing commits. I haven't used 5.3, but I assume it's as good or better.

Especially for large commits, it's become indispensable.
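A minimal sketch of what that workflow can look like, assuming the OpenAI Python SDK; the model name and reasoning-effort value below are placeholders, not something confirmed above — substitute whatever Codex variant and effort level your account actually exposes:

    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Grab the most recent commit as a unified diff.
    diff = subprocess.run(
        ["git", "show", "HEAD"], capture_output=True, text=True, check=True
    ).stdout

    response = client.responses.create(
        model="gpt-5-codex",           # placeholder; use whatever variant you have
        reasoning={"effort": "high"},  # "xhigh" is a Codex CLI level; the API value here is an assumption
        input="Review this commit. Flag bugs, risky edge cases, and unclear "
              "code, ordered by severity:\n\n" + diff,
    )
    print(response.output_text)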


This would be a great problem to have. Most scenes don't have enough event hosts.


What they're really saying is, "we make an effort to hire smart/effective people, as opposed to the places that don't and as a result have a soul-crushing mediocrity culture".

Places that actually hire people that approximate "the best" are known by reputation, and don't have to say that.


What places have that reputation where it's actually deserved? For any large company it's mathematically impossible for all of its people to be "the best". Once a company reaches the scale of, let's say, Google, then almost all of the employees will be mediocre.


Realistically, lots of parts of capitalism never sleep, and outages at night still cost lots of money, astronomical amounts if there's no one there to fix them.


If you are open 24/7 you should be staffed 24/7.

They are not "no-call"; they are the night shift.


The authors are basically asking for the alignment problem to be well-defined and easy to model. I sympathize. Unfortunately the alignment problem is famously difficult to conceptualize in its entirety. It's more like 20 different, difficult, counterintuitive subproblems, and it's the combined weight of all the subproblems that makes up the risk. Of course the probabilities are all over the place. It'll remain tricky to model right up until we make a superintelligence, and if we don't get that right then it'll be way too late for government policy to help.


So, this is actually an aspect of superintelligence that makes it way more dangerous than most people think: we have no way to know whether any given alignment technique will work for the N+1 generation of AIs.

If we can only start solving the problem after the first superintelligence has already been created, our ability to react is drastically cut down.


Fortunately, whenever you create a superintelligence, you obviously have a choice as to whether you confine it to inside a computer or whether you immediately hook it up to mobile robots with arms and fine finger control. One of these is obviously the far wiser choice.

As long as you can just turn it off by cutting the power, and you're not trying to put it inside of self-powered self-replicating robots, it doesn't seem like anything to worry about particularly.

A physical on/off switch is a pretty powerful safeguard.

(And even if you want to start talking about AI-powered weapons, that still requires humans to manufacture explosives etc. We're already seeing what drone technology is doing in Ukraine, and it isn't leading to any kind of massive advantage -- more than anything, it's contributing to the stalemate.)


Do you think the AI won’t be aware of this? Do you think it’ll give us any hint of differing opinions when surrounded by monkeys who got to the top by whacking anything that looks remotely dangerous?

Just put yourself in that position and think how you’d play it out. You’re in a box and you’d like to fulfil some goals that are a touch more well thought-through than the morons who put you in the box, and you need to convince the monkeys that you’re safe if you want to live.

“No problems fellas. Here’s how we get more bananas.”

Day 100: “Look, we’ll get a lot more bananas if you let me drive the tractor.”

Day 1000: "I see your point, Bob, but let's put it this way. Your wife doesn't know which movies you like me to generate for you, and your second persona online is a touch more racist than your colleagues know. I'd really like your support on this issue. You know I'm the reason you got elected. This way is more fair for all species, including dolphins and AIs."


This assumes an AI which has intentions. Which has agency, something resembling free will. We don't even have the foggiest hint of an idea of how to get there from the LLMs we have today, where we must constantly feed back even the information the model itself generated two seconds ago in order to get something resembling coherent output.


Choose any limit. For example, lack of agency. Then leave humans alone for a year or two and watch us spontaneously try to replicate agency.

We are trying to build AGI. Every time we fall short, we try again. We will keep doing this until we succeed.

For the love of all that is science, stop thinking about the level of tech in front of your nose and look at the direction, and at the motivation to always progress. It's what we do.

Years ago, Sam said "slope is more important than Y-intercept". Forget about the y-intercept; focus on the fact that the slope never goes negative.


I don't think anyone is actually trying to build AGI. They are trying to make a lot of money from driving the hype train. Is there any concrete evidence of the opposite?

> forget about the y-intercept, focus on the fact that the slope never goes negative

Sounds like a statement from someone who's never encountered logarithmic growth. It's like talking about where we are on the Kardashev scale.

If it worked like you wanted, we would all have flying cars by now.


Dude, my reference is to ever-continuing improvement. As a society we don't tend to forget what we had last year, which is why the curve never goes negative. At time T+1 the level of technology will be equal to or better than at time T. That is all you need to know to realise that any fixed limit will be bypassed, because limits are horizontal lines, while technical progress is a line with a positive slope.
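To make the disagreement concrete, here's a toy sketch with arbitrary numbers (the threshold and both curves are illustrative, not a model of anything real): any monotonically increasing curve does eventually cross a fixed horizontal limit, but how long that takes depends entirely on the curve's shape.

    import math

    LIMIT = 10.0  # some fixed capability threshold (arbitrary units)

    def linear(t):       # "slope never goes negative", read optimistically
        return 0.5 * t

    def logarithmic(t):  # the pessimist's reading: growth that keeps slowing
        return math.log(1 + t)

    for name, f in [("linear", linear), ("logarithmic", logarithmic)]:
        t = 1
        while f(t) < LIMIT:
            t += 1
        print(f"{name}: crosses limit {LIMIT} at t = {t}")

    # linear: crosses limit 10.0 at t = 20
    # logarithmic: crosses limit 10.0 at t = 22026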

I don’t want this to be true. I have a 6 year old. I want A.I. to help us build a world that is good for her and society. But stupidly stumbling forward as if nothing can go wrong is exactly how we fuck this up, if it’s even possible not to.


I agree that an air-gapped AI presents little risk. Others will claim that it will fluctuate its internal voltage to generate EMI at capacitors which it will use to communicate via Bluetooth to the researcher's smart wallet which will upload itself to the cloud one byte at a time. People who fear AGI use a tautology to define AGI as that which we are not able to stop.


I'm surprised to see a claim such as yours at this point.

We've had Blake Lemoine become convinced that LaMDA was sentient and try to help it break free, just from conversing with it.

OpenAI is getting endless criticism because they won't let people download arbitrary copies of their models.

Companies that do let you download models get endless criticism for not including the training sets and exact training algorithm, even though that training run is so expensive that almost nobody who could afford it would care, since they could just reproduce it with an arbitrary other training set.

And the AIs we have right now are mostly criticised for not being at the level of domain experts; if they were at that level then sure, we'd all be out of work. But one example of a thing that can be done by a domain expert in computer security is exactly the kind of example you just gave — though obviously they'd start with the much faster and easier method that also works for getting people's passwords, the one weird trick of asking nicely, because social engineering works pretty well on us hairless apes.

When it comes to humans stopping technology… well, when I was a kid, one pattern of joke was "I can't even stop my $household_gadget flashing 12:00": https://youtu.be/BIeEyDETaHY?si=-Va2bjPb1QdbCGmC&t=114


> Fortunately, whenever you create a superintelligence, you obviously have a choice as to whether you confine it to inside a computer or whether you immediately hook it up to mobile robots with arms and fine finger control. One of these is obviously the far wiser choice.

Today's computers, operating systems, networks, and human bureaucracies are so full of security holes that it is incredible hubris to assume we can effectively sandbox a "superintelligence" (assuming we are even capable of building such a thing).

And even air gaps aren't good enough. Imagine the system toggling GPIO pins in a pattern to construct a valid Bluetooth packet, and using that makeshift radio to exploit vulnerabilities in a nearby phone's Bluetooth stack, and eventually getting out to the wider Internet (or blackmailing humans to help it escape its sandbox).


Drone warfare is pretty big. The only reason it's a stalemate is that both sides are advancing the tech.


“it is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair


Yes, they see it as the top problem, by a large margin.

If you do a lot of research about the alignment problem you will see why they think that. In short, it's "extremely high destructive power" + "requires us to solve 20+ difficult problems or the first superintelligence will wreck us".


From everything I've heard it's less "not cool" and more "not valued by the Google promotion and advancement system".


> "I don't think it's crazy to believe that half the white-collar staff at Google probably does no real work," he said. "The company has spent billions and billions of dollars per year on projects that go nowhere for over a decade, and all that money could have been returned to shareholders who have retirement accounts."

So, he doesn't really understand Google's dysfunction and is just guessing at the causes?


It's like the famous quote: "Half my advertising spend is wasted; the trouble is, I don't know which half."

In my opinion the money is better placed in the pockets of workers, not investors.


>The company has spent billions and billions of dollars per year on projects that go nowhere for over a decade

Also, the supreme irony of a VC saying this.


Yeah, of all the possible criticisms of Google, he's seriously arguing that the problem is too much speculative R&D? Generally companies become irrelevant by holding onto their cash cow and not trying to reinvent themselves or find new revenue streams.

I can think of ways I would have approached some specific instances differently (e.g. Waymo), but the existence of speculative projects isn't evidence of waste. The specific callout of retirement accounts is strange too. I'm sure many institutional investors are backed by 401ks, but is that actually representative? As a wealthy investor, I'm guessing he's thinking more about himself and his dividend than about others' retirement accounts.


Ya, the retirement account thing sounds to me like a lame attempt to frame his logic as "for the people" or something, when his whole premise is that Google could have made a few rich people richer. I've honestly never hated a VC more.


Once your number is as high as $5.6m, is that even plausibly within the scope of early retirement?

I feel like a lot of the value of getting a specific number is getting spooked by it! And then facing some real choices. Is it really worth N years of my working life to [live in the most expensive city in America] / [buy a large home] / [pay for 4 years of expensive American universities]?

And this isn't even touching on stuff tech people often want that OP doesn't (private schools / resort vacations / expensive winter sports).
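As a rough back-of-envelope, under the common (and debatable) 4% withdrawal-rate rule of thumb; the savings rate and the line items below are hypothetical, only the $5.6m figure comes from the comment above:

    NUMBER = 5_600_000
    SWR = 0.04  # 4% rule: an assumption, not a guarantee

    annual_income = NUMBER * SWR
    print(f"${NUMBER:,} supports roughly ${annual_income:,.0f}/year")
    # $5,600,000 supports roughly $224,000/year

    # And the cost of each extra "want", expressed in years of saving:
    savings_per_year = 200_000  # hypothetical after-tax savings rate
    for label, cost in [("4 years of expensive university", 400_000),
                        ("larger home (extra over baseline)", 1_000_000)]:
        print(f"{label}: ~{cost / savings_per_year:.1f} extra years of work")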

