gitfan86's comments | Hacker News

Yes, AGI was here at AlphaGo. People don't like that because they think it should have generalized outside of Go, but when you say AGI was here at AlphaZero, which can play other games, they again say it's not general enough. At this point it seems unlikely that AI will ever be general enough to satisfy the sceptics for the reason you said. There will always be some domain that requires training on new data.


You're calling an apple an orange and complaining that everyone else won't refer to it as such. AGI is a computer program that can understand or learn any task a human can, mimicking the cognitive ability of a human.

It doesn't have to actually "think" as long as it can present an indistinguishable facsimile, but if you have to rebuild its training set for each task, that does not qualify. We don't reset human brains from scratch to pick up new skills.


I'm calling a very small orange an orange, and people are saying it isn't a real orange because it should be bigger. So I show them a bigger orange, and they say it's still not big enough. And that continues forever.



Maybe not yet, but what prevents games from getting more complicated and matching rich human environments, requiring rich human-like adaptability? Nothing at all!


But AlphaZero can't play those richer games so it doesn't really matter in this context.


Famous last words!


"AI will ever be general enough to satisfy the sceptics for the reason you said"

Also

People keep thinking "General" means one AI can "do everything that any human can do everywhere all at once".

When really, humans are also pretty specialized. Humans have Years of 'training' to do a 'single job'. And they do not easily switch tasks.


>When really, humans are also pretty specialized. Humans have Years of 'training' to do a 'single job'. And they do not easily switch tasks.

What? Humans switch tasks constantly and incredibly easily. Most "jobs" involve doing so rapidly many times over the course of a few minutes. Our ability to accumulate knowledge of countless tasks and execute them while improving on them is a large part of our fitness as a species.

You probably did so 100+ times before you got to work. Are you misunderstanding the context of what a task is in ML/AI? An AI does not get the default set of skills humans take for granted; it's starting as a blank slate.


You're looking at small tasks.

You don't have a human spend years getting an MBA, then drop them in a Physics Lab and expect them to perform.

But that is what we want from AI: to do 'all' jobs as well as any individual human does that one job.


That is a result we want from AI, but it is not the exhaustive definition of AGI.

There are steps of automation that could fulfill that requirement without ever being AGI - it’s theoretically possible (and far more likely) that we achieve that result without making a machine or program that emulates human cognition.

It just so happens that our most recent attempts are very good at mimicking human communication, and thus are anthropomorphized as being near human cognition.


I agree.

I'm just making a point about the "General" in Artificial General Intelligence.

Humans are also not as "general" as we assume in these discussions. Humans are also limited in a lot of ways, narrowly trained, make stuff up, etc...

So even a human isn't necessarily a good example of what AGI would mean. A human is not a good target either.


Humans are our only model of the type of intelligence we are trying to develop; any other target would be a fantasy with no control to measure against.

Humans are extremely general. Every single type of thing we want an AGI to do is a type of things that a human is good at doing, and none of those humans were designed specifically to do that thing. It is difficult for humans to move from specialization to specialization, but we do learn them with only the structure to "learn, generally" being our scaffolding.

What I mean by this is that we do want AGI to be general in the way a human is. We just want it to be more scalable. Its capacity for learning does not need to be limited by material issues (i.e. physical brain matter constraints), time, or time scale.

So where a human might take 16 years to learn how to perform surgery well, and then need another 12 years to switch to electrical engineering, an AGI should be able to do it the same way, but with the timescale only limited by the amount of hardware we can throw at it.

If it has to be structured from the ground up for each task, it is not a general intelligence, it's not even comparable to humans, let alone scalable beyond us.


So find a single architecture that can be taught to be an electrical engineer or a doctor.

Where today those are being done by specialized architectures, models, and combinations of methods.

Then that would be a 'general' intelligence, the one type of model that can do either, trained to be an engineer or a doctor. And like a human, once trained, they might not do the other job well. But they did both start with the same 'tech', like humans all have the same architecture in the 'brain'.

I don't think it will be an LLM, it will be some combo of methods in use today.

Ok. I'll buy that. I'm not sure everyone is using 'general' in that way. I think more often people think of a single AI instance that can do everything/everywhere/all at once. Be an engineer and a doctor at the same time. Since it can do all the tasks at the same time, it is 'general'. Since we are making AIs that can do everything, it could have a case statement inside to switch models (half joking). At some point all the different AI methods will be incorporated together and will appear even more human/general.


Right, but even at that point the sceptics will still say that it isn't "truly general" or that it's unable to do X in the same way a human does. Intelligence, like beauty, is in the eye of the beholder.


But if humans are so bad, what does that say about a model that can't even do what humans can?

Humans are a good target since we know human intelligence is possible; it's much easier to target something that is possible rather than some imaginary intelligence.


No human ever got good at tennis without learning the rules. Why would we not allow an AI to also learn the rules before expecting it to get good at tennis?


> Why would we not allow an AI to also learn the rules before expecting it to get good at tennis?

The model should learn the rules; don't make a model based on the rules. When you make a model based on the rules, it isn't a general model.

Human DNA isn't made to play tennis, but a human can still learn to play it. The same should go for a model: it should learn the game; the model shouldn't be designed by humans to play tennis.


So you're saying AI can be incompetent at a grander scale. Got it.


Yes. It can be as good and as bad as a human. Humans also make up BS answers.


Why is it real GDP when a government contractor makes a PDF report, but isn't real GDP when I make a poem?


Because money paid to the government contractor results in a person (persons?) getting paid along the way and then a large part of that flows into the real economy via consumption / spending.

In your $50 trillion poem example, no money can possibly flow into the real economy because you simply do not have $50 trillion to pay anyone - you're just describing a wash trade of a worthless $50 trillion IOU note, not unlike an NFT or crypto memecoin. Best case is that the poem is worth something non-negative.


Anyone who is afraid of AI should ask themselves whether we should ban printers so that secretaries can be hired to use typewriters. If not, why do people who want a secretary job not deserve one?


This seems like the most likely answer to the Fermi paradox. Our assumptions about time and space are wrong.

If we understood them, we wouldn't be looking this way.


How can you have a functional democracy where the person who was elected to lead the executive branch and who also campaigned on increasing efficiency is not allowed to do that? Especially after the judicial branch has OKed it.


Counterpoint - how can you have a functional democracy when citizens(?) have such a poor understanding of our system of government?

And by "the judicial branch has OK'd it", are you referring to the President's immunity from prosecution for official acts?

That is fundamentally different than "presidents have the power to do whatever they want".


There was a lawsuit to stop the firings. Just because you don't like it doesn't make it unconstitutional.

https://www.cbsnews.com/amp/news/federal-judge-wont-stop-tru...


You do not know what you're talking about.

- This ruling doesn't say the government isn't breaking the law, it says the people suing didn't go through the right channels.

- This ruling is not the government winning the case, or the plaintiffs losing the case. Plaintiffs asked for a restraining order and didn't get one.

- There are about 80 different lawsuits against Elon/DOGE right now, for various actions. Multiple judges have granted restraining orders against the government because they think the plaintiffs are likely to prevail in their claim.


So when Biden lost lawsuits he was also being a dictator?


"Also" being a dictator? You're the first to use that word.


You said "presidents have the power to do whatever they want". Do you not understand what that means?


He is allowed to increase efficiency by the means available within the law (including, where a change in law would make things more efficient, presenting a proposal for such a change to Congress).

And the judicial branch hasn't okayed what he has tried to do, which is why there have been multiple orders issued by multiple courts against his stopping of payments.


"How can you have a functional democracy without a king" is what you seem to be asking. Do you see the problem?


Did you miss the election?


Yes, and Trump wasn't elected King, he was elected to an office whose duty is to see to it that the laws are faithfully executed.


When did he promise that during the campaign trail?

I keep hearing all these things about how the voters voted for this and that but uh when did the candidate promise those items?


> the judicial branch has OKed it.

This is not true as evidenced by your own link below.


Trump has zero interest in stopping waste or corruption.

He is firmly pro-corruption.

https://www.cbsnews.com/news/trump-fcpa-anti-bribery-law-exe...


HN has attracted a lot of Woke people over the years, but also still has a lot of OG tech nerds.


Genuinely curious what makes you see those sets as disjoint? YMMV but as an older person I've always associated socially and politically advanced thinking with the mindset of the "original" pioneers in tech. Shallow money-grubbing, fame obsession, fragile egos... that archetype came much later.


If you are genuinely curious I would suggest reading Paul Graham's essay on wokeness.


Cheers, ok, but I couldn't locate a link between generation and outlook that satisfied me. The essay is a nice blast of pop psychology about "types of people", a worthy attack on political persecution, rabid ideology and hive-minds, intolerance, and a weaker attack on the idea of "performativeness" (so avoiding a frontal attack on "social justice"). But in this way Graham divides "wokeness" from the virtues of thoughtful system-theorists and original tech-optimists I mentioned, bracketing out "woke" as mere despicables and rebels without a cause. Any admirable social justice aims just evaporate in this treatment. But isn't this what we're in now, just with a pendulum swing? All the new-breed technofascists just want to "make the world a better place", right?


I can simplify it for you.

In 2007 the average person on HN was considered progressive or left leaning because they generally felt that same sex marriage should be legal.

Those same people in 2025 are now considered "far right" by the left because they believe that women have a right to say they feel uncomfortable being exposed to penises in the locker room when playing college sports.


[flagged]


Might this have something to do with those hacker events actively excluding those with dissenting opinions? That many social institutions have been taken over by the woke crowds isn't news.


Your numbers are wrong. The last quarter's profit was $19B and the projected profit is $21B next quarter. That is $84B/year in profit with zero growth.

If META, STARGATE, xAI, etc. all increased spending rapidly, you could get to a $200B profit rate in 2026.

3T / 200B = 15

That means they could return a ~6.7% dividend yield, which is higher than 10-year bonds, so it is in no way overvalued by historical standards.
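
A quick sketch of that arithmetic (the ~$3T market cap and the $200B profit rate are the assumptions above, not reported figures):

    # Back-of-the-envelope check of the valuation math above.
    # Both inputs are assumptions from this comment, not reported figures.
    market_cap = 3_000e9       # ~$3T market cap
    annual_profit = 200e9      # hypothetical 2026 profit run rate

    pe_ratio = market_cap / annual_profit        # 15.0
    earnings_yield = annual_profit / market_cap  # ~0.067 -> ~6.7% if fully paid out as a dividend

    print(f"P/E: {pe_ratio:.1f}, implied yield: {earnings_yield:.1%}")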

All that to say, I sold all my NVDA last month because I don't think Wall St. is buying that everyone is going to actually invest that much.


You're right about the profit - I took last fiscal year. Still, it doesn't change anything in what I wrote.

What you just wrote is "IF the biggest companies on earth, and the US government decide to spend all of their money on a single chip maker, then you could get to 200B profit rate in 2026". I won't disagree with that.


What has changed is the perception that people like OpenAI/MSFT would have an edge on the competition because of their huge datacenters full of NVDA hardware. That is no longer true. People now believe that you can build very capable AI applications for far less money. So the perception is that the big guys no longer have an edge.

Tesla had already proven that to be wrong. Tesla's Hardware 3 is a 6-year-old design, and it does amazingly well on less than 300 watts. And that was mostly trained on an 8k cluster.


The perception only makes sense if it is "that's it, pack up your stall" for AI.

I think what really happened is day to day trading noise. Nothing fundamentally changed, but traders believed other people believed it would.


I mean, I think they still do have an edge - ChatGPT is a great app and has strong consumer recognition already, very hard to displace... and MSFT has a major installed base of enterprise customers who cannot readily switch cloud / productivity suite providers. So I guess they still have an edge, it's just more of a traditional edge.


Microsoft doesn't have to use OpenAI though; they could swap that out underneath for the business applications.


And it is even questionable whether "bundling" AI into every product is legal with respect to antitrust law (e.g. the IE case).


Yes, it is still a valid business model and I would expect MSFT to continue to make profits.


As someone who bought NVDA in early 2023 and sold in late 2024 I can say this is wrong.

There was never a question of whether NVDA hardware would have high demand in 2025 and 2026. Everyone still expects them to sell everything they make. The reason the stock is crashing is that Wall St believed that companies who bought $50B+ of NVDA hardware would have a moat. That was obviously always incorrect; TPUs and other hardware were eventually going to be good enough for real-world use cases. But Wall St is run by people who don't understand technology.


Loving the absolute 100% confidence there and the clear view into all the traders' minds that are trading it this morning.

If they'll sell everything they make and it's all about the moat of their clients, why is NVDA still down 15% premarket? You could quote correlation effects and momentum spillover, but those are still just the higher-order effects I mentioned, about people's expectations being compounded and thus reactions to adverse news being convex.


> why is NVDA still down 15% premarket?

Presumably because backorders will go down, production volume and revenue won't grow as fast, Nvidia will be forced to decrease their margins due to lower demand etc. etc.

Selling everything you make is an extremely low bar relative to Nvidia's current valuation, because that valuation assumes that Nvidia will be able to grow at a very fast pace AND maintain obscene margins for the next ~5 years AND face very limited competition.


That's literally what I wrote in my post, which the parent disagreed with. You could disagree with the part that it is because inference is now cheaper - but again I'd argue that's just a different way of saying there's no moat.


People owned NVDA because they believed that huge NVDA hardware purchases were the ONLY way to get an AI replacement for a mid-level software engineer or similar functionality.


That's basically what I wrote: "it simply means that if DeepSeek is legit, you need much less NV hardware to run the same amount of inference as before."

So I still don't understand what it is that you are so strongly disagreeing with, and I also don't understand how having owned NVidia stock somehow lends credence to your argument.

We are in agreement that this won't threaten NVidia's immediate bottom line, they'll still sell everything they build, because demand will likely rise to the supply cap even with lower compute requirements. There are probably a multitude of reasons why the very large number of people who own NVidia stock have decided to de-lever on the news, and a lot of it is simple uneducated herding.

But we are fundamentally dealing with a power law here - the forward value expectations for NVidia have exponential growth baked in to the hilt, combined with some good old fashioned tulip mania, and when that exponential growth becomes just slightly less exponential, that results in fairly significant price oscillations today - even though the basic value proposition is still there. This was the gist of my comment - you disagree with this?


Up until recently there was a belief by some investors that OpenAI was going to "cure cancer" or something as big as that. They assumed that the money flowing into OpenAI would 10x, under the assumption that no one else could catch up with them after that event and a lot of that would flow to NVDA.

Now it looks like that 10x flow of money into OpenAI will no longer exist. There will be competition and commoditization, which will cause the value of the tokens to drop by way more than 40x.


They could if their living expenses went to zero because everything was automated.


Automation doesn’t mean life has zero cost. Even when it is extremely efficient.

And efficient machines earn for their owners. Not for the unemployed.

Also, efficiency increases the demand and price of resources. Including land.

This is especially true if there is extreme inequality, meaning a minority are benefiting from the savings and increased leverage from efficiency improvements. That increases their lifestyles, but more significantly and endlessly, it increases their need to compete harder with each other to maintain the value they have.

That self-reinforcing loop is already in full force. It’s just going to cycle faster.

We don’t tend to question that ever present reality in the wild, when we consider other species and individuals of other species.

For good, or ill, the human condition is about to change radically.

