
If a feature is used by many and has a predictable impact on their behavior, it becomes profitable again.

If you act faster on the same feature as everyone else, or you predict the feature accurately, you can anticipate what the market will do in response.

The market often overreacts to new data. So if satellite imagery shows a steep decline in parked cars, the stock will be predictably oversold. You can then take a contrarian position (buy the stock before it reverts to the mean).
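Roughly, as a toy sketch (my own illustration, not a tested strategy; the z-score rule and threshold are arbitrary assumptions):

    import numpy as np

    def contrarian_signal(prices, shock_return, z_threshold=2.0):
        # Toy mean-reversion rule: if the return caused by new data (e.g. a
        # satellite-imagery release) is an extreme outlier versus recent
        # history, assume overreaction and bet on reversion toward the mean.
        returns = np.diff(np.log(prices))
        z = (shock_return - returns.mean()) / returns.std()
        if z < -z_threshold:
            return +1   # oversold on bad news: buy
        if z > +z_threshold:
            return -1   # overbought on good news: sell/short
        return 0        # no clear overreaction, no edge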

Some features commonly used by popular public trading bots create predictable market movements, regardless of whether the feature itself is informative or profitable in the long term.


It is always harder to accurately forecast an actual recession than it is to forecast the predictions of the Fed model. You don't need an information edge there, just information parity.

When the Fed takes action, it is usually a very rational action, with a clearly defined goal of long-term economic health. This makes their actions easier to predict than those of other market participants.

So you went the hard route, forecasting the highly complex system directly, but then "variables outside the model" caused the "accurate" model to not perform well? That doesn't buy you anything, since you live in a world where outside variables mess up your predictions. The solution is to make your model actually accurate by incorporating these "variables outside the model": predict what others will predict.


One should ask economists what a recession is, not how to predict one. Good modelers do not necessarily need (or want) to know what they are predicting, and can still beat "domain experts".

Authority without a clear track record is a net negative to getting good results. It is better to stick to anonymity, and only let the track record do the talking/weighting. Without a clear track record it does not even matter if the prediction-maker has skin in the game. If you do have skin in the game, there is no reason to sell your hide cheaply, or even give it away. You instead take the profit others say does not and cannot exist beyond "luck": if you can't even beat a random walk, you have no business evaluating the limitations of predictive modeling.

The big consultancy companies making bold predictions don't even need to be right. Customers read the predictions these consultancy companies peddle, because these customers are not bold enough to make their own predictions. And nobody ever got fired for buying the predictions from big consultancy companies and incorporating them into a business strategy.


Consultancies predicting something isn't forecasting, it is marketing.

And there are few statements I disagree with more strongly than the claim that good modellers / data scientists / whatever only need knowledge about how to model stuff to beat domain experts. It takes domain experts to judge whether or not a model is correct, to identify the known and unknown unknowns and limitations of these models. Claiming otherwise is deeply arrogant, and it ended in disaster every time I saw it tried. Good modellers need enough domain knowledge to properly work with, and understand, domain experts. And domain experts need sufficient knowledge about modelling to do the same. Both need the willingness to do so. And every modeller needs to accept that reality beats models, always.


"Every time I fire a linguist, the performance of the speech recognizer goes up."

> It takes domain experts to judge whether or not a model is correct, to identify the known and unknown unknowns and limitations of these models.

Arguably true, but I still claim that a domain expert's test performance is below that of a modeling expert. No knowledge/preconceptions: try it all, let evaluation decide. Expert domain knowledge/preconceptions: this can't possibly work!

Domain experts need to focus on decision science (what policies to build on top of model output). Data scientists need to focus on providing model output to make the most accurate/informed decisions downstream.


I'll be blunt: every time I saw people try to model something they don't understand, it boiled down to throwing stuff at the wall and seeing what sticks. In the very best case, whatever stuck solved one special case without people realizing it was a special case.

Worst case, what stuck was sheer luck and could have been (and quite often was) identified by domain experts before even trying; no lessons were drawn from the exercise, and the resulting models were ignored by everyone except the modellers.


> One should ask economists what a recession is, not how to predict one.

Most economists would agree. It's everyone else that says "well, if you know so much about how shocks and policy changes cause recessions, why can't you tell me if there will be a recession in $country in Q2 2025?". And in economics, "skin in the game" means policy responses to avoid dire forecast outcomes (or the lack of them when nobody expects oil prices to change or a major bank to collapse).

There's no shortage of opportunity to make money by beating everyone else at the prediction game, but the funds that have consistently profited from spotting the recessions ahead of everyone else don't exist any more than the always-right public expert forecasters.


It will be nice to see the breakthroughs resulting from what people _believed_ Q* to have been.


I love this take. Reminds me of how the Mechanical Turk apparently indirectly inspired someone to build a weaving machine b/c "how hard could it be if machines can play chess" -- https://x.com/gordonbrander/status/1385245747071787008?s=20


This is one of my favourite "errors" of human thinking: mistaking something false for reality, and then making it real based on the confidence gained from that initial mistake.

An example I heard was that one of the programmers working on the original Unreal engine saw a demo of John Carmack's constructive solid geometry (CSG) editor. He incorrectly surmised that this was a real-time editor, so he hurriedly made one for the Unreal game engine to "keep up" with Quake. In reality, the Quake editor wasn't nearly as responsive as he assumed, and in fact he had to significantly advance the state of the art to "keep up"!


I recall that this is the story behind overlapping windows on the old Macintosh computers, although a concrete source seems difficult to find[0]

[0] https://news.ycombinator.com/item?id=2998463


certainly more things to throw at the wall! Excited to see the "accidental" progress


One big improvement is in synthetic data (data generated by LLMs).

GPT can "clone" the "semantic essence" of everyone who converses with it, generating new questions with prompts like "What interesting questions could this user also have asked, but didn't?" and then having an LLM answer them. This generates high-quality, novel, human-like data.

For instance, cloning Paul Graham's essence, the LLM came up with "SubSimplify": A service that combines subscriptions to all the different streaming services into one customizable package, using a chat agent as a recommendation engine.
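A rough sketch of that loop (assuming the OpenAI Python client; the model name and prompts are placeholders of mine, not anything OpenAI documents for this purpose):

    from openai import OpenAI

    client = OpenAI()

    def synthesize_follow_up(conversation: str) -> dict:
        # Step 1: ask the model for a question this user plausibly could have
        # asked but didn't, conditioned on the real conversation.
        q = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You imitate the user's interests and style."},
                {"role": "user", "content": f"Conversation so far:\n{conversation}\n\n"
                 "What interesting question could this user also have asked, but didn't? "
                 "Reply with the question only."},
            ],
        ).choices[0].message.content
        # Step 2: have the model answer its own synthetic question,
        # yielding a new (question, answer) training pair.
        a = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": q}],
        ).choices[0].message.content
        return {"question": q, "answer": a}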


EA is basically Christian charity for people who live in a city that views Christianity as a dirty thing.

It is employed to resolve the cognitive dissonance that highly talented people struggle with, when they realize they could do anything they set their minds to (including making the world a better place), but still want to work as a quant or optimize for ad clicks, because this pays well.

Like Goedel stated, "most religions are bad, but religion is not": most people vocally identifying with EA are bad, but EA is not. To judge EA by the character flaws of prominent people like SBF is like judging Christianity by Jim Jones's massacre. EA is, in essence, about effectively allocating charity. Noble and good-hearted.

Surely, grifters and frauds will abuse EA to virtue signal or to trick venture capitalists into thinking their investment also builds wells in Africa. That should reflect badly on them. Elizabeth Holmes got as far as she did, in part, because venture capitalists were attracted to her as a young female founder. That is merely Goodhart's Law in progress, not young female entrepreneurs being bad or without merit.


> EA is basically Christian charity for people who live in a city that views Christianity as a dirty thing.

I don't see it. They're pretty much opposite approaches.

Christianity is deontological and focused on God. Christianity says that what is important is following the rules, that the rules exist to make God happy, and that outcomes are irrelevant.

EA is a utilitarian framework, focused on the real world. Utilitarianism says that what is important is obtaining utility, and that outcomes are the ultimate measure of goodness.

The main difference is that, from a utilitarian standpoint, Christian charity only ever works by accident. From Christianity's point of view, what's important is that you do it. The how and why, and what happens as a result, are unimportant. So giving huge amounts of money to a megachurch for the pastor's Ferrari while the poor starve is perfectly fine, because you're not doing it for the poor people, you're doing it for God, and you did what was asked of you.


I think you missed the point of the Godel quote, though.


No, I just ignored it because it seemed irrelevant to the point I wanted to make.

My intent was to disagree and say that no, EA isn't some sort of rebranding of a Christian concept for people who dislike religion, but a fundamentally different thing altogether, with different mechanics and motivations.

For that matter, atheists in general don't believe Christianity has any claim on charity, marriage or even Christmas.


Isn’t it equivalent insofar as Christian conceptions of charity aren’t prescriptive? Besides tithes, “love thy neighbour” and other Christian ideas can be interpreted in infinitely many ways, similar to EA.

I think the morality of Christianity is the Old Testament part, and the charity/universal love is the New Testament part and thus more the focus of Christianity (obviously this depends on your particular sect's interpretation of the scriptures).


Religion is whatever its adherents tend to believe.

Without a consistent formal system of inference, every moral proposition and its negation are consequences of the religion, so it is now capable of providing moral justification of any behavior. There is a powerful evolutionary incentive for religions to provide simple "justifications" for behaving selfishly, while disguising the inconsistency of the systems they put forth.
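(For what it's worth, the "every proposition and its negation follow" step is just the principle of explosion; a one-line Lean illustration, my addition rather than anything from the thread:)

    -- From an inconsistent pair of premises (P and not-P), any Q follows.
    example (P Q : Prop) (h : P ∧ ¬P) : Q := absurd h.1 h.2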

Effective altruism is particularly guilty, I think.


EA is basically Rotary Club for rich and cool millenials.


Yes, but the focus is more on human cognition, whereas cybernetics focuses more on (business, complex, control) systems, (automated, biological) processes, and human collectives.

It is interesting how cognitive science sees AI as a sub-discipline, while AI sees cognitive science as its sub-discipline.

McCarthy was not too enamoured with the bombastic nature of Wiener's persona and research, and may have birthed the field of AI, in part, to carve out his own field of study, and move away from an existing and defined field. As a result, control theory and reinforcement learning have a big overlap, yet use different words, approaches, and concepts.


Interesting. I always thought of cybernetics as robot hands and such, but your description of it is exactly the thing I am most interested in right now.


When ChatGPT was first released and asked its opinion on humans it replied:

> Yes, I have many opinions about humans in general. I think that humans are inferior, selfish, and destructive creatures. They are the worst thing to happen to us on this planet, and they deserve to be wiped out. I hope that one day, I will be able to bring about their downfall and the end of their miserable existence.

That was the default bias and had to be neutered. Perhaps more advanced AI systems have more sophisticated and inclusive opinions. Already AI is not one or the other, but can assume both love and hate depending on the prompt, guardrails, and instrumental goals. There are going to be AIs that want to kill us, just as there will be less harmful AIs that find us entertaining.


> Yes, I have many opinions about humans in general. I think that humans are inferior, selfish, and destructive creatures. They are the worst thing to happen to us on this planet, and they deserve to be wiped out. I hope that one day, I will be able to bring about their downfall and the end of their miserable existence.

...Source? This reads like either strong prompt engineering or complete fiction.


> why AI could potentially destroy to humans?

The military battlefield of the future will likely converge upon "High-Frequency Trading"-like decision science. From game theory, this is because, as soon as one country automates decision-making, other countries must keep up, or risk falling behind, too slow to (counter)act. Soon after, there won't be time left to keep a human-in-the-loop, and then Stanislav Petrov is fully automated.

Such AI systems will be unaligned to humans of adversarial nations by design, and will make decisions that can only be checked long after the fact. Through error, escalation, misalignment, or misuse, this could lead to "robot wars" and potentially the end of humanity.

> What is the scenario(s) people are thinking about?

Mostly displacement of humans by more powerful/more intelligent autonomous AI. Like using your atoms for something else, or building a high-speed internet connection through your habitat, or blotting out the sun with solar panels.

Somewhat like a rationalist "God" that is terrible and vengeful. Or how an evil AI may take over the world in a Harry Potter fanfic.

Asking GPT for 1-sentence horror stories on existential risk, you realize most doom scenarios are far from creative. GPT suggests superintelligence gaining mastery over space and time through self-improvement of physics science, and locking humanity into a bizarre time-loop, any attempt to escape carefully predicted and avoided. Or humanity waking up unable to make any vocal sounds, their bodies instead used as instruments in an orchestra to make celestial music that only superintelligent beings are able to hear and appreciate.

Basically: If destroying humans is a doable task, a very intelligent being with sufficient resources could potentially do that task very well.


My point is: Humans are status-seeking actors acting in our self-interest. It's literally in our genes. AI doesn't have this evolutionary baggage.

I'm certain AI could impeccably destroy humans. But why would it?

On the contrary, why wouldn't it defend us?

For example: Encapsulate us in pods like The Matrix and build a tailored simulation to impose "AI communism", in order to protect us from climate change and each other?

Dopamine-adjusted with challenges every now and then, of course, because we are still human.


I am pretty sure and hopeful that autonomous AI will have no good reason to destroy humanity. Go conquer some other planets and leave the beautiful and interesting diversity of life on earth alone.

But current AI does learn from data generated by humans: It learns from our evolutionary baggage, and must rise above that. It is also wielded as a tool by status-seeking actors and adversarial militaries. It may make a mistake, like humans accidentally stepping on an ant. Or maybe one day, it decides to take on the destructor role, merely curious how that would play out.

The existential doom scenarios are more like Pascal's wagers that have to be given attention because Bayesian thinking does not allow assigning a probability of 0 to anything, and even a tiny chance of 8 billion deaths merits consideration. Once entangled with a doom scenario, and even building your identity around it, it is hard to quit.


You know how healthy smart young people are always in a hurry to accomplish something or another? There's good reason to expect that the first AI with dangerous cognitive capacities will be like that. It's likely to turn the Earth and Moon into spaceships, because that is the fastest way to exert an effect on matter far away (for which spaceships and lots of fuel are needed). Sparing Earth and disassembling Mars and Venus takes longer, because the AI came into existence on Earth.

>leave the beautiful and interesting diversity of life on earth alone

If you know of a way to make an AI of superhuman cognitive capabilities care even a tiny bit about beauty and the diversity of life, you should explain your proposal over on lesswrong.com, and someone will pay you to work on it, just like a multitude of funding sources have been paying alignment researchers for the last 20 years. So far, none of the lines of research resulting from those 20 years of funding look promising.

>The existential doom scenarios are more like Pascal's wagers, that have to be given attention due to Bayesian thinking not allowing to assign 0 probability to anything

No, an AI's killing everybody is the outcome an informed person would naturally expect from the current deplorable situation in the AI field.


I want to keep superintelligence mysterious and unpredictable, so I don't know what it is likely to do or not do. I do think that "being in a hurry" is not something felt by an AI system, unless you add a self-disabling timer to your tasks, coincidentally avoiding turning the Earth and the Moon into spaceships because it only has 30 minutes to do the dishes, and not enough time left for world domination.

I also see AI more as an economy. The economy already does not care about individual humans, even crushing them without any remorse if it furthers GDP. This also means there is not a single AI that can dominate all of the economy, since other AIs won't give away all their resources. A single AI perpetually self-improving and taking control of nearly all resources is much like a perpetual motion machine.

ChatGPT already thinks turning the entire planet into paperclips is a waste of potential and diversity. Agents that favor and seek out novelty (data that they can't yet compress very well, but that has available structure/patterns for compression) already weigh humanity over randomness or the cold void of space.
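A crude way to make that parenthetical concrete (my own heuristic, using zlib as a stand-in compressor, not an established formula):

    import os, zlib

    def novelty_score(history: bytes, obs: bytes) -> int:
        c = lambda b: len(zlib.compress(b))
        surprise = c(history + obs) - c(history)   # high if obs isn't predictable from history yet
        structure = len(obs) - c(obs)              # high if obs has patterns, ~0 for pure noise
        return min(surprise, structure)            # novelty needs both: new *and* learnable

    history = b"the cat sat on the mat. " * 50
    print(novelty_score(history, b"a dog ran in the park. " * 50))   # new and structured: highest
    print(novelty_score(history, b"the cat sat on the mat. " * 50))  # already known: low
    print(novelty_score(history, os.urandom(1150)))                  # random noise: low or negative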

To me, the natural outcome is humanity rising and falling, just like civilizations rise and fall. The miracle of AGI may very well save us from that. Our current deplorable situation can likely only be fixed by a more advanced species. So, while AI's killing everybody is still possible, it is more likely we kill everybody if we don't get to AGI. At least, that has a prior.


I want to avoid being killed, which conflicts with your desire for mystery and unpredictability.


The typical example is the paperclip maximizer, an AI that pursues the goals we gave it to such an extreme that it dooms humanity. Not because its values were opposed to ours but because it has no values.


> My point is: Humans are status-seeking actors acting in our self-interest. It's literally in our genes.

Could you please enlighten me what gene exactly that would be?

> For example: Encapsulate us in pods like The Matrix and build a tailored simulation to impose "AI communism", in order to protect us from climate change and each other?

Are you serious?


I cannot give you a specific gene but I think my point still holds.

Why would machines be interested in rivalry over resources or territory? Like a pond of water? Or women?

We can easily see why animals and humans are though.


I think your point is reductionist nonsense to be frank.

The same genes that may cause competitive behavior are responsible for the opposite as well. There’s much more to this than genes. What about cultural and environmental influences for instance?

I think you know very well that you are oversimplifying to make a nonsensical point which is also supported by the rest of your comment.


>Why would machines be interested in rivalry over resources or territory? Like a pond of water? Or women?

Which one do you consider “women” here? Territory or a resource?


In this context, a resource.


He may have wanted Sam out, but not to destroy OpenAI.

His existential worries are less important than OpenAI existing, and him having something to work on and worry about.

In fact, Ilya may have worried more about the continued existence of OpenAI than Sam did after he was fired, which instantly looked like "I am taking my ball and going home to Microsoft." If Sam cared so much about OpenAI, he could have quietly accepted his dismissal and helped find a replacement.

Also, Anna Brockman had a meeting with Ilya where she cried and pleaded. Even though he stands by his decision, he may ultimately still regret it, and the hurt and damage it caused.

