In the end this depends on your definition of "fair". What percentage of your generated production do you think is fair for the company to take? 95%? 50%? 10%?
That depends on the value of your generated production, among many other things, and ultimately isn't the right question to ask.
Can an employee obtain better employment terms elsewhere (which is a complex concept to define in itself)? If so, they are underpaid; if not, they aren't.
You were talking about exploitation. Using the fact that the employee cannot obtain better employment elsewhere to extract as much of the production or value from the employee as possible smells a lot like exploitation to me.
If an employer offers an employee $100 per hour, and the next best offer that employee can obtain elsewhere is $90 for an otherwise equivalent job, should the employee take that job for granted? Is the employer exploiting them with their pay rate?
That would be the case in an idealized world. As with everything, this depends on the circumstances and on the economic activity of the place where the person lives. I guess that through North American eyes it is the employee's fault if they cannot find some other job, since the only constraint is personal drive. But there are other economic and educational constraints that deny people the mobility needed for your example to be efficient and accurate.
Put down the Ayn Rand BS books. What if the employers make $10k per unit of work while they pay you only $10 per unit of work, and they have all talked to each other to never pay more than $10? What do you do then? Complain? Go to court? Who do you think has more influence over the politicians and courts: you making $10, or your bosses who are all millionaires because of your severely underpaid work?
Notation and symbology come out of a minmax optimisation: minimizing complexity while maximizing reach. As with every local critical point, it is probably not the only state we could have ended up at.
For example, for your point 1: we could probably start there, but once you get familiar with the notation you don't want to keep writing a huge list of parameters, so you would probably come up with a higher-level, more abstract data structure to write as an input. And then the next generation would complain that the data structure is too abstract and takes too much effort to communicate to someone new to the field, because they did not live through the problem that made you come up with that solution first hand.
And for your point 2: where do you draw the line with your hyperlinks? If you mention the real plane, do you reference the construction of the real numbers? And dimension? If you reason a proof by contradiction, do you reference the axioms of logic? If you say "let {x_n} be a converging sequence", do you reference convergence, natural numbers and sets? Or just convergence? It's not that simple, so we came up with a minmax solution, which is what everybody does now.
Having said this, there are a lot of articles and books that are not easy to understand. But that is probably more an issue of them being written by someone who is bad at communicating than of the notation.
> As Venkatesh concludes in his lecture about the future of mathematics in a world of increasingly capable AI, “We have to ask why are we proving things at all?” Thurston puts it like this: there will be a “continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.”
This type of reasoning becomes void if instead of "AI" we used something like "AGA", or "Artificial General Automation", which is a closer description of what we actually have (natural language as a programming language).
Increasingly capable AGA will do things that mathematicians do not like doing. Who wants to compute logarithmic tables by hand? Calculators solved that. Who wants to compute chaotic dynamical systems by hand? Computer simulations solved that. Who wants to improve a real-analysis bound over an integral by 2% to get closer to the optimal bound? AGA is very capable of doing that. We only want to do it ourselves if it actually helps us understand why, and surfaces some structure. If not, who cares if it's you who does it or a machine that knows all of the olympiad-type tricks.
> Right now, even people who reject meritocracy understand its logic. You develop rare skills, you work hard, you create value, and you capture some of that value.
The premise is that AI no longer allows you to do this, which is completely false. It may not allow you to do it in the same way, so it's true that some jobs may disappear, but others will be created.
The article is too alarmist, written by someone who has drunk all of the corporate hype. AI is not AGI. AI is an automation tool, like any other we have invented before. The cool thing is that now we can use natural language as a programming language, which was not possible before. If you treat AI as something that can think, you will fail again and again. If you treat it as an automation tool that cannot think, you will get all of the benefits.
Here I am talking about work. Of course AI has introduced a new scale of AI slop, and that has other psychological impacts on society.
Yes, but I don't think it's about the present, necessarily.
AI is still shit. There are prompt networks, what some people call agents, but presently models are still primarily trained as singular models, not made to operate as agents in different contexts with RL on each agent being used to improve the whole indirectly.
Tokens will eventually become cheap enough that it will be possible to actually train proper agents. So we will probably end up with very powerful systems in time, systems that might actually be at least some kind of AGI-lite. I don't think that is far off. At most a decade.
I get your point, but I think the real issue is -(1/(-1/x)). It is the one that is being overlooked the most in our society, as if it were something normal, but it contains some of the deepest truths imho.
Not sure what you are talking about. What you wrote reduces to just x. What I meant was: if you substitute, say, -x for x in -1/x, you get 1/x, which is the third inverse. The same is true for the other two pairs. So, if we call them functions f, g and h, then f = g∘h = h∘g; g = f∘h = h∘f; h = f∘g = g∘f.
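For anyone who wants to check that numerically, here is a minimal sketch (the names f, g, h just mirror the labels above):

```python
# Quick numerical check that each inverse is the composition of the other two:
# f(x) = -x (additive inverse), g(x) = 1/x (multiplicative inverse), h(x) = -1/x.
def f(x): return -x
def g(x): return 1 / x
def h(x): return -1 / x

for x in (0.5, 2.0, -3.0, 7.25):
    assert abs(f(x) - g(h(x))) < 1e-12 and abs(f(x) - h(g(x))) < 1e-12
    assert abs(g(x) - f(h(x))) < 1e-12 and abs(g(x) - h(f(x))) < 1e-12
    assert abs(h(x) - f(g(x))) < 1e-12 and abs(h(x) - g(f(x))) < 1e-12
print("f = g∘h = h∘g, g = f∘h = h∘f, h = f∘g = g∘f on the sample points")
```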
Ahh, nothing better than seeing someone in the wild thinking that their life decisions are 100% independent of their environment. Enjoy your false sense of freedom while you can!
The reward functions in the problems that they gave AlphaEvolve are easy. The reward functions of at least 50% of maths are not. You can say that validating whether a proof is correct is a straightforward reward, but the set of interesting theorems is a tiny fraction of the space of all theorems. And what would "interesting" even mean?
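To be concrete about the easy half, a hedged sketch (the checker below is a placeholder, not a real verifier; in practice it would be a formal proof kernel behind an API): correctness is a trivial 0/1 signal, and nothing in it encodes which statements are worth proving.

```python
# Sketch of a binary proof-checking reward. The checker is a stand-in
# placeholder, NOT a real verifier; a real system would call a formal
# proof kernel here. Note the reward says nothing about "interesting".
def verifier_accepts(statement: str, candidate_proof: str) -> bool:
    # Placeholder logic, for illustration only.
    return candidate_proof.strip().endswith("QED")

def proof_reward(statement: str, candidate_proof: str) -> float:
    # 1 if the proof checks, 0 otherwise.
    return 1.0 if verifier_accepts(statement, candidate_proof) else 0.0

print(proof_reward("2 + 2 = 4", "by arithmetic. QED"))  # 1.0
```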
> AlphaEvolve did not perform equally well across different areas of mathematics. When testing the tool on analytic number theory problems, such as that of designing sieve weights for elementary approximations to the prime number theorem, it struggled to take advantage of the number theoretic structure in the problem, even when given suitable expert hints (although such hints have proven useful for other problems). This could potentially be a prompting issue on our end,
Very generous of Tao to say it can be a prompting issue. It always surprises me how easy it is for people to say that the problem is not the LLM but themselves. With other types of ML/AI algorithms we don't see this. For example, after a failed attempt or a lower score in a comparison table, no one writes "the following benchmark results may be wrong, and our proposed algorithm may not be the best. We may have messed up the hyperparameter tuning, initialization, train-test split..."
Even without such acknowledgments it is hard to get past reviewers ("Have you tried more extensive hyperparameter tuning, other initializations and train-test splits?"). These are essentially lab notes from an exploratory study, so (with absolutely no disrespect to the author) the setting is different.
Of course people don't say it, but there are many cases where reported algorithmic improvements are attributable to poor baseline tuning or shoddy statistical treatment. Tao is exhibiting a lot more epistemic humility than most researchers who probably have stronger incentives to market their work and publish.
The closest thing that you may get is a manifold + noise. Maybe some people think about it in that way. Think for example of the graph of y = sin(x) + noise: you can say that this is a 1-dimensional data manifold. And you can say that locally a data manifold is something that looks like a graph or embedding (with more dimensions) plus noise.
But I am skeptical whether this definition can be useful in the real world of algorithms. For example, you can define things like topological data analysis, but the applications are limited, mainly due to the curse of dimensionality.
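A minimal sketch of what that picture looks like in code (numpy only; the noise level and neighbourhood radius are arbitrary choices for illustration):

```python
import numpy as np

# Points (x, sin(x) + eps): a 1-dimensional curve embedded in R^2, plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4 * np.pi, size=1000)       # intrinsic 1D coordinate
noise = rng.normal(scale=0.05, size=x.shape)      # small ambient noise
data = np.column_stack([x, np.sin(x) + noise])    # ambient dimension 2

# Locally the cloud looks like a graph: PCA on a small neighbourhood
# should give one dominant singular value (the tangent direction).
center = data[0]
nbrs = data[np.linalg.norm(data - center, axis=1) < 0.3]
_, s, _ = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
print("local singular values:", s)   # expect s[0] >> s[1]
```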
Sometimes statistical rates for empirical risk minimization can be related to the intrinsic dimension of the data manifold (and noise level if present). In such cases, you are running the same algorithm but getting a performance guarantee that depends on the structure of the data, stronger when it is low dimensional.
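As a hedged, textbook-style instance (the exact constants and conditions depend on the estimator and smoothness assumptions, which are not spelled out above): for regression of a $\beta$-smooth function whose covariates lie on a $d$-dimensional manifold embedded in $\mathbb{R}^D$, excess-risk bounds typically scale like

$$\mathbb{E}\,\lVert \hat f_n - f\rVert^2 \;\lesssim\; n^{-\frac{2\beta}{2\beta+d}},$$

with $d$ the intrinsic dimension rather than the ambient $D$, which is exactly the "same algorithm, stronger guarantee on low-dimensional data" phenomenon.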
True, but 4 years old? The reaction that 4-year-olds have to videos on screens is like a drug. They are fully hypnotized while watching, to the point that it's difficult to get them to react to the outside world, and turning off the screen triggers some hard withdrawal reactions. At that age they have zero tools to control and understand their emotions.