The `torch.compile` API itself will not be available from C++. That means you won't get the PyTorch 2.0 performance gains if you use it via the C++ API.
There's no plan to deprecate the existing C++ API, it should keep working as it is. However, a common theme of all the changes is implementing more of pytorch in python (explicitly the goal of primtorch), so if this plan works it could happen in the long run.
Do you actually think it would be a good thing if an IRB was required for this type of thing? Sure, it's "human experimentation" but the likelihood for any serious harm is basically zero.
It goes with the zeitgeist to argue for what makes the life of big tech companies hard, but they are big enough that they can afford things like that. It's smaller companies and academics that would end up not being able to innovate as much
Go down that road and you end up with an IRB evaluation required for an A/B test that changes the color of a button.
Agreed. This is using an AI to play a game; an IRB seems like overkill. I guess the only potential problem is if it went off the rails and started spouting toxic language, but presumably that was not a real possibility.
They are giving probabilities for discrete events, which already captures their level of uncertainty. Probabilities of probabilities (i.e., a probability distribution of a probability) are not very useful concepts.
It looks like they are simply providing the summary data for their multiple-choice survey questions. Still, CI or SEM would not apply (there is no SEM for "80% of forecasters said 'yes'"). These graphs are literally just telling you the percentage of respondents who picked a given answer to the corresponding question.
edit- the longer I browse their website for the exact methodology, the less impressed I am with this group. The "Introducing the Superforecasters" section is so cringe.
This would imply that the confidence interval around the coefficient in a logistic regression is not a very useful concept, which I don't think is true.
That is a little different. There you are estimating a continuous parameter (which happens to be interpretable as a probability), and it makes sense to have a probability distribution over that.
But if you are talking about whether a single discrete event will happen or not, a single number (the probability) already fully captures the uncertainty about it.
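To put that claim concretely (a toy calculation in plain Python, with an arbitrary number): for a single binary event, every summary of the outcome distribution is a function of p alone, so there is no extra uncertainty left for a second layer to describe.

```python
import math

# A single binary event with probability p: the number p IS the whole
# distribution, so mean, variance and entropy all follow from it alone.
p = 0.7
mean = p                       # expected value of the 0/1 outcome
variance = p * (1 - p)         # 0.21
entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # ~0.881 bits
print(mean, variance, entropy)
```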
There is a lot of missing information in the binary-event probabilities they present. Presumably they polled N forecasters and are presenting an x/N prediction. The fact that each forecaster is estimating in a continuous space and then binarizing their result means that a lot of information has been lost.
To look at an extreme example… were all the “yes” votes 95%+ certain and the “no” votes just under the line 49%? Or was it more like a bunch of no votes at 49% and a bunch of yes votes at 51%?
Binarizing forecasts necessarily discards information. Aggregating a bunch of binary predictions into a percentage does not recapture said information, unfortunately.
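A quick sketch of the extreme example above, with two hypothetical panels of 10 forecasters each (numbers made up for illustration):

```python
import numpy as np

# Two hypothetical panels. In both, 6 of 10 say "yes" once forecasts are
# binarized at 50%, but the underlying confidence is very different.
panel_a = np.array([0.95] * 6 + [0.49] * 4)  # confident yeses
panel_b = np.array([0.51] * 6 + [0.49] * 4)  # everyone on the fence

share_a = (panel_a > 0.5).mean()  # 0.6
share_b = (panel_b > 0.5).mean()  # 0.6 -- identical after binarizing

mean_a = panel_a.mean()  # ~0.766
mean_b = panel_b.mean()  # ~0.502 -- the information that was lost
print(share_a, share_b, mean_a, mean_b)
```

Both panels report "60% said yes", yet the average stated probability differs by more than 25 points; that difference is exactly what binarizing throws away.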
People tend to forget that while gas is being used for energy production, it is also being used as the basis of chemical processes, e.g. for ammonia production, which then starts a whole chain of products (most prominently: carbonic acid, fertilizer, cooling, removing CO2 from industrial exhaust, ...).
We can't just replace natural gas with nuclear power, and even if we did, we'd still be in deep trouble.
I know. In our state we're very pro-nuclear, and the local Green party basically ended itself over this issue. I used to think that's surely going to happen in Germany too - such a scientific, rational and pragmatic nation/state, right? But oh well, it seems economic pragmatism wins over ecology and health.
Yes, 100% agree with last paragraph. And the waiting for me to type Hello back pattern is so annoying.
This really should be basic work etiquette. Type your whole question in one go. If you want to you can include pleasantries at the beginning but never if there is going to be a more than 2 second interval between them and your actual question.
Should probably be part of first day orientation for new hires.
Yes, I now remember my colleague mentioning a car :)
But well, I wouldn't have enough faith in XXX cashiers.
I'd want a hard receipt (confirmation of submission on paper with some kind of checksum which I could confirm online if I wanted to).
So you have a centralized database that tracks all purchases country-wide? Maybe the Portuguese government could compete with Google/FB in providing personalized ads...
Portugal is moving to digital receipts for individuals in 2019. If you give the cashier your VAT ID, no paper receipt is produced, and you may download the receipt from your IRS dashboard.
Anonymous paper receipts will remain valid, indefinitely (i.e. they are not being phased out).
To combat tax evasion/parallel economy. The rationale goes: People want to get deductions and maybe even win a car, so they demand a receipt from business. That means said business can't keep that transaction off the books.
It is not quite that bad.
Tax withholding rates are not marginal; they apply to the whole salary.
But the actual tax rates at the end of the year are marginal.
So it is possible a pay raise means you get less money after withholding on a monthly basis, but on a yearly basis, after taxes are paid/refunded, a pay raise is always net positive.
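A toy illustration of that effect, with made-up brackets (these are not real Portuguese rates, just numbers chosen to show the mechanism):

```python
# Hypothetical brackets for illustration only.
def monthly_withholding(gross):
    # Withholding rate applies to the WHOLE salary, not marginally:
    # 20% up to 2000/month, 28% above that.
    rate = 0.20 if gross <= 2000 else 0.28
    return gross * (1 - rate)

def yearly_net(gross_annual):
    # The final tax IS marginal: 20% up to 24000/year, 30% on the excess.
    tax = 0.20 * min(gross_annual, 24000)
    if gross_annual > 24000:
        tax += 0.30 * (gross_annual - 24000)
    return gross_annual - tax

print(monthly_withholding(2000))   # 1600.0
print(monthly_withholding(2050))   # 1476.0 -> monthly take-home DROPS
print(yearly_net(12 * 2000))       # 19200.0
print(yearly_net(12 * 2050))       # 19620.0 -> yearly net still rises
```

The raise pushes the whole salary into the higher withholding bracket, so the monthly paycheck shrinks, but once the marginal yearly tax is settled the raise is net positive, with the difference refunded.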
But that's essentially a big no-interest loan people are giving to the government no? The money returned at the end of the year is money people could have been investing and earning a return on no? That seems closer to "insane" than "silly." Or am I missing something?
You are not wrong, except it's not usually a big loan. The withholding rates are chosen so that in most cases what is withheld is close to your final tax bill.
In all countries I know of, tax withholding is not a perfect match for the final tax bill, and that could amount to a no-interest loan to the government, like you say. Not sure whether in practice that loan is unusually large in Portugal; could be.
From the other side of the table, that is money the government is investing and earning a return on. If they did it the other way, they'd have to raise the tax rates to compensate, which wouldn't go down well for the politicians doing it, even if the net result is the same.
>"From the other side of the table, that is money the government is investing and earning a return on."
With the exception of a few notable oil-economy countries that have sovereign wealth funds, federal governments most certainly do not invest tax revenue in the market. Taxes are used to finance the running of the country. If a government is lucky, something is left over after paying the bills and it runs a surplus.
Please provide a citation that Portugal invests its citizens' tax revenues in the financial markets and makes a profit from those.
>"If they did it the other way they'd have to raise the tax rates to compensate, which wouldn't go down for the politicians doing it, even if the net result is the same."
Umm, no. If they did it the other way around, the government could additionally tax people's gains from investments, and people could save for things like their retirement at the same time.
>"that is money the government is investing and earning a return on."
"Investing money and earning a return on it" is universally understood to mean putting your money to work in the financial markets. Your comment is disingenuous at best.
Further, governments don't earn a "return" when they spend money on a stoplight or a bridge. Infrastructure requires upkeep, maintenance and eventual replacement.
It's a cost center, not a profit center. If it were the latter, governments would be building infrastructure like crazy and running budget surpluses. And that clearly isn't the case, is it?
Except the phrase was not "investing", it was "investing money and earning a return on it." Maybe you should reread the thread.
In the context of a government or any other large institution this is most certainly understood to mean the financial markets. That is not "opinion" but rather common understanding in English language business parlance.
You have resorted to cherry-picking words and trying to play semantic games. You have added exactly nothing to the conversation. In fact it's worse, as resorting to "that's just your opinion" type remarks just degrades the level of discourse. It's just slightly above name-calling.
As a former physicist that used Mathematica heavily not that long ago, I have to disagree. It would be great if there were open source alternatives that were "superior in every way" but that is just not the case.
For some use cases at least, Mathematica is clearly better than anything else I have tried.
I feel like something like ipython notebooks with the right combination of libraries might eventually get there, but that is unfortunately still years away.
As another physicist who used Mathematica: what you mean is that you were too lazy to think about what assumptions were made implicitly in the calculations you were performing, and liked Mathematica because it did them automagically for you.
This is shown nowhere better in the paper than when they try to calculate `Integrate[Exp[-p t] (Sinh[t])^3, {t, 0, Infinity}]`. In a good CAS, such as Maxima, you will get no result and a ton of errors, which is what you should get without specifying what sort of variable p is: is it a matrix, a polynomial, a group element of some sort? That it's implicitly assumed to be a real number that might turn out to be complex under some circumstances isn't a feature, it's a bug.
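For what it's worth, the convergence condition a careful CAS should demand here is easy to check numerically: `Sinh[t]^3` grows like `Exp[3 t]/8`, so the integral only exists for Re(p) > 3, where (expanding sinh^3 into exponentials myself, so treat the closed form as my own derivation) it equals 6/((p^2-1)(p^2-9)). A quick scipy sanity check at p = 4:

```python
import numpy as np
from scipy.integrate import quad

# Integrate[Exp[-p t] Sinh[t]^3, {t, 0, Infinity}] converges only for
# Re(p) > 3. A cutoff of t = 60 is enough: the tail decays like exp(-t)/8.
p = 4.0
val, err = quad(lambda t: np.exp(-p * t) * np.sinh(t) ** 3, 0, 60.0)

# Closed form for p > 3, via sinh(t)^3 = (sinh(3t) - 3 sinh(t)) / 4 and
# the elementary integral of exp(-p t) sinh(k t) = k / (p^2 - k^2):
closed = 6 / ((p**2 - 1) * (p**2 - 9))
print(val, closed)  # both ~0.0571
```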
I am genuinely surprised at how confidently you (quite wrongly) diagnosed my problem.
What I did mean, was that for my use cases, I found Mathematica was superior to the alternatives. You are welcome to think this was because I was lazy or misguided, but it was definitely not because I liked not having to specify domains for my variables (and I do remember having to write Assuming[p>0 && x \in Reals, ...] and the like often, it definitely does not assume everything is a positive real).
One thing I used Mathematica a lot for was numerical integration of ODEs (that had been derived using the CAS part of Mathematica and had some pretty nasty coefficients). NDSolve in my experience was just better than competition. You can definitely get nonsense out of it, but with a modicum of care it works incredibly well.
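For that workflow, the closest open-source analogue I'd reach for is `scipy.integrate.solve_ivp`. A minimal sketch, using a deliberately trivial hypothetical ODE (y' = -2y, y(0) = 1, exact solution exp(-2t)) rather than anything from my actual work:

```python
import math
from scipy.integrate import solve_ivp

# Toy problem: y' = -2 y on [0, 1] with y(0) = 1; exact answer exp(-2 t).
sol = solve_ivp(lambda t, y: -2 * y, (0.0, 1.0), [1.0],
                rtol=1e-8, atol=1e-10)
y_end = sol.y[0, -1]
print(y_end, math.exp(-2))  # ~0.13534 for both
```

With stiff systems you'd also pass `method="BDF"` or `method="Radau"`; NDSolve's advantage in my experience was doing that kind of method selection automatically.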
This is not entirely correct. The reduction in tax liabilities will be much smaller than you might expect and this is not a merger motivated purely by that. For an in depth discussion see [1].
Very interesting read, and thanks for the link. Had no idea that the King had fallen on such hard times (or maybe that Tim was comparatively doing so well): almost double the owned assets ($1.5 B vs $0.8 B) and nearly triple the total revenue ($3.0 B vs $1.1 B). The King may be valued higher for its IP, but it appears Tim is running a significantly better core business. Admittedly, Tim is basically a monopoly in its home market. At best, this looks like a merger of equals rather than an inversion; if not a big but unknown player buying a weak but well-known competitor in a new market. A very similar tack to what many foreign companies (UK, Japanese, German, Chinese) have taken in other sectors like tech (doubled from 1996-2005). [1]
Right, I don't think that's the only motivation, but it seems like it's at least part of it. If I read that article correctly, almost half of Burger King's locations are outside the US and Canada, and the revenue from all of those locations, as well as the Canadian ones (however many there are), should see a reduction in tax liability; further, that slice of the pie is growing, as Burger King's US presence is shrinking while its foreign presence grows.