Hacker News | eska's comments

I can corroborate this. I coached mechanical engineers who had to learn some programming to conduct research by analyzing factory machine data I provided (they were the domain experts). The ones who learned Python and SQL using AI had hardly learned anything after half a year; the ones I pointed to the API docs and a beginner tutorial weren't just much further along, they were also on a faster trajectory for the future. I think AI is a beginner trap because it lets them throw shit at the wall and see what sticks. In the long term it is much more useful in the hands of an expert.

This doesn’t make sense. BSP is a form of culling.

I don’t think that the author considers collaboration bad in general, but instead how it’s done in most large corporations:

> But there’s a huge difference between communication and collaboration as infrastructure to support individual, high-agency ownership, and communication and collaboration as the primary activity of an organisation. Which, if we’re honest, is what most collaboration-first cultures have actually built.


To the people saying my suggestion in the previous thread was incorrect: booyah! o/

Is this an intentional pun on “Megadeath”? If so, awesome o/


Now, don't get me wrong. Trigonometry is convenient and necessary for data input and for feeding the larger algorithm.


The compiler is often not allowed to rearrange such floating-point operations because that would change intermediate results. So one would have to enable something like fast-math for this code, but that’s probably not desired for all code, so one has to split out a small library, and so on. Debug builds may use different compilation flags, and suddenly performance becomes terrible while debugging. Performance can also tank because a new compiler version optimizes differently, etc. So in general I don’t think this advice is true.
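To illustrate (a sketch of my own, not code from the article): under strict IEEE semantics the compiler must evaluate a float reduction as a serial chain, because reassociating it would change intermediate rounding. Reassociating by hand exposes the parallelism without any flags, at the price of (slightly) different rounding:

```c
#include <stddef.h>

/* Strict semantics: this compiles to a serial chain
   (((s + x[0]) + x[1]) + ...). The compiler may only break it into
   partial sums under -ffast-math / -fassociative-math. */
float sum_strict(const float *x, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Hand-reassociated: four independent accumulators give the
   hardware ILP without any fast-math flags, but the rounding of
   intermediate results can differ from sum_strict. */
float sum_partial(const float *x, size_t n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];     s1 += x[i + 1];
        s2 += x[i + 2]; s3 += x[i + 3];
    }
    float s = (s0 + s1) + (s2 + s3);
    for (; i < n; i++)
        s += x[i];
    return s;
}
```

For small-integer inputs both versions agree exactly; for general data they can differ in the last bits, which is exactly why the compiler refuses to do this on its own.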


The problem with Horner’s scheme is that it creates a long chain of data dependencies, instead of making full use of all execution units. Usually you’d want more of a binary tree than a chain.
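As a sketch of the difference (coefficient array and function names are mine, degree 7 chosen arbitrarily): Horner is one serial chain of multiply-adds, while an even/odd split runs two independent half-length chains in x² and joins them at the end, closer to the binary-tree shape:

```c
/* Horner: each step depends on the previous one, so the latency is
   ~7 multiply-adds end to end for a degree-7 polynomial. */
float horner7(float x, const float c[8]) {
    float r = c[7];
    for (int i = 6; i >= 0; i--)
        r = r * x + c[i];
    return r;
}

/* Even/odd split: two independent Horner chains of half the length,
   evaluated in x^2 and merged at the end. Roughly halves the
   dependence depth at the cost of one extra multiply and register. */
float horner7_split(float x, const float c[8]) {
    float x2 = x * x;
    float even = ((c[6] * x2 + c[4]) * x2 + c[2]) * x2 + c[0];
    float odd  = ((c[7] * x2 + c[5]) * x2 + c[3]) * x2 + c[1];
    return odd * x + even;
}
```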


Not in this case because the dependencies are the same:

Naive: https://godbolt.org/z/Gzf1KM9Tc

Horner's: https://godbolt.org/z/jhvGqcxj1


Still, it's no worse than the naïve formula, which has exactly the same data dependencies and then some.

_Can_ you even make a reasonable high-ILP scheme for a polynomial, unless it's of extremely high degree?


For throughput-dominated contexts, evaluation via Horner's rule does very well because it minimizes register pressure and the number of operations required. But the latency can be relatively high, as you note.

There are a few good general options to extract more ILP for latency-dominated contexts, though all of them trade off additional register pressure and usually some additional operation count; Estrin's scheme is the most commonly used. Factoring medium-order polynomials into quadratics is sometimes a good option (not all such factorizations are well behaved wrt numerical stability, but they can also let you synthesize selected extra-precise coefficients naturally without doing head-tail arithmetic). Quadratic factorizations are a favorite of mine because (when they work) they yield good performance in _both_ latency- and throughput-dominated contexts, which makes it easier to deliver identical results for scalar and vectorized functions.
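For reference, a minimal sketch of Estrin's scheme for a degree-7 polynomial (my own illustration, not the poster's code): coefficients are paired, then the pairs are combined with x² and x⁴, giving a dependence depth of roughly three multiply-adds instead of seven, at the cost of keeping x² and x⁴ live in registers:

```c
/* Estrin's scheme, degree 7: all four pair evaluations are
   independent, as are the two x^2 combines, so a machine with
   enough FMA units finishes in ~3 dependent steps. */
float estrin7(float x, const float c[8]) {
    float x2 = x * x;
    float x4 = x2 * x2;
    float p01 = c[1] * x + c[0];     /* these four are independent */
    float p23 = c[3] * x + c[2];
    float p45 = c[5] * x + c[4];
    float p67 = c[7] * x + c[6];
    float p0123 = p23 * x2 + p01;    /* these two are independent */
    float p4567 = p67 * x2 + p45;
    return p4567 * x4 + p0123;
}
```

It computes the same polynomial as Horner's rule, just with a different association of the terms.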

There's no general-form "best" option for optimizing latency; when I wrote math library functions day to day, we just built a table of the optimal evaluation sequence for each polynomial order up to 8 or so and each microarchitecture, and grabbed the one we needed unless special constraints required a different choice.


Ah, I didn't know of either scheme. Still, am I right that this mainly makes sense for degrees above five or six or so?


You can often eke something out for order-four, depending on uArch details. But basically yeah.


> Most of the studies they looked at did not use the most robust methods and included small numbers of people, making it difficult to work out the true effects.

Not a good sign for a meta-study. When you average garbage, you still get garbage.

As for my personal experience, I looked at scientific papers 5 years ago (no, intermittent fasting isn’t some new social media fad). The consensus back then was that it only slightly increases the speed at which one loses weight, but it helps significantly with adherence to a diet. This was a game changer for me too. With just attempts to control my caloric deficit I failed because I ended up snacking. With intermittent fasting (the strict variant of only eating once after work in my case) I simply had no appetite from the morning until my meal. I also didn’t have cravings before sleep.


> The consensus back then was that it only slightly increases the speed at which one loses weight, but it helps significantly with adherence to a diet.

This is my experience as well. IF didn't teach me how or what to eat; it taught me not to snack after the last meal of the day, and after losing around 10% of my weight (still obesity grade II), the weight-loss effect was lost. It didn't work for long-term weight loss, it just established a new equilibrium. Also, there's a bounce back if you stop it.

Currently, I'm on Mounjaro (diabetes type II likely caused by obesity and genetics), and IF meshes wonderfully with it, since Mounjaro forces you to eat less or feel miserable. So far I've reached obesity grade I (aka plain obesity), and the effect has been very consistent over time, no new equilibrium reached yet.

So, for me, IF is not for weight loss, but it helps if you combine it with a proven method for weight loss.


Surely there is so much money to be made selling random people's faces.

I really hope I misread sarcasm in that statement, because of course there is a lot of money in that.


How much? 2 bucks per user?

Their paid users shell out 3 bucks a month...

And then you think of the real world

> secretly selling your IDs data behind your back, they have to account for that revenue in their books, put it in their privacy policies or do it illegally, it's weak to whistleblowers, third parties get breached all the time (as well as yourself), and you have to trust the people you're selling this to. It's not credible.


How many users are paying? a few million? How many use the service for "free"? A few hundred million? Are you stupid?


>How many users are paying?

7.3 million paying every month

>How many use the service for "free"?

143 million times maybe 2 bucks once. Most likely five cents once.

>Are you stupid?

Flagged


While what GP said wasn't worded the way the site rules say it should be, your original point is very tedious and can only be read charitably if we assume you never read any news or barely retain anything. We are currently on a news website. If you want non-commenting readers to see your point and think charitably of you, it would help to explain why you're ignoring reality for whatever it is you're positing (consumer protections because of subscriptions? Really? For this corporation?).

What you're saying in this post essentially just underlines GP's point, which I imagine isn't what you're trying to communicate. You have to help a reader understand your point of view, especially if it's far removed from objective reality (which is that a corporate entity will betray you for money, regardless of whether that makes sense long-term).


Nope, when corporate overlords sell your data they say it in their terms of use and privacy policies because no one is that stupid. If Discord says they're not selling that data, they're not selling that data. The day they'll start doing it, they'll put it in their policy.

You're making up a reality that doesn't exist in your head and claiming it's the truth.

You have in your head examples like Facebook or Spotify. Spoiler: they tell you exactly with what sauce you're gonna be eaten.


Discord had a scandal not too long ago where pictures of people and their passports were stolen. There they said that they delete those files immediately after processing them. That proves your statement false.


You got that fact from my own comment a few replies above this one.

https://www.bbc.com/news/articles/c8jmzd972leo

It was a 3rd party


Are you saying that corporations respect the letter of the law when it comes to privacy? They don't, they can just drop some lunch money when caught red-handed [0]

Even when they write in their privacy policy that they collect private data and sell it to third parties, unlawfully, that does not make it any better. Cambridge Analytica was operating with respect to Facebook policies. Would you say that people who took an IQ test and were manipulated into voting pro-Brexit were well aware of the sauce they were eaten with?

Discord is unfortunately no different, they're profit-driven and likely to sell user data already or in the future, because it's incredibly easy and profitable to do so. Why would a chat app try and predict its users' gender? [1]

[0] https://en.wikipedia.org/wiki/GDPR_fines_and_notices

[1] https://x.com/DiscordPreviews/status/1790065494432608432

