Hacker News | bzbz's comments

Exactly, this form of internet bullying always detracts from the point its author is trying to make.


That is a crazy low rate. Which gym is it?


What part of it seems like a criticism to you?


I don't know about GP's view, but to me this does seem like a criticism:

"Women have a long history of being depicted as technical objects in computing... gendered assumptions about the characters of Alice and Bob have been read into their fictional lives. Images of Alice, Bob, and Eve depict the three as in love triangles, with Alice and Eve alternately portrayed as disrupting one another’s blissful domestic life with Bob. Visual depictions of Alice, Bob, Eve, and others used in university classrooms and elsewhere have replicated and reified the gendered assumptions read onto Alice and Bob and their cryptographic family, making it clear that Bob is the subject of communications with others, who serve as objects, and are often secondary players to his experience of information exchange. Thus, while Rivest, Shamir, and Adleman used the names “Alice” and “Bob” for a sender and receiver as a writing tool, others have adapted Alice and Bob, in predictable, culturally-specific ways that have important consequences for subsequent, gendered experiences of cryptology."


Does it seem like a criticism of Rivest, Shamir and Adleman using the name Alice? Or of others projecting gendered assumptions onto Alice?


The previous paragraph criticized Ivan Sutherland for so much as drawing a girl's face with Sketchpad, so in context this does seem to be critical of RSA.


For anyone who's wondering, their estimation method works like so:

1. Assume a range of values

2. Assume a fair probability function for sampling over the range of values

The estimated size is the %-of-hits * the total range of values.
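The two steps above can be sketched in a few lines. This is a toy illustration, not their actual code: the alphabet, ID length, and the `id_exists` probe are assumptions (real YouTube IDs are 11 characters over a 64-symbol alphabet, but the checker here is a stand-in you'd replace with an HTTP probe).

```python
import random
import string

# Hypothetical ID space: 11 characters from a 64-symbol alphabet
# (the real alphabet is A-Za-z0-9_-).
ALPHABET = string.ascii_letters + string.digits + "-_"
ID_SPACE = 64 ** 11

def estimate_total(id_exists, samples=100_000):
    """Estimate the number of valid IDs by uniform random sampling.

    id_exists is a callback (in practice, an HTTP probe) returning
    True if the sampled ID corresponds to a real video.
    """
    hits = 0
    for _ in range(samples):
        candidate = "".join(random.choices(ALPHABET, k=11))
        if id_exists(candidate):
            hits += 1
    # %-of-hits times the total range of values:
    return (hits / samples) * ID_SPACE
```

The estimate converges regardless of where the valid IDs sit in the space, as long as the sampling itself is uniform.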


I skimmed the article, but if so, that's a lot of assumptions.

1. So let's say that possible range of values is true (10 characters of specific range + 1). That would represent one big circle of possible area where videos might be.

2. Distribution of identifiers (valid videos) is everything. If YouTube applied some constraints (or skewing) to IDs that we don't know about, then the actually existing video IDs might be a small(er) circle within that bigger circle of possibilities rather than equally dispersed throughout, or there might be clumping or whatever... So you'd need to sample the space by throwing darts in a way that reveals the silhouette of their skew, or shows whether it's random-ish, by, I don't know, let's say a Poisson distribution.

Only then could one estimate the size. So is this what they're doing?

Also... has anyone bothered to, you know, ask YouTube?


No, the distribution doesn't matter at all. I gave an extreme example here: https://news.ycombinator.com/item?id=38742735


I see what you did there. So basically the overlap proportion (or hit proportion) would be the overlapping hits divided by the samples run, and the estimated total would be that proportion multiplied by the total space of possibilities. That would work.


Video IDs are generated by hashing a secret identifier, so they should be uniformly distributed.


In your example, the amino acids order is sufficient to directly model the result: the sequence of amino acids can directly generate the protein, which is either valid or invalid. All variables are provided within the data.

In the original example, we are predicting the weather using the previous day's weather. We may be able to model whatever correlation exists within the data, but that is not the same as accurately predicting results if the real-world weather function is determined by the weather of surrounding locations, the time of year, and the moon phase. If our model does not have that data, and the data is essential to the result, how can it model accurately?

In other words: “Garbage in, garbage out”. Good luck modeling an n-th degree polynomial function, given a fraction of the variables to train on.
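A toy illustration of the point, with a made-up target function: if the true process depends on two variables but the model is only given one, the missing variable's contribution becomes irreducible error, no matter how well you fit.

```python
import random

random.seed(0)

# Hypothetical ground truth: the outcome depends on TWO inputs
# (stand-ins for "yesterday's weather" and "moon phase").
n = 1000
x1 = [random.uniform(-1, 1) for _ in range(n)]
x2 = [random.uniform(-1, 1) for _ in range(n)]
y = [3 * a + 5 * b for a, b in zip(x1, x2)]

def rmse(pred, actual):
    return (sum((p - t) ** 2 for p, t in zip(pred, actual)) / len(actual)) ** 0.5

# Least squares using BOTH variables (solving the 2x2 normal equations):
# recovers the true coefficients, so the error is essentially zero.
a11 = sum(a * a for a in x1)
a12 = sum(a * b for a, b in zip(x1, x2))
a22 = sum(b * b for b in x2)
b1 = sum(a * t for a, t in zip(x1, y))
b2 = sum(b * t for b, t in zip(x2, y))
det = a11 * a22 - a12 * a12
c1 = (a22 * b1 - a12 * b2) / det
c2 = (a11 * b2 - a12 * b1) / det
err_full = rmse([c1 * a + c2 * b for a, b in zip(x1, x2)], y)

# Least squares using x1 ONLY: x2's influence is irreducible error
# (roughly 5 * std(x2) ~ 2.9, regardless of sample size).
c = b1 / a11
err_partial = rmse([c * a for a in x1], y)
```

The full model fits exactly; the partial model's error floor is set by the variable it never saw, not by how cleverly it fits what it did see.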


>All variables are provided within the data.

Electrostatic protein interactions, hydrophobic interactions, organic chemistry, etc.

All variables are in fact not provided within the data. Protein creation is not just _poof_, proteins. There are steps, interactions, and processes. Yet you don't need to supply any of that to get a model that accurately predicts proteins. That is the main point here, not that you can predict anything with any data.


> This is not the same as accurately predicting results, if the real-world weather function is determined by the weather of surrounding locations, time of year, and moon phase.

How many have the "human intelligence" to do this? Especially more accurately than a computer (and without using any themselves) training on the same inputs and outputs?


> For some reason […]

Likely because Apple themselves provide an emulator that accommodates most developers’ needs.


So sign your UUIDs and combine them into “$UUID:$HASH” strings for the same benefit. Or use a more structured JWT-like payload that still verifies auth against the DB (as opposed to carrying authorization within the token).

No need to re-envision the rest of the auth flow if you just want to add hashing to reduce DB load.
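A minimal sketch of the “$UUID:$HASH” idea using an HMAC. The secret key and function names are hypothetical; the point is that forged tokens fail the signature check before any DB lookup happens, while valid tokens still get their authorization resolved server-side.

```python
import hashlib
import hmac
import uuid

# Hypothetical server-side signing key; in practice this lives in config.
SECRET = b"server-side-signing-key"

def issue_token() -> str:
    """Mint a "$UUID:$HASH" token. The UUID is the opaque session key;
    the HMAC lets the server reject forgeries without touching the DB."""
    uid = str(uuid.uuid4())
    sig = hmac.new(SECRET, uid.encode(), hashlib.sha256).hexdigest()
    return f"{uid}:{sig}"

def verify_token(token: str):
    """Return the UUID if the signature checks out, else None.
    Only tokens that pass this check proceed to the DB auth lookup."""
    uid, _, sig = token.partition(":")
    expected = hmac.new(SECRET, uid.encode(), hashlib.sha256).hexdigest()
    return uid if hmac.compare_digest(sig, expected) else None
```

Unlike a full JWT, the token carries no claims, so authorization stays in the database; the signature only filters out junk before it costs you a query.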


So... recreate JWT?


> An ambassador service can be thought of as an out-of-process proxy that is co-located with the client.

> This pattern can be useful for offloading common client connectivity tasks such as monitoring, logging, routing, security (such as TLS), and resiliency patterns in a language agnostic way. It is often used with legacy applications, or other applications that are difficult to modify, in order to extend their networking capabilities. It can also enable a specialized team to implement those features.

Not surprised this is a Microsoft page, given their legacy of long lifetime support for their software products.

It’s not for microservices, but rather for software maintenance of systems that other vendors would consider past EOL.


It's similar to a thick client, which is what Google does by eschewing language agnosticism. It's a reasonable approach, really: thin clients of course work, but if you have to provide them across popular languages anyway, making them thicker adds value.


Unless you view systems that consist of microservices as "applications that are difficult to modify".


If anything, an obfuscated microservice-based application is easier to understand than a monolithic version: network data transfer is easier for observers to understand than memory modification.


This could be argued, but obfuscated apps are a land of their own.

(You could also argue that obfuscated monolithic programs can be easier to reverse engineer, breakpoint, replay, emulate, time-travel-debug, trace, etc., because you can completely control them in your test bench and aren't working against a hostile distributed system.)


What is this comment even trying to say?


By the time they make a movie about OpenAI, there will be no more human actors.


And OpenAI will be run by the Q*LLM.


“You should leave YC to focus on what you want” is not “you’re fired”

It’s like they’re trying to make a TV drama out of nothing.


This number includes taxes, benefits, etc, not just raw salary.

Notably Signal employees do not get equity, so the salary must be higher to remain competitive.

Signal is probably the hardest class of product to build. Name an optimization/distributed systems problem, they probably have it. And quite literally, a Signal bug could jeopardize an activist/journalist’s life.

So for a <$200k salary and no equity, how many world-class engineers do you think you could hire?

I simply wouldn’t trust the product, if it had mediocre engineers.

