Hacker News | alicesreflexion's comments

Feelings are useful for figuring out what your goals are.

They can be money, they can be something more vague like recognition, or truly nebulous like "a sense of human connection," but you've got to have goals. Otherwise you're just a child stumbling in the dark and yelling out.

Reading the post, I don't know what the author's goals are, and I get the impression they don't either.

It sounds like they weren't sure about their goals to begin with, didn't do anything to achieve them, felt robbed because other people's goals were misaligned with theirs, and finally accepted that they can't change the past. To which:

Duh.


Kagi has been fantastic to me. The search results seem better. The option to search by EDU or discussion or by "literally anything that doesn't have 5000 trackers" with a single click is indispensable. Showing results from the internet archive and old blogs is a nice plus. Shoving all the listicles into their own compact section is great.

If this is the quality I can expect going forward, I have zero problem paying the $10/mo down the line.


This doesn't matter until someone tries suing them for it, right?

And as I understand it, you don't really have a case without evidence that the hiring algorithm is discriminating against people with disabilities.

How would an individual even begin to gather that evidence?


The process of gathering evidence after the suit has started is called discovery.

There are three major kinds of evidence that would be useful here. Most useful but least likely: email inside the company in which someone says "make sure that this doesn't select too many people with disabilities" or "it's fine that the system isn't selecting people with disabilities, carry on".

Useful and very likely: prima facie evidence that the software doesn't make necessary reasonable accommodations - a video captcha without an audio alternative, things like that.

Fairly useful and of moderate likelihood: statistical evidence that whatever the company said or did, it has the effect of unfairly rejecting applicants with disabilities.
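That statistical prong is often screened with the EEOC's "four-fifths rule": compare each group's selection rate against the most-favored group's, and treat a ratio below 0.8 as adverse impact worth investigating. A minimal sketch, with hypothetical applicant counts invented for illustration:

```python
# Four-fifths rule screen for disparate impact.
# All counts below are hypothetical illustration data.

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

def impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the reference (highest) rate.
    Below 0.8 is conventionally flagged as evidence of adverse impact."""
    return rate_group / rate_reference

# Hypothetical: 100 applicants without disabilities, 48 advanced;
# 50 applicants with disabilities, 12 advanced.
rate_ref = selection_rate(48, 100)   # 0.48
rate_dis = selection_rate(12, 50)    # 0.24

ratio = impact_ratio(rate_dis, rate_ref)
print(f"impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "within guideline")
```

This only flags a disparity; it doesn't by itself establish intent, which is why it sits in the "fairly useful" tier above.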


And one could go a step further: run the software itself and show that it discriminates. One doesn't just have to look at past performance of the software; it can be fed inputs tailored to bring out discriminatory performance. In this way software is more dangerous to the defendant than manual hiring practices; you can't do the same thing to an employee making hiring decisions.
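The tailored-inputs idea above is essentially a paired audit: feed the system matched inputs that differ only in one protected attribute and compare the outcomes. A sketch, where `score_resume` is a deliberately biased stand-in scorer invented purely to illustrate the technique, not any real system:

```python
# Paired-input audit: matched inputs differing only in one attribute.
# `score_resume` is a hypothetical, deliberately biased stand-in model.

def score_resume(resume: dict) -> int:
    score = resume["years_experience"] * 10
    if resume["mentions_disability"]:  # the discriminatory rule we want to surface
        score -= 25
    return score

def audit_pairs(pairs, scorer):
    """Return the score gap (baseline minus variant) for each matched pair."""
    return [scorer(a) - scorer(b) for a, b in pairs]

# Identical resumes except for the disability field.
pairs = [
    ({"years_experience": y, "mentions_disability": False},
     {"years_experience": y, "mentions_disability": True})
    for y in (2, 5, 10)
]
gaps = audit_pairs(pairs, score_resume)
print(gaps)  # a consistent nonzero gap is direct evidence of disparate treatment
```

The point is exactly the one made above: you can run this probe thousands of times against software, which you could never do to a human hiring manager.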


How would you make sure that the supplied version has the same weights as the production version? And wouldn't the weights and architecture be refined over time anyway?


Perjury laws. Once a judge has ordered you to produce the same AI, you either produce the same AI or truthfully explain that you can't. Any deviation from that, and everyone complicit is risking jail time, not just money.

"This is the June 2020 version, this is the current version, we have no backups in between" is acceptable if true. Destroying or omitting an existing version is not.


Not that not having backups is something you can sue the company for as an investor. If you say "we have the June 2020 version but not the July one you asked for," you are fine (it is reasonable to keep daily backups for a month, monthly backups for a year, and then yearly backups). Though even then I might be able to sue you for not having version control of the code.


True, but if you really never had it, that's money, not jail time.


If a non-hired employee brings a criminal action, this may matter.

For a civil action, the burden of proof is "preponderance of evidence," which is a much lower standard than "beyond a reasonable doubt." "Maybe the weights are different now" is a reasonable doubt, but in a civil case the plaintiff could respond "Can the defendant prove the weights are different? For that matter, can the defendant even explain to this court how this machine works? How can the defendant know this machine doesn't just dress up discrimination with numbers?" And then it's a bad day for the defendant to the tune of a pile of money if they don't understand the machine they use.


Don't most production NNs or DNNs optimize to a maximum?

Seems like the behavior becomes predictable, and then you have to retrain if you see suboptimal results.


> How would you make sure that the supplied version has the same weights as the production version?

You just run the same software (with the same state database, if applicable).

Oh wait, I forgot, nobody knows or cares what software they're running. As long as the website is pretty and we can outsource the sysop burden, well then, who needs representative testing or the ability to audit?
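For the weights question specifically, a byte-level comparison settles it: if the supplied artifact hashes the same as the production one, it is the same model, and any retraining or fine-tuning changes the digest. A sketch using small stand-in files in place of real weight files:

```python
# Verify a supplied model is byte-identical to production by comparing
# cryptographic hashes of the weight files. The files created below are
# tiny stand-ins for real multi-gigabyte weight artifacts.
import hashlib
import os
import tempfile

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    prod = os.path.join(d, "prod.bin")
    supplied = os.path.join(d, "supplied.bin")
    tweaked = os.path.join(d, "tweaked.bin")
    with open(prod, "wb") as f:
        f.write(b"\x00" * 1024)
    with open(supplied, "wb") as f:
        f.write(b"\x00" * 1024)          # identical bytes
    with open(tweaked, "wb") as f:
        f.write(b"\x00" * 1023 + b"\x01")  # one byte changed

    same = file_sha256(prod) == file_sha256(supplied)
    differ = file_sha256(prod) != file_sha256(tweaked)

print(same, differ)
```

This only proves the artifacts match each other; tying the hash to what was actually serving production traffic on a given date still requires the kind of records and testimony discussed above.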


I am not sure, but if I remember correctly the employer must prove they are not discriminating. And just because they are using AI, they are not immune to litigation.


How can the employer prove a negative?

At most, I imagine, the plaintiff is allowed to do discovery and then has to affirmatively prove discrimination based on what it turns up.


If you read the document again (?) maybe you'll see it's not about proving a negative. Instead, it's a standard of due care. Did you check whether using some particular tool illegally discriminates and document that consideration? From the document itself:

"Clarifies that, when designing or choosing technological tools, employers must consider how their tools could impact different disabilities;

Explains employers’ obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation;"


If it's a civil case, it's just the preponderance of the evidence. The jury just has to decide who they think is more likely to be correct.


> I am not sure, but if I remember correctly employer must prove they are not discriminating.

That seems backwards, at least in the US.


This is what that demographic survey at the end of job applications is for. It can reveal changes in hiring trends, especially in the demographics of who doesn't get hired. I don't know how well it works in practice.


I am a person, not a statistic. I always decline to answer these surveys; I encourage others to do the same.


Those are for persuading people who do see you as a statistic. You can unilaterally disarm if you like, but they're going to keep discriminating until they see data that proves they're discriminating. Far too few people are persuaded by other means.


I also do this. But given the context of this post ("AI" models filtering resumes prior to ever getting in front of a human), maybe "decline to answer" comes with a hidden negative score adjustment that can't be (legally) challenged.

I think the Americans with Disabilities Act (ADA) requires notification. (i.e. I need to talk to HR/boss/whoever about any limitations and reasonable accommodations.) If I am correct, not-answering the question "Do you require accommodations according to the ADA? []yes []no []prefer not to answer" can legally come with a penalty, and the linked DoJ reasoning wouldn't stop it.


"Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;"

"Without proper safeguards, workers with disabilities may be “screened out” from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and"

"If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams."

This makes it sound like the employer needs to ensure their AI allows for reasonable accommodations. And if an AI can assume reasonable accommodations are in place, what reason would an employer ever have to configure it to assume the absence of accommodations they are legally required to supply?


I’m trying to but my employer has said they will use “observer-identified” info to fill it in for me. I find it ridiculous that I can’t object to having someone guess my race and report that to the government.


That sounds broken. It's supposed to be voluntary.

PDF: https://www.eeoc.gov/sites/default/files/migrated_files/fede...

>> "Completion of this form is voluntary. No individual personnel selections are made based on this information. There will be no impact on your application if you choose not to answer any of these questions"

Your employer shouldn't even be able to know whether or not you filled it out.


My experience is the reporting on current employees, which I guess is not voluntary. It's not very clear though:

"Self-identification is the preferred method of identifying race/ethnicity information necessary for the EEO-1 Component 1 Report. Employers are required to attempt to allow employees to use self-identification to complete the EEO-1 Component 1 Report. However, if employees decline to self-identify their race/ethnicity, employment records or observer identification may be used. Where records are maintained, it is recommended that they be kept separately from the employee’s basic personnel file or other records available to those responsible for personnel decisions."

From: https://eeocdata.org/pdfs/201%20How%20to%20get%20Ready%20to%...


These days, disparate impact is taken as evidence of discrimination, so it's easy to find "discrimination".


What's the difference? Discrimination is an effect more than an intent. Most people are decent and well-intentioned and don't mean to discriminate, but it still happens. If there's a disparate impact, what do you imagine causes that if not discrimination? Remembering that we all have implicit bias and it doesn't make you a mustache-twirling villain.


>If there's a disparate impact, what do you imagine causes that if not discrimination?

20+ years of environmental differences, especially culture? The disabilities themselves? Genes? Nothing about human nature suggests that all demographics are equally competent in all fields, regardless of whether you group people by race, gender, political preferences, geography, religion, etc. To believe otherwise is fundamentally unscientific, though it's socially unacceptable to acknowledge this truth.

>Remembering that we all have implicit bias

This doesn't tell you anything about the direction of this bias, but the zeitgeist is such that it is nearly always assumed to go in one direction, and that's deeply problematic. It's an overcorrection that looks an awful lot like institutional discrimination.

>Remembering that we all have implicit bias and it doesn't make you a mustache-twirling villain.

Except that if you push back against unilateral accusations of bias while belonging to one, and only one, specific demographic, you are effectively treated like a mustache-twirling villain. No one is openly complaining about "too much diversity" and keeping their job at the moment. That's bias.


There is no scientific literature which confirms that any specific demographic quality determines an individual's capability at any job or task.

What does exist, at best, shows mild correlation over large populations, but nothing binary or deterministic at an individual level.

To wit: even if your demographic group, on average, is slightly more or less successful in a specific metric, there is no scientific basis for individualized discrimination.

It's not "socially unacceptable to acknowledge this truth"; it's socially unacceptable to pretend discrimination is justified.


>There is no scientific literature which confirms that any specific demographic quality determines an individual's capability at any job or task

There absolutely is a mountain of research which unambiguously implies that different demographics are better or worse suited for certain industries. A trivial example would be average female vs male performance in physically demanding roles.

Now what is indeed missing is the research which takes the mountain of data and actually dares to draw these conclusions. Because the subject has been taboo for some 30-60 years.

>To whit, even if your demographic group, on average, is slightly more or less successful in a specific metric, there is no scientific basis for individualized discrimination

We are not discussing individual discrimination, I am explaining to you that statistically significant differences in demographic representation are extremely weak evidence for discrimination. Or are you trying to suggest that the NFL, NBA, etc are discriminating against non-blacks?

>It's "not socially unacceptable to acknowledge this truth", it's socially unacceptable to pretend discrimination is justified

See above, and I'm not sure if you're being dishonest by insinuating that I'm trying to justify discrimination or if you genuinely missed my point. Because that's how deeply rooted this completely unscientific blank slate bias is in western society.

Genes and culture influence behavior, choices, and outcomes. Pretending otherwise and forcing corrective discrimination for your pet minority is anti-meritocratic and is damaging our institutions. Evidenced by the insistence by politicized scientists that these differences are minor.

A single standard deviation difference in mean IQ between two demographics would neatly and obviously explain "lack of representation" among high paying white collar jobs; I just can't write a paper about it if I'm a professional researcher or I'll get the James Watson treatment for effectively stating that 2+2=4. This isn't science, our institutions have been thoroughly corrupted by such ideological dogma.


The usual view of meritocracy is this sports-like idea of wanting to see each person's inherent capability shine though.

Instead, we could give everyone the absolute best tech and social support, and only then evaluate performance, not of individuals, but of individuals+tech, the same way we evaluate a pilot's vision with their glasses on.


Please link any study which shows a deterministic property and not broad averages.


Broad averages of what? Difference in muscle characteristics and bone structure between males and females? Multiple consistent studies showing wide variance in average IQ among various demographics? The strong correlation between IQ and all manner of life outcomes, including technical achievements?

Or are you asking me to find a study which shows which specific cultural differences make large swaths of people more likely to, say, pursue sports and music versus academic achievement? Or invest in their children?

Again, the evidence is ubiquitous, overwhelming, and unambiguous. Synthesizing it into a paper would get a researcher fired in the current climate, if they could even find funding or a willing publisher; not because it would be factually incorrect, but because the politicized academic culture would find a title like "The Influence of Ghetto Black Cultural Norms on Professional Achievement" unpalatable if the paper didn't bend over backwards to blame "socioeconomic factors". Which is ironic because culture is the socio in socioeconomics, yet I would actually challenge YOU to find a single modern paper which examines negative cultural adaptations in any nonwhite first world group.

Further, my argument has been dishonestly framed (as is typical) as a false dichotomy: I'm not arguing that discrimination doesn't exist, but the opposition viciously insists that all differences among groups are too minor to make a difference in a meritocracy, and that anyone who questions otherwise is a bigot.


I did not call you a bigot. I never made any assumptions or aspersions as to your personal beliefs.

I am pointing out that, despite your claim that your viewpoint is rooted in science, you have no scientific basis for your belief beyond your own synthesis of facts which you consider "ubiquitous, overwhelming, and unambiguous".

You have a belief unsupported by scientific literature. If you want to claim that the reason it is unsupported is because of a vast cultural conspiracy against the type of research which would prove your point, you're free to do so.


>You have a belief unsupported by scientific literature

I have repeatedly explained to you that the belief is indeed supported by a wealth of indirect scientific literature.

>You have a belief unsupported by scientific literature. If you want to claim that the reason it is unsupported is because of a vast cultural conspiracy against the type of research which would prove your point, you're free to do so.

Calling it a conspiracy theory is a dishonest deflection. It is not a conspiracy, it is a deeply rooted institutional bias. But I can play this game too: can you show me research which rigorously proves that genes and culture have negligible influence on social outcomes? Surely if this is such settled science, it will be easy to justify, right?

Except I bet you won't find any papers examining the genetic and/or cultural influences on professional success in various industries. It's like selective reporting, lying through omission with selective research instead.

But you will easily find a wealth of unfalsifiable and irreproducible grievance studies papers which completely sidestep genes and culture while dredging for their predetermined conclusions regarding the existence of discrimination. And because the socioeconomic factors of genes and culture are a forbidden topic, you end up with the preposterous implication that all discrepancies in representation must be the result of discrimination, as in the post that spawned this thread.


>If there's a disparate impact, what do you imagine causes that if not discrimination?

Disparate impact is often caused by discrimination upstream in the pipeline, not discrimination on the part of the hiring manager. Suppose that due to systematic discrimination, demographic X is much more likely than demographic Y to grow up malnourished in a house filled with lead paint. The corresponding cognitive decline amongst X people would mean they are less likely than Y people to succeed in (or even attend) elementary school, high school, college, and thus the workplace.

A far smaller fraction of X people will therefore ultimately be qualified for a job than Y people. This isn’t due to any discrimination on the part of the hiring manager.


The reason these two collide so often in American law is that the two historically overlap.

When a generation of Americans force all the people of one race to live in "the bad part of town" and refuse to do business with them in any other context, that's obviously discrimination. If a generation later, a bank looks at its numbers and decides borrowers from a particular zip code are higher risk (because historically their businesses were hit with periodic boycotts by the people who penned them in there, or big-money business simply refused to trade with them because they were the wrong skin color), draws a big red circle around their neighborhood on a map, and writes "Add 2 points to the cost" on that map... Discrimination or disparate impact? Those borrowers really are riskier according to the bank's numbers. But red-lining is illegal, and if 80% of that zip code is also Hispanic... Uh oh. Now the bank has to prove they don't just refuse Hispanic business.

And the problem with relying on ML to make these decisions is that ML is a correlation engine, not a human being with an understanding of nuance and historical context. If it finds that correlation organically (but lacks the context that, for example, maybe people in that neighborhood repay loans less often because their businesses fold because the other races in the neighborhood boycott those businesses for being "not our kind of people") and starts implementing de-facto red-lining, courts aren't going to be sympathetic to the argument "But the machine told us to discriminate!"


Quite apart from the fact that implicit bias doesn't replicate: if you have 80% male developers, it is not because you are discriminating against women; it is because the pool you hire from is mostly men.

If you refuse to hire a woman because she is a woman, you are discriminating. Fortunately, that is historically rare today.


This depends on the item. Items that use only a UPC do get commingled; products that use an Amazon-specific barcode like an X00 do not, and this is required for some types of products (e.g. food, or anything with an expiration date).

As I understand it, anything used or returned will also get an LPN barcode, which tracks that specific unit.

