
> These workers were subjected to the most vile and graphic depictions of sexual abuse content imaginable for next to nothing.

Let’s imagine an alternative timeline where OpenAI pays $200/hr for this service.

At that price point, is it still exploitative?

Is it ever appropriate or ethical to ask humans to voluntarily subject themselves to traumatic experiences in exchange for compensation?

I’m still processing my own thoughts on this topic so I’m curious to learn about other viewpoints.



A couple of years ago I seriously considered joining the federal police department in charge of dealing with internet crime here in Germany, so I've thought quite a bit about this topic.

Basically, it is a job that needs to be done in society, but one that is torturous and can leave you with long-term or permanent problems. In essence, though, it is not fundamentally different from the way a coal miner jeopardizes their physical health, just with mental health instead. This risk and possible damage should be rewarded with a higher wage, and adequate measures should be put in place to minimize the harm, e.g. in the case of content moderation, access to therapy and exposing employees to traumatic content only in short intervals.

This is of course how things should be; in reality, coal miners' working environments only reached a decent level through unions and a long fight for better rights. Content moderators are not paid well anywhere either.


As a war veteran I can confidently say most veterans would probably not be traumatized by work like this. We've all seen much worse, and the things that became problematic memories for me had little to do with reading/hearing/seeing the worst humanity has to offer in terms of violence and the fucked up shit people do to each other. The stuff that really sticks and eats at you is usually stuff that happened to or close to the individual, or something that happened as the result of actions taken by the individual or their immediate group.

Probably the most important measure a company could take to prevent lasting harm from this kind of work would be to spread it around a whole lot more than just 36 people. The real risk of long-term impact here would probably come from persistent exposure to it all day long. Most people can handle reading or seeing some graphic stuff with proper mental preparation, but seeing nothing but that day in and day out would quickly wear you down.

>Is it ever appropriate or ethical to ask humans to voluntarily subject themselves to traumatic experiences in exchange for compensation?

Probably not. Yet people still voluntarily sign up for military service around the world by the millions, and they do so for a bunch of personal, family, idealistic, cultural, and societal reasons that are hard to reduce to a few easy-to-argue points, as a lot of people online try to do with stuff like this.

Personally, I think it's admirable to hold the ideal that we should never have to offer jobs like this. By the same ideal, we should never have to offer jobs that involve going to war, cleaning up hazardous materials, dealing with explosives, working around heavy fast-moving machinery, cleaning sewers, or a myriad of other terrible experiences either; but we're probably not in a position to make those better choices just yet. Until then, people are willing to do these things for a dozen different reasons per person, only two of which are the pay and the support they get from the employer.


> At that price point is it still exploitive?

My belief: When you have a legitimate choice of your place of employment, and all of the opportunities will let you live your definition of a comfortable life, that's when it's no longer exploitative.

So many times - especially for poor folks - there is no meaningful choice. "I do this or I don't have a place to live." "I hold two jobs so I can feed my child."


> My belief: When you have a legitimate choice of your place of employment, and all of the opportunities will let you live your definition of a comfortable life, that's when it's no longer exploitative.

By that argument, paying $200 an hour would be much more exploitative. That would be like landing a job paying $6 million a year in the US: it would be insane to quit such an opportunity, especially since every other opportunity is basically poverty (and not even just by comparison). Following this logic, it would be 'graceful' to pay only $2/h, since that puts the job on par with the other opportunities and therefore makes it not exploitative (while still paying reasonably well).

Effectively, it seems like you're calling OpenAI exploitative based on factors they can't change.


Huh?

> paying $200 an hour would be much more exploitative

I didn't say that - I only said that it depends on choice. Does the employee have a choice if they have a $60k option (the average individual income in the US) and a $6M option? Yes. Are they de facto forced into taking the $6M one? No.

I know many people who didn't take higher paying jobs, or left such jobs, because they knew the high paying job was going to be miserable.

There was even an article about one such individual just the other day here on HN: Quitting the Rat Race

"I’m currently working at a top tier investment bank as a software engineer. I’m an insignificant cog in a machine that skims the cream from the milk. I’m earning the most money I’ve ever made and yet I’m the least fulfilled I’ve ever been."


Maybe I should have steelmanned your position a bit more. That being said, the grand³parent said:

> This whole thing makes OpenAI seem evil if you ask me. Just another company exploiting people who are already being exploited.

In that context, there is no way for OpenAI not to be evil, since they are (by definition) only one option in the market. In fact, taking your argument to the extreme, there is no way to offer jobs in Kenya at all: the first company to offer jobs would either be exploitative by paying minimum wage or exploitative by being the only real option. By that logic, paying a higher wage just worsens the situation, as it makes the alternatives even less feasible.

That being said, I do get where you are coming from. But it is not a good point on which to accuse OpenAI, as they are making the situation better by offering options at a rate that seems reasonable by Kenyan standards, and they really don't have any other option[0].

[0] Except maybe paying Americans a lot of money for the job, but I find it morally hard to argue that they should pay US citizens a lot of money instead of paying Kenyans (comparatively) good money, even leaving aside economic feasibility.


If a huge portion of Kenya can survive on less than $2/hr (either from local companies or OpenAI), how do they not have a legitimate choice?



