Sure. But if European countries can insist on globally de-indexing truthful information because they feel its continued propagation is harmful, that seriously undercuts any moral high ground available when China tries to do the same.
There's always a reason for expanding government bureaucracies. Some worthy problem, which the government plausibly could solve if it had just a little bit more power to intervene.
That doesn't change the direct consequence of the intervention, that there's now yet another area where the government sits there telling everyone what to do. If we believe that government ought to be limited, at some point we have to balance against that.
In my country, EU privacy laws applied by our National Data Protection Commission have prevented and pushed back against multiple governmental plans (cameras on the street, sensitive questions in the census, mandatory reporting of certain personal banking data, etc). Is that an increase or a decrease in the reach and size of government?
Surely you understand how it creates bad incentives, if feedback about individual productivity can be delivered bluntly but feedback about prioritization can't be given at all.
Fingers crossed for sure, but there's such a thing as growing too fast for your own good. GitHub famously stopped being profitable the year after it raised a $250M round.
I'm sure it's a nightmare for a professional journalist. And I don't mean that as an insult - it'd be a nightmare for me too.
But you know what's more of a nightmare? Driving around to every retail outlet in a 30 mile radius, trying to convince them that you can be the most submissive and obedient out of the 200 applicants. It's hard to begrudge the gig economy for providing an alternative to that.
I've been running my own tech company for over a decade now, and prior to that my job was everything from cleaning toilets to changing lightbulbs. On paper it sounds terrible, but I was very happy.
The work environment matters a lot. The other people you work with matter. How you are managed matters. I think if gig software designers thought a lot about this, they would see a lot of improvement in worker retention and satisfaction.
This, and very well written! A lot of the complaints from the article are easily solvable issues. I mean, really? I need to tap at an app for an hour to get a block? Just run a lottery. And what do you mean I have to catch people around? Change the rules so that I only need to make a best effort (and document it), and that's it. And give customers your own numbers so you can forward calls only during working hours (though that makes it look even more like the workers are your employees). I won't even comment on the trouble with scanning codes.
Note that while this would make the job a lot more enjoyable for workers, I still think everyone should enjoy the same basic benefits, no matter what kind of employment relationship they are in. But that could also be solved in other ways (basic income, for example).
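The "just run a lottery" idea above could be as simple as the following sketch. This is a hypothetical illustration, not any real platform's API: everyone who registers interest in a work block during a window goes into a pool, and winners are drawn at random instead of rewarding whoever taps fastest.

```python
import random

def allocate_block(interested_workers, slots, seed=None):
    """Randomly award `slots` openings among all workers who registered
    interest, instead of first-come-first-served tapping."""
    rng = random.Random(seed)  # seed only for reproducible demos
    if len(interested_workers) <= slots:
        return list(interested_workers)  # everyone fits, no lottery needed
    return rng.sample(interested_workers, slots)

# Five workers registered interest during the window; two slots available.
winners = allocate_block(["w1", "w2", "w3", "w4", "w5"], slots=2, seed=7)
```

The point of the random draw is that nobody gains anything by hammering the app for an hour; registering interest once is as good as it gets.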
A momentary feeling that you're your own boss (which isn't even accurate, since you're micromanaged by the app) doesn't really compensate for the lack of benefits or protections.
I don't think you have to do a lot of convincing to get minimum-wage jobs these days. Benefits are virtually non-existent for most of them. The gig economy just gives you flexibility, while a traditional job gives you predictability and some hope of climbing the ladder. This journalist would have felt miserable in virtually any minimum-wage job out there.
People do at least own their own schedule which is a big difference from much of retail. That said, as this article notes, your cost to do these gigs is often higher than people think it will be (or even realize it is).
> People do at least own their own schedule which is a big difference from much of retail.
Right, and this can easily be underappreciated by those of us working in environments where it's hardly ever a problem if you need to take a few hours to deal with personal issues.
Certainly, but that doesn't mean the schema has to be a strict validation encoded into your storage format. It's a perfectly well-defined programming model to say "well, I'm reading query X with schema Y, and if some rows don't match Y give me nulls instead".
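A minimal sketch of that schema-on-read idea, with hypothetical field names: the store keeps whatever it was given, and the declared schema is applied only when reading, with `None` substituted wherever a record doesn't match.

```python
# Hypothetical schema: field name -> expected Python type.
SCHEMA = {"user_id": int, "email": str, "age": int}

def read_with_schema(record, schema):
    """Project a raw record onto a schema; missing or mistyped
    fields come back as None instead of raising."""
    out = {}
    for field, expected_type in schema.items():
        value = record.get(field)
        out[field] = value if isinstance(value, expected_type) else None
    return out

# Rows as they might sit in a schemaless store -- not all of them
# conform to the schema we're reading with.
rows = [
    {"user_id": 1, "email": "a@example.com", "age": 34},
    {"user_id": 2, "email": "b@example.com", "age": "unknown"},  # wrong type
    {"user_id": 3},                                              # missing fields
]

results = [read_with_schema(r, SCHEMA) for r in rows]
```

Validation lives in the query, not in the storage format, which is the whole point of the programming model described above.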
> "well, I'm reading query X with schema Y, and if some rows don't match Y give me nulls instead"
Seems like a recipe for disaster to me, but well... A database isn't a "storage format". It's most often the single source of truth for a set of information.
Not being fully sure what data to expect from that source of truth, and yet being able to query it, is really dangerous. What if you start updating that data after having nullified the things you didn't understand?
Schemaless databases are good for scenarios where the database isn't a source of truth. If you have a table full of e.g. per-second heartbeats from a bunch of deployed services, there's no fundamental underlying truth anyone's trying to gather from it, and you can't afford to run a full schema migration every time someone adds a new metric.
I recognize some people do try to use schemaless databases in the way you're describing, and I agree that's weird and dangerous.
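The heartbeat scenario above can be sketched like this. Everything here is illustrative (an in-memory list standing in for a schemaless document store, made-up metric names): services append free-form JSON documents, so reporting a brand-new metric requires no schema migration, and readers just pick out the fields they care about.

```python
import json
import time

log = []  # stand-in for a schemaless document store

def record_heartbeat(service, **metrics):
    """Append a heartbeat document; any keyword argument becomes a metric."""
    log.append(json.dumps({"service": service, "ts": time.time(), **metrics}))

record_heartbeat("auth", cpu=0.42, rss_mb=118)
# Later, someone starts reporting a new metric -- no migration needed:
record_heartbeat("auth", cpu=0.40, rss_mb=120, open_conns=57)

# Readers tolerate absences instead of demanding a fixed schema:
cpu_samples = [json.loads(doc).get("cpu") for doc in log]
```

Nobody treats this log as the source of truth about anything; it's a stream of observations, which is why the loose schema is tolerable here and dangerous for, say, account records.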
Certainly not. But is Google planning to hand over lists of users who search for banned terms? It seems like there's a big difference between handing over lists of users and restricting information.
They'll almost certainly hand over user data to the central government. They're required to partner with a Chinese firm, and this is a fairly common practice among Chinese technology companies.
> But is Google planning to hand over lists of users who search for banned terms?
Yes? That's so obvious I'm a little stunned you're even asking. If the government orders them to hand over that information, they won't have any choice.
Most business ideas aren't rejected for any kind of compact reason. The decision-maker typically has higher-priority things on their plate, or they just keep avoiding a green light until the silence is interpreted as a rejection.
There's a sense in which summary views are the real data. If I asked Spotify to share my data, and they just sent me a 250 MB file of every interaction they've ever recorded, I would conclude they're trying to obfuscate which data they actually use and how they use it.
It seems to me that these kinds of authors always see the alternative as a Silicon Valley with their politics. This article describes a bunch of concrete things that tech companies ought to do: ensure the poor aren't marginalized, bring local communities together, and don't provide mechanisms for creating echo chambers. But what neither Turner nor Khan discusses is how tech companies will realize they ought to do this.
What if Amazon politicizes in the direction of libertarianism, and decides its only social responsibility is to increase the world's GDP?
What if Facebook politicizes in the direction of social justice, and determines that segregating people into identity-based safe spaces is the way to go?
What if Google politicizes in the direction of some political party, and decides it's duty-bound to tweak its search algorithm to hurt opposing candidates?
Yeah, there's a strand of political thought, and it's everywhere, which assumes that no one can honestly come to a different political conclusion than theirs. It even gets applied to TV programs and other works of fiction. If you don't politicize everything according to their politics, it's because you're just too clueless to understand that everything has to be political.
I think that is basically what has happened, only it's one step removed from the corporations listed. For example, Cambridge Analytica leveraged Facebook to further its political agenda. The US government leveraged all of the above to set up a surveillance state unrivalled in its scope (bar maybe China's). The web was supposed to be a liberator allowing the marginalised to connect and flourish. This happened. Now nerds are cool, but no one ever considered that racist, misogynist scum are also a marginalised group.
Personally I see the alternative as a Silicon Valley which accepts that its output can never be "neutral" or "apolitical", no matter how many claims are made to the contrary, and which gives consideration to that before building things, allowing that consideration to influence the design.
Any time you're building technology with (or with the intention of) mass reach, "if the person I consider to be the most evil in the world were to commandeer this, what's the absolute worst they could manage to do with it" is a useful question to ask. If some companies had asked this question far earlier in their lifecycles, we might see a very different world around us today.
In a worst case scenario, at least the people at those companies are somewhat intelligent. All other things being equal I would prefer an oligarchy of people who are intelligent over one composed mostly of idiots.