> I think EU should make it easier for US citizens who get any job here to move over.
I'm pretty sure that any employable American can manage that without much effort, and AFAIK most welfare is granted to salaried people, with the exception of unemployment benefits. Same with education: some countries have flat (very low) tuition fees, no questions asked.
As for the barely employable 'hard worker' American, I'm pretty sure there are plenty of hard-working immigration candidates to fill those spots. And there's no reason that low-wage jobs would lead to better living conditions for Americans than for anyone else; they end up producing the same problems that often come with poverty: crime, social resentment and injustice, just like at home.
If you think Americans will work harder and be paid better than a struggling immigrant, I think you are highly delusional. Low wages are barely enough to live on, and nobody will pay more for the same job.
After getting the job offer and being sponsored by UCL, it took months for them to grant me a visa, and it cost thousands of dollars just to apply for one. (That's just what I had to pay out of pocket, and doesn't include whatever UCL had to pay to sponsor me.)
I have very mixed feelings about all the craze around deep neural networks and deep learning. On one side, some results bring so much magic it's astounding and really looks promising. On the other side, anything that is not a case of converging from general to specific, like handwriting recognition, but from general to general, like describing pictures with vocabulary, fails miserably (in a very entertaining way).
And the slide from neural networks to artificial neural networks to artificial intelligence that we see in the mainstream news really makes it look like expert systems all over again. At first, it's claimed that it can solve any kind of problem, and then we end up with a very narrow set of problems it solves reliably.
I don't think many serious deep learning researchers claim that it will solve every kind of problem. Instead, it allows us to make significant strides in areas that had been stagnant for many years (e.g., image recognition, speech recognition, some natural language tasks, and other domain-specific tasks). The potential impact based on these "narrow" domains is already huge (voice commands, image search, medical image analysis, legal/medical document analysis, aiding self-driving cars, etc.). Basically, it allows us to turn data that is structured in a complex way (e.g., images, voice) into standard classification/regression problems. So far, I don't think it does much more than that.
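A minimal sketch of that reduction, using scikit-learn's small digits dataset: once the images are flattened into feature vectors, even an ordinary linear classifier treats them as a standard classification problem (a real deep network would additionally learn the features itself rather than consuming raw pixels).

```python
# Images -> flat feature vectors -> standard classification problem.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                              # 8x8 grayscale digit images
X = digits.images.reshape(len(digits.images), -1)   # flatten each image to 64 features
y = digits.target                                   # labels 0..9

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 2))          # plain classification accuracy
```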
It's clearly not magic. My experiments point this out playfully. Black boxes don't help innovation and education. In terms of research, there is lots of exciting work happening across the board, worth tracking and learning from.
At the moment it is much more interesting when applied to more narrowly targeted tasks. These can include video captioning, but only for clearly specified tasks. The (too hasty) jump from well-automated targeted tasks to generic task automation is the culprit here.
In my few years of practice with deep neural networks in industry, I can say they are very nice to work with, as they fit a large number of tasks and yield excellent results for many of them.
To those who don't want to be tied to a specialized library: you can architect an app that way (unidirectional data flow) quite easily if you use KVO properly.
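KVO itself is Apple-specific, but the unidirectional pattern it enables is language-agnostic. Here is a hypothetical Python sketch of the same idea: views never mutate each other; they all observe a single store, and every change flows store → notification → observer (the `Store` class and its method names are illustrative, not any real library's API).

```python
# Unidirectional data flow via a KVO-like observer pattern.
class Store:
    def __init__(self):
        self._state = {}
        self._observers = []

    def observe(self, key, callback):
        # Analogue of registering a KVO observer for a key path.
        self._observers.append((key, callback))

    def set(self, key, value):
        # The ONLY way state changes; observers are notified afterwards.
        self._state[key] = value
        for k, cb in self._observers:
            if k == key:
                cb(value)

store = Store()
seen = []
store.observe("username", seen.append)   # a "view" subscribing to state
store.set("username", "alice")           # change flows one way, to the view
print(seen)  # -> ['alice']
```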
> However, it's a fact that they lead to more conversions, more signups and more sales.
I wouldn't jump to conclusions. Maybe you have expertise on this, I don't, but I've seen nothing that would let me draw a definitive conclusion like that.
The article's point is that some people need analytics to back up their evaluation of a website, to know whether it's doing well or not, regardless of revenue. So you end up with techniques that drive irrelevant numbers up, and that can even damage potential revenue, but it's OK, you're doing the right things, look at those good numbers. You can't test such things with A/B testing or anything else; it's a long-term relationship with the audience, and those analytics are not measuring that.
Yeah, I'm only talking about small niche websites aimed at ranking in Google and/or Facebook and driving leads/sales. Various pop-ups and free offers work very well.
I'm pretty sure they don't work on tech-savvy people and for large blogs where the audience matters more than monetization.
Can't find the source, but I remember reading here the story of a payment business that competed with VISA and was taking off, until VISA started threatening its clients, making them fear that their payments wouldn't be processed at some point and that they would be subject to fines, if not worse.
I remember reading other stories along the same lines, about VISA using shady approaches to frighten people in order to preserve its monopoly, but a quick search doesn't bring any up. The search terms are too generic.
Well, actually, you can look at it exactly like a kernel, where the backend is the kernel, HTTP clients are the processes, and access control is done by the kernel at the resource level. The thing is, you couldn't even model Facebook access with Unix permissions, and if you've played with ACLs, I think you'll realize that the problem is not solely due to basic software architecture.
That said, Facebook should have addressed this problem seriously by now.
But Facebook permissions can be modelled. They may not be direct mappings to UNIX permissions or ACLs, but that's taking my OS analogy too literally. The point is, Facebook should have a shared component that does the permission checks, rather than giving each page global access and relying on the author to do the checks themselves.
I deeply agree in the Facebook case; I just wanted to point out that there is no known general solution for centralized resource access control in web backends that fits all use cases properly.
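The shared permission-checking component described above could be sketched roughly like this. This is a hypothetical illustration, not Facebook's actual model: the `Resource`, `Graph`, and `fetch` names and the three-level audience rule are all made up for the example. The point is the single enforcement gate that every handler must pass through, instead of each page doing its own checks.

```python
# One centralized permission check instead of per-handler checks.
from dataclasses import dataclass, field

@dataclass
class Resource:
    owner: str
    audience: str            # "public", "friends", or "private"

@dataclass
class Graph:
    friends: dict = field(default_factory=dict)   # user -> set of friends

    def allowed(self, viewer, res):
        if res.audience == "public" or viewer == res.owner:
            return True
        if res.audience == "friends":
            return viewer in self.friends.get(res.owner, set())
        return False            # "private" and anything unknown: deny

def fetch(graph, viewer, res):
    # Single enforcement point: no handler ever sees a denied resource.
    if not graph.allowed(viewer, res):
        raise PermissionError("access denied")
    return res

graph = Graph(friends={"alice": {"bob"}})
photo = Resource(owner="alice", audience="friends")
fetch(graph, "bob", photo)        # ok: bob is alice's friend
# fetch(graph, "eve", photo)      # would raise PermissionError
```

The design choice here is simply that access logic lives in one audited place; handlers can't forget to check because they can't reach a resource any other way.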
There's also rustlex [1], which has been around for a while and provides Rust stable v1.0 support through syntex [2]. I use it for a handlebars implementation [3].
To me, the problem in choosing the "better" way of doing something is that I can't stop comparing diverging design choices.
I like to think that the best way to choose between seemingly equally advantageous designs is to start with the one whose first steps are the most straightforward. The thing is, while I prepare myself for the implementation of that first step, I have a background loop in my mind that constantly checks it against the other implementation choices: what am I losing here, what would I gain otherwise.
In the end, I never really make a definitive decision before starting. I start with that background brain noise on the "simplest first step" design, and when the background noise stops and the raw pleasure of coding kicks in, I know I'm on a good track.
I guess I do go through that process once I start work, but I'll definitely keep your advice in the back of my mind. Don't let the choice become overwhelming, get started knowing that it's ok to change your mind during the process.
For any feature / bit of work that's in isolation, I don't really worry. I figure out an approach and I implement it. When it comes to changing data structures / larger changes within the product there's a fear of getting it wrong and having a mess to unpick later.