It’s the vacuum. That empty space between the promise of a new technology and the ‘killer app’ that applies that tech to society in a productive way.
Innovation will happen in that gap in the absence of progress in the actual tech space.
The innovation that has to happen with AI is in UX. We need interfaces that will, for example, let a layperson declaratively build software in natural language. Think VB6 powered by GPT-4.
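To make that concrete, here’s a rough sketch of the kind of loop I’m picturing (Python; generate_code() is a hypothetical stand-in for whatever GPT-4 call the product would actually make, not a real API):

    # Hypothetical "VB6 powered by GPT-4" loop: the user describes a change in
    # plain English, a model call rewrites the program, and the user only ever
    # sees the result. generate_code() is a placeholder, not a real library call.
    def generate_code(prompt: str) -> str:
        """Placeholder for an LLM call that returns complete Python source."""
        raise NotImplementedError("wire this up to whatever model you use")

    def declarative_build_loop() -> None:
        program_source = ""
        while True:
            request = input("Describe the feature you want (or 'run'): ")
            if request.strip().lower() == "run":
                # Run whatever the model has built so far.
                exec(compile(program_source, "<generated>", "exec"))
                continue
            # The model sees the current program plus the new request and
            # returns an updated version of the whole program.
            program_source = generate_code(
                f"Current program:\n{program_source}\n\nChange requested:\n{request}"
            )
            print("--- updated program ---")
            print(program_source)

The point isn’t this exact loop; it’s that the user never has to touch the code at all.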
Until every new AI product stops linking to Google Colab or a Discord bot, the tech won’t break out. It will remain a nerd toy and the grifters will own the day.
As someone who finished a baccalaureate degree online, I’d say these students simply aren’t prepared for the shift in mindset to having to learn independently. Most of my professors seemed to loathe having to teach an online class and merely threw together a mountain of almost random assignments and research papers, as if to say, “if you can survive this, you can have a degree.” Most of my courses didn’t even offer video lectures and were taught by low-paid adjuncts who never responded to email. You were left alone to figure it out. It’s a whole other ball game when you don’t have the deep support system of being on campus. I hope this brings improvements to online education in general.
As part of my engineering degree, it wasn't all books and lectures. We had to learn the machine shop, operate the lathe, construct a robot, mix chemicals in organic chemistry lab, experiment with lasers in different mediums, wire up hard drives, and so on.
A lot of that is hands on work. No Zoom session is going to make up for that.
This was me, big time. You sound like you have more experience than I did at the time, but what finally made it click for me was taking CS50 on edX. Not that you should take it, but it exposed what was holding me back. Any challenge I set for myself, I would end up saying 'screw it' when it got hard, because internally I'd think that maybe my idea was messed up somehow. CS50, of which I only made it through five assignments, forced me to stick with a problem, possibly for days, until I got it. I felt pressured to complete them because I saw that my classmates were completing them, and that told me it was doable. After that, something in my head changed, and for the first time I was able to complete my own projects and enjoy that feeling you get when you build something you came up with yourself. In other words, try to get experience sticking with challenges. Try LeetCode or HackerRank; those sites have advanced problems that might crack that ceiling for you if your problem is the same as mine was. Just my experience.
It's weird that there's some subtle defensiveness here. To heck with robots.txt and whatnot; when a site touts its privacy, even this, however irrelevant, is slightly eyebrow-raising.
How does this in any way reflect on either the privacy of DuckDuckGo (I don't think they ever claimed, or even vaguely implied, that if I search for, say, Banana, no one else will be able to see the results page for Banana, as that would be absurd) or Google, who are simply indexing publicly accessible pages, which is what they've always done since coming into existence?
What violation of privacy or hint thereof is happening here?
None. Thus my question of why the curt responses and passive aggression. DDG has positioned itself as the anti-Google, and lo and behold, Google still finds out what's being searched there, though not necessarily who searched it. If you're a bit cynical, it's vaguely interesting.
I actually really like flowcharts. Though they're somewhat inefficient for drawing out more complex structures, they're great for working out small sections of thorny logic. The only issue I sort of take with the ones in the article is that the physical flow is a little distracting when decisions don't branch laterally; having them continue top to bottom sort of throws me off. Perhaps it's just because I draw them with branches to the side.
Certifications were key to my career in tech, since I transitioned late from a totally unrelated, blue-collar field. They were a way for me to convince myself it was okay to apply for a job doing something I was new at. What's weird is I see many of those who tout hands-on experience and 'projects' as the only true mark of competence routinely apply for, and get, jobs they aren't exactly qualified for. I never had that kind of ego. Certs were my confidence.
My next web search will concern corporate psychology and studies that explore the mindsets employees within an organization adopt under the influence of leadership. These mindsets inevitably leak out into society, and they are evident in the defenses offered for the indefensible behavior described in the article. Of course, there is nothing illegal here, in that customers are apparently allowed the option of reporting the crimes themselves. But when you consider the not-so-subtle manipulation from investigators that may have discouraged such reports (the careful wording described, though following certain lines, suggests there is no boundary that can't be crossed to protect the organization), the situation clearly describes a bit of an ethical crisis. To paint it as commonplace does an egregious disservice to the majority of companies that carry out business honorably every day. This is just another example of a tech organization that sprang up without mature businesspeople to run it and with more budget than anything else, grew much faster than is healthy, and ended up doing anything to survive. That trend will unfortunately continue as the concepts of value and profitability continue to lose definition in our economy.
I was never patient enough for piracy, as it takes diligence to find reliable sources. One thing that does irritate me is the general anti-piracy attitude on the web today. It's like the entire population works for Big Tech. It used to be that if you did enough searching, you could find pretty much anything you wanted, but now everyone acts like the piracy police and gets offended if you even insinuate piracy. It probably sounds rather antisocial, but I think that attitude is dangerous, as it gives corporations way too much influence, beyond what we've already ceded to them. Maybe it's just the places I frequent.
What strikes me about this issue as a whole is what it says about the true state of "AI". This is a perfect job for such technologies. I mean, how is it that we're already making 'deepfake' videos and audio but can't feed a video stream, which is just a stream of images, to an algorithm that can determine whether it's inappropriate? I recognize that some such tech is being utilized on the front end in this case, and that the problem is non trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'. Why must humans be subjected to this torture? What happened to "move fast and break things"? Why not put the algorithms out front, let them have the final say, and let them learn and improve quickly? I suppose just because meat is cheaper than chips.
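For what it's worth, the pipeline I'm imagining isn't exotic; something like the sketch below, where score_frame() stands in for whatever image-moderation model they'd actually use, and the threshold and sampling rate are made-up numbers:

    # Sketch of frame-level moderation: sample frames from a video and let an
    # image classifier make the final call. score_frame() is a placeholder for
    # a real moderation model; the threshold and sampling rate are arbitrary.
    import cv2  # pip install opencv-python

    def score_frame(frame) -> float:
        """Placeholder: probability that a single frame violates policy."""
        raise NotImplementedError("plug in a real image-moderation model here")

    def should_block(video_path: str, every_n: int = 30, threshold: float = 0.9) -> bool:
        cap = cv2.VideoCapture(video_path)
        index = 0
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if index % every_n == 0 and score_frame(frame) >= threshold:
                    return True  # let the algorithm have the final say
                index += 1
        finally:
            cap.release()
        return False

The hard part is obviously the model and the error rate, not the plumbing, which is my point about where the effort should go.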
> I recognize that some such tech is being utilized on the front end in this case, and that the problem is non trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'.
This seems like a weird take to me. Why would this be your conclusion rather than that the technology isn't good enough yet?
Because the move to hire so many moderators so fast was a big, expensive move that seemed like more of an implementation for PR purposes, considering some of the troubles the company was experiencing. When your motto is 'move fast and break things' and you implement features like live video streaming without much thought to the full ramifications of doing so, it would seem, even if only to me, that the company might not be afraid to take a leap on tech that's 'not quite ready'. Again, I know such a hasty implementation could hurt quality, but given what's at stake (human health), they might get credit for doing the right thing. Obviously, though, in our society, where they wouldn't get such credit and would only get flamed for a bad user experience when the wrong videos are taken down, the mental health of a few humans who willingly signed up is a small sacrifice for profitability.
Perhaps it’s the political nature of the study, but everyone seems to be missing the basic flaw in the argument for publishing this reproduction: the flaws of the original research (small sample size, weak correlation, etc.) were published for everyone to see. We’re talking about a science journal here, and the general mindset of the comments seems to equate it with a magazine or something. Educated readers, the real audience of this journal, would have read the research ‘against the grain’, if you will, and immediately seen that there was little or nothing there. This raises the question of why it was published in the first place, to which an answer might be that it was unique research at the time. The methods pass muster and the data is there to be scrutinized by the reader. These guys are basically taking advantage of the obvious and attempting to, as they say themselves, ‘make their careers’.
But “we” didn’t see the flaws. It has been a high impact study, and people both inside and outside of academia have been working off the assumption that it is true.
There’s no reason not to have seen the flaws (chief among them the minuscule sample size, particularly for such a broad conclusion) if one read the study critically. This gets at a symptom of our culture’s lack of understanding of how research works. That’s why one of the most powerful marketing techniques today is to merely state ‘research shows xyz’ with only a footnote referring to the underlying studies that no one will read, and those who do will read them with specific prejudice rather than general skepticism. Those who work under the assumption you mention are making the same mistake of reading the headline and taking it as knowledge, usually because it supports some bias of theirs or some point they’d like to make one way or the other, just like those who are unnerved by the journal not publishing the replication study.
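If you want a feel for how little a small-n, weak-correlation study can tell you, here’s a quick simulation (the sample size and the correlation are illustrative, not taken from the paper):

    # Assume the true correlation really is 0.2 and the study used n = 30
    # (illustrative numbers, not the paper's). How often does such a study
    # even reach p < 0.05?
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    true_r, n, trials = 0.2, 30, 10_000
    cov = [[1.0, true_r], [true_r, 1.0]]

    hits = 0
    for _ in range(trials):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        _, p = pearsonr(x, y)
        hits += p < 0.05

    print(f"power ~ {hits / trials:.2f}")  # roughly 0.18: most such studies find nothing

With power that low, a single significant result is closer to noise than to knowledge, which is exactly the kind of thing a critical reader should notice before the headline gets repeated.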