largepeepee's comments | Hacker News

Exactly: the 737 was the highest-selling commercial aircraft before the incident, and the MAX was promised as a simple but efficient upgrade to that popular series that many airlines already operated.


What's new?

We have used that same excuse to block Japanese and Chinese goods in various eras.

It just feels weird that the Canadians are copying our playbook.


They're using this playbook because it worked. Look at China, they were once a big threat to American Exceptionalism or whatever and now... <checks notes>... oh. Nevermind.


You can summarize it as

"they want more control over the people"

And AI seems to be another tool being twisted to do that


Well, they are also discovering that many people would rather pay 20-30% extra than deal with the hassle of a subpar platform.

The cost to build/maintain a competent platform is also not cheap even for the big boys like EA.


Which speaks to his point, once upon a time Sega would hog their IPs for their console and arcade games.

They changed their strategy about a decade ago.


They changed after they exited the console market.

If Nintendo ever exits the console market, then we can have that conversation about Nintendo IP on Steam. And even if that happens, and it is a massive IF, it won't be for a long time.


They were dipping their toes in the water even before their console market exit: specifically, Comix Zone in 1995 [0], as well as Sonic CD and Virtua Fighter PC in 1996 [1].

[0] - There were other Sega games published in 1995, but the examples I mention are titles specifically developed in-house by Sega, rather than titles by a 3rd party that they published.

[1] - Politely ignoring that whole NV1 debacle here, since it was in the end a very niche card.


If it happens, Nintendo has a long memory. Better to get on their good side now.


You do realize that once this is normalized, it wouldn't even come as a surprise to have them eventually mandate that company devices be on at home, and so on.


You know what's funny? Even if the numbers are hot garbage, they proved the point about how easy it is to publish fake science papers, since it got published.

Kinda similar to those researchers years back who proved how easy it was to get into certain social science journals as long as you copied their ideology.


Well, there is a difference between "fake science" and "tried to do correct science but ended up being wrong". If the second is "fake science", then basically everything Newton ever produced is "fake science".

For the social science journals bit, are you thinking of the "grievance studies affair": https://en.wikipedia.org/wiki/Grievance_studies_affair ?

Ironically, this study has generated a lot of "fake news" about the field of social science. Its conclusions were spread widely, mainly by people with ideological motivations, and when you look at the study itself, it's clear the conclusions are quite different from what the rumors say.

For example, the same researchers attempted such hoaxes before the ones they mention in their study, except that those hoaxes failed to be published, and they "forgot" to mention it. They did not have any control group, either as a "correct article" or as an "article defending the opposite ideology" (so how can we conclude that these bad articles were published because of ideology if we don't know how many articles get published without being critically reviewed?). They also counted as valid a lot of journals that are pay-to-publish and not seriously used in the field.

One of the authors, ironically, ended up supporting platforms that publish conspiracy theories (and was even banned from Twitter). Not that the study should be judged on that, but it's a funny anecdote: the author who, according to some, had the courage to defend real science against bad woke ideology ends up demonstrating that he never cared about real science and is driven by ideology, not science.


There's also a difference between outright fake science, i.e. lies/fabricated data in the manuscript, and bad science, i.e. conclusions that were always "fake" because of bad practices, even though the authors are honest about what they did if you look at the details of the work. Ideally you would minimize both types of bad paper, but the latter isn't too damaging to the system in isolation, while the former can cause a handful of papers to mislead a subfield of science for years. Also, how to screen for and how to systemically discourage these two things could be quite different.


And the first one should be divided further into two categories:

1) A committed deceiver who has worked in the field for years and somehow managed to not get caught (pretty rare).

2) Fake science articles that get published but have absolutely no impact on scientists, because scientists don't progress based on randomly found articles but by meeting the authors at workshops, exchanging with them, and so on, which makes a one-off fake article with a fake author totally irrelevant.

If you are a junior scientist, the articles you read are mainly the ones recommended by the senior scientists around you; if you are a senior scientist, you are part of a community and you know the people who publish. If you see a random article coming from nowhere, you may read it just in case, but you don't let it mislead you or significantly change your own research just based on reading it.

I think it's a flaw in how some laymen discuss "fake articles being published": they don't realize how little "having an article published" impacts the field. Presenting it at workshops and debating it with colleagues does; the scenario the layman has in mind could never maintain the illusion for long.


A reviewer should have seen that massive red flag


>Even if the numbers are hot garbage, they proved the point about how easy it is to publish fake science papers, since it got published.

Not by the definition of "fake" used in the article, as the data wouldn't be plagiarized or fabricated. It'd just be shitty data.


It's a medRxiv preprint. It didn't get published anywhere. Science (the magazine) has lowered its standards.


It's also useful to point out that historical verbal tradition trained a very specific type of memory recall, but that doesn't automatically make anyone wise.

Just because you memorized 10000 random articles on Wikipedia, doesn't mean you now have the wisdom to apply that in a particular circumstance.

Very much like early AI models.


I think the contrast should be between studying and internalizing a subject versus having the ability to look up a subject. That seems the most true to Plato's intention.

It's common and easy to fall into considering the things you could look up as things you already know.

What's the difference, one might ask? What's the problem with offloading some of this knowledge and freeing up space in your head? Well, the thing is that when you learn something, it doesn't just permit access to the information; it also permits synthesis of new ideas. The whole of knowledge is greater than the sum of its parts.

A very concrete example: someone who speaks only English may look up the Latin terms 'manus' (hand) and 'facere' (to act/do/make); but unless you actually do, you'll probably not immediately grok the etymology of the English term 'manufacture'.


Exactly. My compsci prof forced us to learn a lot by heart, but then it's internalized and you start to think in those terms. Right now I'm writing my PhD thesis in management, and in the beginning I didn't really have all of those papers in my head. But now that knowledge is slowly accumulating, and I can think through things I couldn't think through before. On the other hand, I now wonder how I could ever not have understood them; they seem trivial. And it's the same for literature and poems: if you know a poem by heart, it's not just fancy to recite it; you start to really incorporate part of that language.


I think of it in terms of computer memory levels.

There are some computations (synthesis) that require so many (non-front-loadable) memory accesses that it's impractical to do them from high-latency memory (books), because number_of_accesses * access_time dominates the project time.

Instead, you must have a working set of core information (or at least pointers to information) in low-latency memory (your brain).

Example: How much longer would it take me to do a multi-digit multiplication if I had to look up the process in a book for every single-digit multiplication I did? And what if that multiplication were just one of many in the higher-level math problem I was trying to solve? (Then generalize to any problem that requires a core base of knowledge.)
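
A minimal sketch of that arithmetic in Python (the access times and access count are made-up numbers, purely to illustrate):

    BOOK_ACCESS = 0.5      # seconds per lookup in a book (assumed)
    BRAIN_ACCESS = 0.0001  # seconds per recall from low-latency memory (assumed)

    def project_time(n_accesses, access_time, compute_time=1.0):
        # total = fixed compute + number_of_accesses * access_time
        return compute_time + n_accesses * access_time

    # A synthesis task needing 10,000 small lookups:
    print(project_time(10_000, BOOK_ACCESS))   # 5001.0 s: lookups dominate
    print(project_time(10_000, BRAIN_ACCESS))  # 2.0 s: compute dominates

Once the access time is low enough, the lookup term stops dominating and the actual thinking sets the pace.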


It is very similar to caching and its performance impacts. And like you say, sometimes it's just faster, and sometimes it actually enables functionality…


Strong memory is almost always an indicator of exceptional skill. Whether it's chess players, musicians, writers, poets, mathematicians, or programmers, people who excel generally have an astonishing ability to recall.

That's not an accident. Wisdom emerges out of practice and the effortlessness that comes with it. The genius piano player isn't that good because of some pie-in-the-sky wisdom about music; just like the AI, they played tons of scales and training pieces. Literally meaningless stuff. This rejection of rote memorization as some sort of lower skill, the idea that students should be 'smart and lazy', is one of the stupidest modern tendencies.


>Just because you memorized 10000 random articles on Wikipedia, doesn't mean you now have the wisdom to apply that in a particular circumstance.

It's much more likely that you can than someone who would only look them up "on demand", as you are at least aware of the possibilities in those domains.


This is actually what Plato also mentions, a few paragraphs later:

> He who thinks, then, that he has left behind him any art in writing, and he who receives it in the belief that anything in writing will be clear and certain, would be an utterly simple person, and in truth ignorant of the prophecy of Ammon, if he thinks written words are of any use except to remind him who knows the matter about which they are written.

https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext...


> also useful to point out that historical verbal tradition trained a very specific type of memory recall

I'm genuinely curious, why do you think this is relevant? How would you differentiate "specific types of memory recall" scientifically?


Cued recall in the form of verbal tradition is often criticised as being subject to inaccuracies; for one example, stories accrue embellishments over time.

https://en.wikipedia.org/wiki/Recall_(memory)


In Appalachia that's just called storytelling. :)


I feel like it's not only products that get left behind once Google gets bored.

Much of Google's software eventually gets worse too. YouTube, for example, has killed most of its social features by removing dislikes, limited searchability of videos by channel, and allowed bots to fill the comments. And Google Search has so much malware in the ads that appear first that even governments are recommending ad blockers by default.

Such a pity; Google used to be the most respected of the SV big tech companies.


Google is just shit.

When my company briefly looked into using them as a cloud provider, it quickly became apparent that they are just awful stewards of any software that isn’t related to advertising.

It’s beyond shocking how amateurishly GCP is operated.

My assumption is that it comes down to really awful leadership. They only care about making and selling new things. You simply can’t trust that any piece of software or hardware will receive ongoing support, including their cloud offerings.


Looking into Android Code Search, I've found the same issues. Google keeps saying C++ is a terrible language, but the way they use it is pretty bad too. "A bad workman blames his tools" springs to mind a lot when reading that code.


That is interesting; I thought GCP might be a good alternative, at least for managed Kubernetes.


GKE at least used to be a bit better, but that’s only worth so much compared to the rest of the services, where you’ll frequently find feature gaps where you end up adding a +1 to an issue which has been open for years, and building your own service (yay, more toil). There’s some room for debate on security depending on exactly what features you use and what your needs are, but if you use GCP, I’ll just note that Google gave all of your compute instances privileged access to your entire project, and you should fix that immediately if you haven’t already done so.

I’d also note that it used to be fairly comparable on pricing, but they’ve been raising prices on some things, which meant I saw a cost savings when migrating away.


> you’ll frequently find feature gaps where you end up adding a +1 to an issue which has been open for years, and building your own service (yay, more toil)

Yup, exactly my experience and what I meant by amateurish. We aren't even talking about "crazy" features.

For example, their identity management service supports MFA only through SMS text messages. Adding support for Google auth was a years-old issue, about which we were told, "yeah, lots of other people want this, and if you plus-1 it maybe a PM will prioritize it".


It's probably the best managed Kubernetes out there. People just like to engage in hyperbole. The consumer stuff, especially the free products, gets abandoned every now and then, but the "enterprise" services are pretty good.


GKE is an abomination. Its reputation is completely undeserved. Sure, there are some nice things out of the box, like authenticating to clusters with your Google/GCP account, but day 2 operations are a constant frustration.

What sucks?

1. The Kubernetes Pod garbage collector is configured to be abominably slow, keeping terminated pods in the API server for far too long. This interferes with cluster monitoring by making it seem like there's a consistently high number of OOMKilled etc. pods rather than a blip as it happens. GCP support claims this is working as intended and recommends manually running a script to clean up the API server if it bothers you (this is a managed service?!); a sketch of such a script is at the end of this comment. See e.g. https://stackoverflow.com/questions/75374590/why-kubernetes-... .

2. The rest of the Kubernetes world moved on from kube-dns to CoreDNS. Not GKE! On GKE your two options are kube-dns and the GCP VPC-native Cloud DNS (i.e. Kubernetes service and pod records are listed in the private DNS zone for the VPC). Surprise, surprise: if you pick Cloud DNS to help scale your cluster, because GCP isn't operating kube-dns well enough on its managed control plane, then you're on the hook for paying for the Cloud DNS zone as well; it's not included in the GKE cluster costs. See e.g. https://cloud.google.com/kubernetes-engine/docs/how-to/cloud... .

3. GKE clusters automatically log to GCP Cloud Logging, the first 50 GB of which is free. Fair enough. But the ingestion price afterwards is a truly mind-boggling $0.50/GB! (https://cloud.google.com/logging/#section-7). How do you turn off GCP Cloud Logging so that you can ship your logs to a cheaper vendor instead? Nope, there's no first-class managed setting; all you get is a community tutorial (https://cloud.google.com/community/tutorials/kubernetes-engi...) that links to this GitHub configuration (https://github.com/GoogleCloudPlatform/community/blob/master...) aaaaand good luck :)

4. No native IPv6. See e.g. https://stackoverflow.com/questions/64110542/has-anyone-iden... . AWS of course does support IPv6: https://docs.aws.amazon.com/eks/latest/userguide/cni-ipv6.ht...

And this is just off the top of my head.
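
For the curious, a minimal sketch of that kind of cleanup script from point 1, written against the official kubernetes Python client (the phase filter and output format are my choices, not GCP's actual script):

    # Deletes terminated pods from the API server; that's all the "cleanup" amounts to.
    # Requires: pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    # Failed pods (OOMKilled, evicted, ...); repeat with "status.phase=Succeeded"
    # if completed pods clutter your monitoring too.
    pods = v1.list_pod_for_all_namespaces(field_selector="status.phase=Failed")
    for pod in pods.items:
        print(f"deleting {pod.metadata.namespace}/{pod.metadata.name}")
        v1.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace)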


A lot of this criticism could also be levelled at Azure’s AKS service.

IPv6 is broken there as well, and they similarly overcharge for logging. However, instead of 50c per GB, they charge $2.75 per GB, which is highway robbery. That’s more than 5x what GCP or AWS charge.

I swear Microsoft must have been aiming for “half price to be competitive” and then accidentally put the decimal point in the wrong place.

Let this sink in: they charge the price of a serviceable used car or a decent gaming PC to store 1 TB of text for a month!
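
For reference, the arithmetic behind that (decimal units, ingestion price only):

    1 TB = 1,000 GB; 1,000 GB x $2.75/GB = $2,750 per month

which is indeed used-car money.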


This is the sad reality of the cloud currently: it is not remotely a commodity. Eventually it certainly will be.


GCP is excellent.


GCP is clearly good if your use cases match their services. However, there are a lot of things that are sub-optimal.

IPv6 doesn't have non-premium networking. Premium networking is nice, but non-premium networking is less expensive.

Instances sometimes take more than 5 minutes to shut down. A lot of things seem very slow like this. It's really frustrating for testing when it takes so long to bring an environment up and to clean it up afterwards.

Load balancing is hard to use outside of HTTP-style short-connection use cases. There's no load feedback mechanism, and at small request rates, requests are severely unbalanced anyway. Managed instance groups can auto-scale downwards, but connection drain is implemented as: take the instance out of rotation, wait a configurable time, then destroy the instance. If the instance drains faster, it won't be destroyed faster. And if you want to drain for more than 60 minutes, that's too bad (this isn't that unreasonable, but while I'm ranting...).

Google's Container-Optimized OS has documentation that tells you how to configure Docker log retention... but then their container runner (konlet) forces its own log settings for the main container, so your settings are ignored.


I have said it before and will say it again:

Google wants to look amateurish for as long as possible to avoid stronger regulation.


You should probably stop saying it, because it makes no sense.


I had to delete the YouTube app simply because there was no way to disable Shorts. Those things are hopelessly addictive and unavoidable. If I wanted shorts, I'd install TikTok.


It's crazy: when I remove the YouTube Shorts pane, it says "Shelf will be hidden for 30 days".

It feels like a slap in the face from some promotion-seeking Google manager who tells me he knows better than me. It's crazy. I didn't stop using YouTube like you did, but I was darn close. I guess they do know better than me, in a way.


It’s the same in Instagram: removing recommended posts only offers an option to hide them for 30 days. The consistent enshittification of products with these sorts of behaviours just makes me want to use them less. Unfortunately, I’m sure they’ve got data showing that my reaction is in the minority.


Black Mirror anyone? https://www.youtube.com/watch?t=99&v=5P_HqwgDFJM

Fascinatingly, this content is narrated by a robot voice...


On Android you can install https://newpipe.net and turn off a lot of things: no more trending/start-screen recommendations if you don't want them, and you can turn off comments or "watch next" recommendations. Shorts show up in the search results like normal videos, with a slider so you can still navigate them.


I know this is an unpopular question, but are there ads in NewPipe?


Nope, there are zero ads in NewPipe. No ad blocker needed!


They are also very slowly "injecting" shorts everywhere. I'm randomly browsing the feed, not shorts, I click on something, and before I know it I'm watching shorts. And yeah, they also recently added a shorts "pane" on the side of normal videos.


I'm annoyed by the inability to hide shorts, but I don't really find them addictive. I've watched a few, determined they were as useless as they appeared, and now just ignore them. But it's annoying how much space they take up; it's a good third of the screen on my phone.


Try YouTube Revanced from https://revanced.net/


I'm pretty sure revanced.net is not affiliated with the actual project and shouldn't be trusted. AFAIK there's no prebuilt APK that does everything for you and can be trusted; it's a multi-step process one has to follow from the official source.

The official GitHub is https://github.com/revanced

The GitHub is very obtuse about the installation steps, so this guide on Reddit gives the procedure more adequately: https://www.reddit.com/r/revancedapp/comments/xlcny9/revance...


This is NOT the actual domain. Do not share this around. The only domain is https://revanced.app, and it will redirect to their GitHub.


Shorts are displayed on a single line of the home and subscriptions tabs. They're trivially easy to avoid.


> I feel like it's not only products that get left behind once Google gets bored.

As someone who had been working at Google in this space (Nest Hubs, Matter, and a planned thing I couldn't tell you about even if I knew its current status) until the decimation, I assure you it's not only products that get left behind.


>I assure you it's not only products that get left behind.

Could you please enlighten us as to what else gets left behind?


People.


Ethics


Not allowing good search within a channel is the most ridiculous part. They're a search company; how is that self-consistent? Especially when you know they have the data and the index but are simply using it for advertising.


They are a search company, yet they decided to give the same name to two completely different web frameworks. OK, one is called AngularJS and the other is Angular, but still.


Agreed. I see this across all of Google's products.

It's amazing to me that Google Assistant was more useful when it came out than it is today. Just in the past few days I've been dealing with a bug where I can't even set an alarm through it.

Google Photos is another one: content-based search used to work really well, but it's totally unreliable today.


Obviously it didn't work out for Meta. That isn't even up for debate.

Meta has admitted to pivoting to AI and cutting down its metaverse teams significantly.

Besides, even if Apple succeeds, Meta will still be stuck in the exact same situation: under Apple's boot again, which is exactly what it so desperately tried to escape by going hardware-heavy in VR in the first place.

