If the user was told by a smarty-pants computer person to do it, they'll assume it works and is safe. On the other hand, if something goes really wrong, and the user can convince a judge that the mystery binary blob was involved? And the author never said anything about it possibly being unsafe? Then the court can decide that the user had a valid assumption that it would be safe, and the author won't be able to prove otherwise.
They put these disclaimers into licenses because people have already won these kinds of cases.
I don't get the hate, it looks like they reverse-engineered the nest thermostat and wrote a firmware for it? That's super cool and the fact that an open source project doesn't have a privacy policy yet doesn't really matter at this point
> ...looks like they reverse-engineered the nest thermostat and wrote a firmware...
Not to diminish what this project has done, but they modified existing firmware to make it communicate with a different server. They've also implemented a server for the thermostat API.
It's pretty neat but, at this point, it's just a hacked firmware that talks to a different proprietary server.
Edit: It's not even a modification to the firmware binaries. They're just injecting /etc/hosts entries into the firmware[0]. If the Nest device just uses DNS to resolve these names then you wouldn't even need to modify the firmware-- just point it at a DNS server that's authoritative for the necessary names.
They're also injecting a CA bundle so, presumably, they're including their own root of trust so they can sign their own certificates. I'm on mobile and can't easily look at what they're including.
Edit: Guess I've got openssl in my termux environment. They're injecting a fake Nest root CA key. Makes sense.
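If the device really does resolve those names over plain DNS, the whole redirect could be done network-side without touching the firmware at all. A minimal sketch of that approach with dnsmasq (the hostnames and the local server IP here are hypothetical, not taken from the project):

```
# dnsmasq.conf sketch: answer authoritatively for the (hypothetical)
# Nest hostnames and point them at a local replacement server, so a
# DNS-resolving device needs no firmware modification at all.
address=/frontdoor.nest.example/192.168.1.50
address=/time.nest.example/192.168.1.50
```

You'd still need the CA-bundle trick regardless, since the device would otherwise reject the replacement server's TLS certificate.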
I'm shocked it was this easy to subvert the root of trust on these devices. I would expect a newer device to have the trust root pinned in hardware (TPM, etc.) and firmware updates to have been authenticated.
> I would expect a newer device to have the trust root pinned in hardware (TPM, etc.) and firmware updates to have been authenticated.
All those things cost money in hardware or development time, so companies basically never bother. You're probably also letting all the stories about DRM on phones or whatever color your expectations for IoT as a whole. TPM basically makes no sense to implement on anything that's not a PC. Not even phones use it.
You could argue TPM can work as a generic term for security coprocessors, but on a technical forum that makes as much sense as calling the Pixel Tablet an "iPad".
To be fair, I was using TPM a little generically (hence the "etc"). I (perhaps wrongly) assume most SoCs today have a non-volatile area for storing roots of trust and possibly a bootloader. My only embedded experience was an Android-based tablet project where DRM on the firmware was of major import because features were locked behind time/geo-limited license keys.
I'm glad they didn't go that far... I wouldn't want that to get into a home device as long as it requires physical access to bypass/update the security in place. I'm really not a fan of excessively locked down hardware.
It’s the “no longer evil” marketing without actually proving that “no longer evil.com” is in fact … free from evil.
I was assuming that I could point the Nest data stream & control UI to my own hosted thing on, e.g., my local NAS or docker farm. That’s what I think would warrant the moniker “free from evil” in this kind of strong privacy-preserving marketing.
If they really want to show that they're building something that protects user privacy, they'd open source their backend server, and make it possible and easy to self-host it and point the modified firmware[0] at your own instance.
[0] They didn't write their own firmware; they hacked the stock firmware to redirect traffic from Google's servers to their own.
Edit: looks like they plan to open source the backend and enable self-hosting "soon". Hopefully that comes to pass!
Running open-source firmware someone's hacking on (which gets little to no testing) on a gas appliance that can burn your house down is probably not the best idea.
If you are paranoid about Nest being evil maybe stick to one of those Honeywell round hockey-puck things with the mercury inside.
Or use a Z-Wave/Zigbee thermostat from a reputable vendor (there aren't many) and control it from a gateway of your choice.
This is for people who have already bought a nest and got burnt by the deprecation of their online services. Of course they could get another thermostat but then that'd just be more stuff for the landfills.
Early generation Nest hardware was garbage, and was known for blowing FETs that failed closed, turning people's ACs into giant ice cubes. Putting it in the landfill would be doing yourself a favor.
The ex-Apple culture in the early history of Nest was evident; they ostensibly spec'd FETs over mechanical relays for superficial reasons, because clicking sounds are ugly. The results were in the spirit of other Apple engineering marvels (Titanium PowerBook, Antennagate, Bendgate).
Well that's certainly a take. Solid state relays using optoisolated MOSFETs have been around for fifty years. Mechanical relays are overkill for signal switching as in HVAC thermostats, IMHO, but you do you.
Anecdotally, I have a first generation Nest and haven't had a problem. Maybe some of the earlier hardware had fewer protections against misuse (e.g., with non-24VAC systems or otherwise incorrect installation), but that's generally the case with most new things.
Sounds like something Nest engineers would have said.
It's not "signal switching", you see.
HVAC equipment is as old and varied as you can imagine, and there is higher current than you think running through those terminals, powering all sorts of nasties, oil burner relays, damper motors, crude AC contactors causing voltage spikes etc. HVAC low voltage power is as dirty as can be.
No one took this into account, they were more concerned with making the thermostat pretty.
Nest is hardly the only thermostat out there using solid-state relays. Have you considered the possibility that they did take it into account and they deliberately chose to use SSRs instead of electromechanical relays? Have you considered the possibility that they were concerned about the impact that mechanical relays may have on the RF, especially if "there is higher current than you think running through those terminals"? Have you considered the possibility that they were worried about making the first one fatter than it already was?
In my heat pump, none of the thermostat wires directly control the contactors. They all run into a logic board that applies logic like time delays, temperature-controlled defrost cycling, and active protection lockouts for the compressor. I mean, there's a seven-segment LCD on the logic board for system troubleshooting. The air handler has a variable speed blower as well.
I understand that HVAC equipment varies wildly, but if you try to solve every possible problem or scenario and target every possible customer, you'll never make it to market.
I also understand that I am the target demographic.
It doesn’t just lack a privacy policy; it’s not actually open source either. Honestly they probably fully intend on doing it, but it is important to point out that it is not yet open source.
> Open Source Commitment
> We are committed to transparency and the right-to-repair movement. The firmware images and backend API server code will be open sourced soon, allowing the community to audit, improve, and self-host their own infrastructure
Obviously you probably thought about it, but what about rendering the subtitles on top of the video stream? Was there a reason it was not possible (e.g. DRM)?
This kind of softsubbing is what Crunchyroll primarily does, but it has hardsubbed encodes for devices that cannot do softsubbed rendering of the ASS subtitles that Crunchyroll uses. I go over some ways they could do away with these hardsubbed variants in the article without any notable loss in primary experience quality.
I’m pretty sure it’s not too hard to implement an ASS → PNG renderer (especially considering vibe coding is now a thing). Then you’d just need to split out the subs that can stay as actual text from the ones that have to be overlays.
Apart from that... surely they could at least keep ASS subs for the players that support it, and serve “fallback” subs for low-end devices?
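The "split text subs from overlays" step could be approximated with something like this. A naive sketch, not a real ASS parser: it treats any event carrying positioning/clip override tags as typesetting and everything else as plain dialogue, which is only a rough heuristic (real typesetting detection would also need styles, layers, etc.):

```python
import re

def split_events(ass_text):
    """Split ASS Dialogue events into plain dialogue lines (safe to
    serve as simple text subs) vs. typesetting (needs image rendering).
    Heuristic: positioning override tags imply a typeset sign."""
    dialogue, typesetting = [], []
    for line in ass_text.splitlines():
        if not line.startswith("Dialogue:"):
            continue
        # The event text is the 10th comma-separated field; maxsplit
        # keeps commas inside the text itself intact.
        text = line.split(",", 9)[-1]
        if re.search(r"\\(pos|move|clip|org)\b", text):
            typesetting.append(text)
        else:
            # Strip inline override tags like {\i1} for plain-text output.
            dialogue.append(re.sub(r"\{[^}]*\}", "", text))
    return dialogue, typesetting
```

Events with `\move`, `\clip`, or `\org` get the same treatment as `\pos`, since those almost always indicate signs rather than spoken lines.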
So you make the business decision to stop supporting weird devices that can't do the job right? Why on earth does a cartoon streaming site need provably-correct subtitle support for devices that clearly suck?
If you hardsub the video, then you need to have a full copy of the video for every language. That's the opposite of what people want. They want a single textless video source that can then accommodate any internationalization.
The article claims that you can slice up the video and only use language-specific hardsubs for parts that need it. I'd be interested if there are technical reasons that can't be done.
To be more specific, basically all online streaming today is based around the concept of segmented video (where the video is already split into regular X-second chunks). If you only hardsubbed the typesetting while keeping the dialogue softsubbed (which could then be offered in a simpler subtitle format where necessary), you would only need to have multiple copies of the segments that actually feature typesetting. Then you would just construct multiple playlists that use the correct segment variants, and you could make this work basically everywhere.
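In HLS terms, the playlist-per-language idea could look roughly like this (segment names and durations are hypothetical, just to illustrate the sharing): each language's playlist reuses the clean segments and swaps in a hardsubbed variant only where typesetting appears.

```
# en.m3u8 (hypothetical): only seg003 carries burned-in English
# typesetting; every other segment is shared across all languages.
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
seg001_clean.ts
#EXTINF:6.0,
seg002_clean.ts
#EXTINF:6.0,
seg003_en.ts
#EXTINF:6.0,
seg004_clean.ts
#EXT-X-ENDLIST
```

A de.m3u8 would be identical except for referencing seg003_de.ts, so storage overhead scales with the amount of typesetting rather than with the number of languages.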
You can also use the same kind of segment-based playlist approach on Blu-ray if you wanted to, though theoretically you should be able to use the Blu-ray Picture-in-Picture feature to store the typesetting in a separate partially transparent video stream entirely that is then overlaid on top of the clean video during playback.
It's incredibly fragile at the CDN level if deployed at scale for a start.
You'd see playback issues go up by 1000%.
In the nicest possible way, it is pretty clear that this article was written by somebody who has only ever looked at video distribution as a hobbyist and not deploying it at scale to paying customers who quite reasonably get very upset at things not working reliably.
What would be the problems? When I’ve looked into streaming video before (for normal, non ripping reasons), I’ve noticed that most are already playlists of segments. You’d just need to store the segments that are different between versions, which should be better than keeping full separate versions which is what they apparently do currently.
This is just an excuse. There could be one hardsubbed English version, and the other languages could share a single video with separate text files. Throwing away the 80% that's good just to keep the other 20% happy shouldn't be the answer.
English is by far the most popular, so just keep that one. Most of the good hardsubs are made for English, and that's what people want.
That is exactly what I thought and I am not even a native English speaker. My English is infinitely better than my Japanese though, so if I cared about anime I’d much rather watch a good English version rather than a bad German one
You can argue all day whether it’s ok to do this, and I’d absolutely say it’s fine, even laudable that they’re trying to make a real business where you have to pay for a product. Great for them!
But “rug pull” is absolutely still a correct description of what’s happening, because it was free, and now it’s not. Here’s a nice rug, but you have to get off of it by $DATE because we’re going to pull it. It’s a rug pull.
If it wasn’t a rug pull, I’d be able to keep standing on the rug (the free version.)
Very strange logic. If we follow your example, going to the dealership and taking a car for a test drive is a rug pull because eventually the car dealer will ask you to pay for the car?
No, because that would be absurd. You're not "following my example", you're using reductio ad absurdum. Any phrase can sound stupid if you take it out of context like that.
To make a non-fallacious analogy: If a ride sharing service gave car rides for free for a month, and a friend said "I'm going to use this instead of buying a car", you would very rightly say "they're going to pull the rug on the free rides, you may want to rethink that". And that would be a perfectly valid thing to say, even if the company told everyone the free rides were only for a month. Because the purpose of the discussion is whether it's a good idea to depend on the free service or not.
You seem hung up on this, like it's a judgement call or something. Maybe just free yourself of negative connotations with the term. It's fine to do this. I don't think it's a problem whatsoever.
The phrase is useful for what the metaphor implies: Likening using the product to sitting on a rug. If you start getting used to your place on the rug (putting your stuff on it, eating dinner on the rug, etc), you have to be aware that they're going to pull it, so you have to have a plan for when that happens (either pay or switch to a competitor.) Being aware of this is important: If you start developing a workflow that depends on this kind of software, you have to understand that it won't be free in the future and that you should either not depend on it, or be willing to pay. This is all fine.
The fact that you don't like the negative connotation doesn't mean the phrase isn't applicable.
There's a way to do that. Don't call it free beta with no pricing attached. Call it "free trial for X period" and ideally advertise the price ahead of time as it was always done in the past. Not calling it a "trial" is not an accident. It is deliberate and that's what makes it a rug pull.
If you read up on this story, you'd find out it's not run-of-the-mill corruption. Sarkozy actually conspired with a foreign state – in particular with someone who directed a terror attack that killed 50 French citizens (!) – to fund his campaign.
People cite this figure a lot, but it's a little misleading, because when you own your own servers a lot of the expenses that would typically count as hosting actually fall under a different category.
If you use AWS, the people hired to manage the servers are part of the price tag. When you own your own, you have to actually hire those people yourself.
I mean, it's not like you can get away with running with zero SREs if you're running in the cloud. The personnel costs for on-prem hosting are vastly exaggerated, especially if you contract out the actual annoying work to a colo.
Smart hands is more expensive than having dedicated datacenter staff, and the dedicated staff do a considerably better job. It's worth noting that WMF runs _very_ lean in terms of its datacenter staff.
You're also ignoring the need for infrastructure/network engineers, software engineers, fundraising engineers, product managers, community managers, managers, HR, legal, finance/accounting, fundraisers, etc.
1. I think their spending is a good thing. Charitable scholarships for kids and initiatives to have a more educated populace in general are things that I am happy to donate to.
2. As stated in the article, hosting is still a relatively simple expenditure compared to the rest of their operation. If Wikipedia really eats a huge loss, falling back to just hosting wouldn't be unrealistic, especially since the actual operations of Wikipedia are mostly volunteer run anyways. In the absolute worst case, their free data exports would lead to someone making a successor that can be moved to more or less seamlessly.
The only real argument in my eyes is that their donation campaigns can seem manipulative. I still think it's fine at the end of the day given that Wikipedia is a free service and donating at all is entirely optional.
AFAIK, they don't do any scholarships or really any educational activities. By far their biggest spending item is $105 million for salaries, mainly for its leadership, which makes up a majority of its expenses.
The second biggest line item is grants at $25 million, primarily for users to travel to meet up.
Then $10 million for legal fees, $7 million for Wikipedia-hosted travel.
I think it's pretty unethical to say you have to donate to keep Wikipedia running when you're practically paying for C-suite raises and politically-aligned contributors' vacations.
Paying the travel for a bunch of highly active volunteer contributors to meet up occasionally and hash out complex community issues pays massive dividends. It keeps the site moving forward. It's also pretty cheap when you consider how much free labour those volunteers provide.
Whenever people criticize Wikimedia finances, I think they miss the forest for the trees. I actually think there is a lot to potentially criticize, but in my opinion everyone goes for the wrong things.
What are the right things to criticize in your opinion?
Also, asking out of ignorance, what things need to move forward? I thought Wikipedia was a solved problem; the only work I would expect it to need is maintenance, security patches, etc.
> What are the right things to criticize in your opinion?
I think criticism should be based on looking at what they were trying to accomplish by spending the money, was it a worthwhile thing to try and do and was the solution executed effectively.
Just saying they spent $X, X is a big number, it must be wasteful, without considering the value being purchased with that money, is a bit meaningless.
> Also, asking out of ignorance, what things need to move forward? I thought Wikipedia was a solved problem; the only work I would expect it to need is maintenance, security patches, etc.
I think the person I was responding to was referring to volunteer travel, not staff travel (which of course also happens but I believe would be a different budget line item). This would be mostly for people who write the articles but also for people who do moderation activity. In-person meetings can help resolve intractable disputes, share best practices, figure out complex disagreements, and build relationships. All the same reasons that real companies fly their staff to expensive offsites.
Software is never done, there are always going to be things that come up and things to be improved. Some of them may be worth it some not.
As an example, there are changes coming to how IP addresses are handled, especially for logged-out users. Nobody is exactly saying why, but I'm 99% sure it's GDPR compliance related. That is a big project due to some deeply held assumptions, and probably critical.
A more mid-tier example might be that last year WMF rolled out a (caching) server presence in Brazil. The goal was to reduce latency for South American users. Is that worth it? It was probably a fair bit of money. If WMF were broke it wouldn't be, but given they do have some money, it seems like a reasonable improvement to me. Reasonable minds could probably disagree, of course.
And an example of stupid projects might be WMF's ill-fated attempt at making an AI summarizer. That was a pure waste of money.
I guess my point is, WMF is a pretty big entity, some of the things they do are good, some are stupid, and I think people should criticize the projects they embark on rather than the big sum of money taken out of context.
A lot of it is because platforms need a wedge against AWS/Azure.
Take edge computing for example – most apps are CRUD apps that talk to a single database, but vendors keep pushing for running things on edge servers talking to databases with eventual consistency because hey, at least AWS doesn't offer that! (or it would cost you 10k/month to run that on AWS)
For the problem of eventual consistency, Google's Spanner and Firestore both provide strong or tunable global consistency. Instead of "eventual consistency because AWS doesn’t offer that", GCP has "we give you strong consistency at scale."