My work health insurance recently offered a free scale and blood pressure monitor. I thought that's a nice perk, I'll use that, so I ordered them, intending never to use their app and just to do my own tracking. The first time I used one, I got an email from my insurance company congratulating me and giving me suggestions. Both devices have a cellular modem in them, and arrived paired to my identity.
I destroyed them and threw them in a dumpster like that Ron Swanson gif.
All to say, little cellular modems and a small data plan are likely getting cheap enough that it's worth being extra diligent about the devices we let into our homes. Probably not yet to the point where that's the case for a TV, but I could certainly see it getting there soon enough.
Similarly, I had a workplace dental provider ship me a ‘smart toothbrush’.
Turns out they track the aggregate of everyone’s brushing and if every employee brushes their teeth, the plan gets a discount.
“Lower rate based on group's participation in Beam Perks™ wellness program and a group aggregate Beam score of 'A'. Based on Beam® internal brushing and utilization data.”
Technology is starting to become genuinely terrifying. Computers used to sit on desks in full visibility, and we used to be in control. Now they're anywhere and everywhere, invisible, always connected, always sensing, doing god knows what, serving unknown masters, exploiting us in unfathomable ways. Absolutely horrifying.
I'd have tried to disassemble it, locate the SIM card or cellular modem, and see if it could be used for other traffic. A wireguard tunnel fixes the privacy problem, and I can always use more IP addresses and bandwidth.
Until people start abusing these "features", they will not go away.
The data plans on some embedded modems are quite different from consumer plans. They are specifically designed for customers who have a large number of devices but only need a small amount of bandwidth on each device.
These plans might have a very low fixed monthly cost but only include a small data allowance, say 100 KB/month. That's plenty for something like a blood pressure monitor that uploads your results to your doctor or insurance company.
If you are lucky that's a hard cap and the data plan cuts off for the rest of the month when you hit it.
If you are unlucky that plan includes additional data that is very expensive. I've heard numbers like $10 for each additional 100 KB.
I definitely recall reading news articles about people who repurposed a SIM from some device and used it for their internet access, figuring the company would not notice, and used it to watch movies and download large files.
Then the company gets their bill from their wireless service provider, and it turns out that on the long list of line items showing the cost for each modem, a single item of, say, $35,000 really stands out when all the others are $1.
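For a rough sense of scale, a quick sanity check using the numbers above ($10 per additional 100 KB):
    overage_rate = 10 / 100        # dollars per KB of overage
    bill = 35_000                  # the surprise line item, in dollars
    print(bill / overage_rate)     # 350000.0 KB, i.e. roughly 350 MB
So a few hundred megabytes of streaming or downloading is all it takes to hit a number like that.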
If you are lucky the company merely asks you to pay that, and if you refuse they take you to civil court where you will lose. (That's what happened in the articles I remember reading, which is how they came to the public's attention).
If you are unlucky, what you did also falls under your jurisdiction's "theft of services" criminal law. Worse, the amount is likely above the maximum for misdemeanor theft of services, so it would be felony theft of services.
Through what technical or legal mechanism is the company identifying or locating you - assuming you never logged in or associated the product with your identity?
What law is preventing Best Buy from telling TVManufacturer that a credit card with these last 4 digits bought the TV with this exact serial number?
And once the SIM connects near your house, what is preventing the phone company from telling TVManufacturer the rough location of the SIM, especially after that SIM is found to have used too much data?
Then use some commercially available ad database to figure out that the person typically near this location with these last four digits is 15155.
That's just a guess, but there is enough fingerprinting that they will know with pretty high certainty it is you. Whether all this is admissible in civil court, idk.
> What law is preventing Best Buy from telling TVManufacturer
No law: reality and PCI standards prevent this. And of course, the manufacturer could get a subpoena after enough process. This also assumes the TV was purchased with a credit card and not cash.
> And once the SIM connects near your house
> what is preventing the phone company from telling
Again: reality and the fact that corporations aren't cooperative. A rough location doesn't help identify someone in any urban environment. Corporations are not the FBI or FCC on a fox hunt.
Can you cite a single case where this has happened on behalf of a corporation? These are public record, of course.
Anecdotally, you may want to avoid Best Buy either way. There's a chance the TV box contains just rocks, no TV, and that they refuse to refund your purchase.
Yup. Works great. All things equal I'd prefer just not buying a damn Smart TV to begin with, but absent that as a realistic option (every 4K TV I've ever seen is smart) I'll happily settle with them never seeing one byte of Internet.
I’m in the same camp. The next escalation is defending against a TV scanning for, and joining, unprotected neighbor networks to “phone home.” It’s a thing.
Bet this is easy to fool with a fake/honeypot open network with a high RSSI that blocks all traffic except the initial captive portal / connectivity check.
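As a rough sketch of what that could look like (purely hypothetical; assumes the honeypot network's firewall already drops everything else, and that the device uses an HTTP generate_204-style connectivity probe):
    # Minimal responder that answers connectivity checks with "204 No Content",
    # so a device believes it is online while the rest of its traffic is blocked.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ConnectivityCheckHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Android/Google-style probes expect an empty 204; others accept any 2xx.
            self.send_response(204)
            self.end_headers()

    # Needs port 80, so typically run with elevated privileges.
    HTTPServer(("0.0.0.0", 80), ConnectivityCheckHandler).serve_forever()
Whether a given TV keeps associating with a network it can't actually reach anything through is another question, of course.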
I mean, yeah, or they include a 5G modem because the ads are so lucrative. But then we can start discussing how to cut the red wire to disarm your spy rectangle.
Imagine if we could put this kind of innovation to work to solve actual problems and not find ways to bypass people attempting to not have capitalism screaming at them 24/7 to buy things.
> Dumb TVs sold today have serious image and sound quality tradeoffs, simply because companies don’t make dumb versions of their high-end models. On the image side, you can expect lower resolutions, sizes, and brightness levels and poorer viewing angles. You also won’t find premium panel technologies like OLED. If you want premium image quality or sound, you’re better off using a smart TV offline. Dumb TVs also usually have shorter (one-year) warranties.
You should preface this with some important information about what that does.
There are some trade-offs!
Changing that setting to 1 gives you weaker anonymity guarantees. Using multiple guards spreads your traffic across different IP addresses, making it harder for an adversary who controls a subset of the network to correlate your activity.
Reducing to a single guard concentrates all traffic through one point, increasing the chance that a hostile relay could observe a larger fraction of your streams...
Depends; if they're representative / a good cross-section, it'd be statistically significant enough. That said, I wonder how they get these surveys; there are a number of "get paid pennies to fill in a survey for studies" schemes out there, and I can well imagine the quality of the responses from those is not great.
I'd take this a step further and say that the design flaws that motivated Perl6 were what really killed Perl. Perl6 just accelerated the timeline.
I do imagine a saner migration could've been done - for example, declaring that regexes must not start with a non-escaped space and division must be surrounded by space, to fix one of the parsing problems - with the usual `use` incremental migration.
Perl was effectively "dead" before Perl 6 existed. I was there. I bought the books, wrote the code, hung out in #perl and followed the progress. I remember when Perl 6 was announced. I remember barely caring by that time, and I perceived that I was hardly alone. Everyone had moved on by then. At best, Perl 6 was seen as maybe Perl making a "come back."
Java, and (by extension) Windows, killed Perl.
Java promised portability. Java had a workable cross-platform GUI story (Swing). Java had a web story with JSP, Tomcat, Java applets, etc. Java had a plausible embedded and mobile story. Java wasn't wedded to the UNIX model, and at the time, Java's Windows implementation was at least as good as its non-Windows implementations, if not better. Java also had a development budget, a marketing budget, and the explicit blessing of several big tech giants of the time.
In the late 90's and early 2000's, Java just sucked the life out of almost everything else that wasn't a "systems" or legacy big-iron language. Perl was just another casualty of Java. Many of the things that mattered back then either seem silly today or have been solved with things other than Java, but at the time they were very compelling.
Could Perl have been saved? Maybe. The claims that Perl is difficult to learn or "write only" aren't true: Perl isn't the least bit difficult. Nearly every Perl programmer on Earth is self-taught, the documentation is excellent, and Google has been able to answer any basic Perl question one might have for decades now. If Perl had somehow bent itself enough to make Windows a first-class platform, it would have helped a lot. If Perl had provided a low-friction, batteries-included de facto standard web template and server integration solution, it would have helped a lot as well. If Perl had a serious cross-platform GUI story, that would have helped a lot.
To the extent that the Perl "community" was somehow incapable of these things, we can call the death of Perl a phenomenon of "culture." I, however, attribute the fall of Perl to the more mundane reason that Perl had no business model and no business advocates.
Excellent point in the last paragraph. Python, JavaScript, Rust, Swift, and C# all have/had business models and business advocates in a way that Perl never did.
Do you not think O'Reilly Associates fits some of that role? If anything, it seemed like Perl had more commercial backing than the other scripting languages at that point. Python and JavaScript were picked up by Google, but later. Amazon was originally built out of Perl. Perl never converted its industry footprint into that kind of advocacy; I think some of that is also culture-driven.
Maybe until the 2001 O'Reilly layoffs. Tim hired Larry for about 5 years, but that was mostly to work on the third edition of the Camel. A handful of other Perl luminaries worked there at the same time (Jon Orwant, Nat Torkington).
When I joined in 2002, there were only a couple of developers in general, and no one was sponsored to work on or evangelize any specific technology full time. Sometimes I wonder if Sun had more paid people working on Tcl.
I don't mean to malign or sideline the work anyone at ORA or ActiveState did in those days. Certainly the latter did more work to make Perl a first-class language on Windows than anyone. Yet that's very different from a funded Python Software Foundation or Sun supporting Java or the entire web browser industry funding JavaScript or....
Thanks for the detailed reply. Yes, the marketing budget for Java was unmatched, but to my eye they were in retreat towards the Enterprise datacentre by 2001. I don't think the Python foundation had launched until 2001. Amazon was migrating off Perl and Oracle. JavaScript only got interesting after Google Maps/Wave, I think; arguably the second browser wars started when Apple launched Safari in late 2002.
So, I guess the counterfactual line of enquiry ought to be why Perl didn't, couldn't, or didn't want to pivot towards stronger commercial backing sooner.
People were being crybabies; the critics were extremely vocal and few. Python 3 improved the language in every way and the tooling to upgrade remains unmatched.
It was annoying but if it hadn't happened Python would still be struggling with basic things like Unicode.
Organizations struggled with it, but they struggle with basically every breaking change. I was on the tooling team that helped an organization handle the transition of about 5 million lines of data science code from Python 2.7 to 3.2. We also had to handle other breaking changes like Airflow upgrades, Spark 2->3, and Intel->AMD->Graviton.
At that scale all those changes are a big deal. Heck, even the pickle protocol change in Python 3.8 was a big deal for us. I wouldn't characterize the Python 2->3 transition as a significantly bigger deal than some of the others. In many ways it was easier, because so much hay was made about it that there was a lot of knowledge and tooling.
> It was annoying but if it hadn't happened Python would still be struggling with basic things like Unicode.
They should've just used Python 2's strings as UTF-8. No need to break every existing program, just deprecate and discourage the old Python Unicode type. The new Unicode type (Python 3's string) is a complicated mess, and anyone who thinks it is simple and clean isn't aware of what's going on under the hood.
Having your strings be a simple array of bytes, which might be UTF-8 or WTF-8, seems to be working out pretty well for Go.
What you propose would have, among other things, broken the well established expectation of random access for strings, including for slicing, while leaving behind unclear semantics about what encoding was used. (If you read in data in a different encoding and aren't forced to do something about it before passing it to a system that expects UTF-8, that's a recipe for disaster.) It would also leave unclear semantics for cases where the underlying bytes aren't valid UTF-8 data (do you just fail on every operation? Fail on the ones that happen to encounter the invalid bytes?), which in turn is also problematic for command-line arguments.
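A small illustration of the random-access point (Python 3; the variable names here are just for illustration):
    s = "héllo"               # str: indexed and sliced by code point
    b = s.encode("utf-8")     # bytes: the 'é' occupies two bytes
    print(s[:2])              # 'hé'     -- slicing on code points stays valid text
    print(b[:2])              # b'h\xc3' -- slicing on bytes cuts the 'é' in half
    try:
        b[:2].decode("utf-8")
    except UnicodeDecodeError as err:
        print("naive byte slicing produced invalid UTF-8:", err)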
With the benefit of hindsight, though, Python 3 could have been done as a non-breaking upgrade.
Imagine if the same interpreter supported both Python 3 and Python 2. Python 3 code could import a Python 2 module, or vice versa. Codebases could migrate somewhat more incrementally. Python 2 code's idea of a "string" would be bytes, and python 3's idea of a "string" would be unicode, but both can speak the other's language, they just have different names for things, so you can migrate.
That split between bytes and unicode made better code. Bytes are what you get from the network. Is it a PNG? A paragraph of text? Who knows! But in Python 2, you treated them both as the same thing: a series of bytes.
Being more or less forced to decode that series into a string of text where appropriate made a huge number of bugs vanish. Oops, forgot to run `value=incoming_data.decode()` before passing incoming data to a function that expects a string, not a series of bytes? Boom! Thing is, it was always broken, but now it's visibly broken. And there was no more having to remember if you'd already .decode()d a value or whether you still needed to, because the end result isn't the same datatype anymore. It was so annoying to have an internal function in a webserver where the old sloppiness meant that sometimes you were calling it with decoded strings and sometimes with the raw bytes coming in over the wire, so sometimes it processed non-ASCII characters incorrectly, and if you tried to fix it by making it decode passed-in values, it started breaking previously-working callers. Ugh, what a mess!
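A sketch of that "visibly broken" behaviour in Python 3 (greet() is a made-up example function; incoming_data is as in the scenario above):
    def greet(name):
        return "Hello, " + name               # expects text (str)

    incoming_data = "Zoë".encode("utf-8")     # raw bytes, as read off a socket
    print(greet(incoming_data.decode()))      # decoded first: works fine
    try:
        greet(incoming_data)                  # forgot to decode
    except TypeError as err:
        print("fails loudly instead of mangling text:", err)
In Python 2, passing the raw bytes would have "worked" silently, and the breakage would only show up later wherever byte strings and unicode strings eventually got mixed.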
I hated the schism for about the first month because it broke a lot of my old, crappy code. Well, it didn't actually. It just forced me to be aware of my old, crappy code, and do the hard, non-automatable work of actually fixing it. The end result was far better than what I'd started with.
That distinction is indeed critical, and I'm not suggesting removing that distinction. My point is that you could give all those types names, and manage the transition by having Python 3 change the defaults (e.g. that a string is unicode).
I’m a little confused. That’s basically what Python 3 did, right? In py2, "foo" is a string of bytes, and u"foo" is Unicode. In py3, both are Unicode, and bytes() is a string of bytes.
The difference is that the two don't interoperate. You can't import a Python 3 module from Python 2 or vice versa; you have to use completely separate interpreters to run them.
I'm suggesting a model in which one interpreter runs both Python 2 and Python 3, and the underlying types are the same, so you can pass them between the two. You'd have to know that "foo" created in Python 2 is the equivalent of b"foo" created in Python 3, but that's easy enough to deal with.
Why would I ever risk inflicting a stack trace like
    Traceback (most recent call last):
      File "x.py", line 2, in <module>
        foo.encode()
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
on a user of Python 3.x where it isn't possible? (Note the UnicodeDecodeError coming from an attempt to encode.)
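For context, the reason that confusing error could happen at all in Python 2 is that calling .encode() on a byte string first performed an implicit .decode('ascii'). The hidden step, written out explicitly in Python 3:
    foo = b"\xff"
    try:
        foo.decode("ascii").encode("utf-8")   # the implicit ASCII decode is what blows up
    except UnicodeDecodeError as err:
        print(err)   # 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)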
It would absolutely have been harder. But the pain of going that path might potentially have been less than the pain of the Python 2 to Python 3 transition. Or, possibly, it wouldn't have been; I'm not claiming the tradeoff is obvious even in hindsight here.
I think you have causation reversed: the pain would have been at least two orders of magnitude greater if people had kept acting like moving to Python 3 was harder than staying. But you do you, boo :emoji-kissey-face:
Pain on whose part? There was certainly pain porting all the code that had to be ported to Python 3 so that the Python developers could have an easier time.