
It’s just a modern-day MLM scam.

tl;dr: don’t connect it to a network, and/or use a computer monitor.

My work health insurance recently offered a free scale and blood pressure monitor. I thought, that's a nice perk, I'll use that, so I ordered them with the intent of never using their app, just using them for my own tracking. The first time I used them, I got an email from my insurance company congratulating me and giving me suggestions. Both devices have a cellular modem in them and arrived paired to my identity.

I destroyed them and threw them in a dumpster like that Ron Swanson gif.

All this to say: little cellular modems and a small data plan are likely getting cheap enough that it's worth being extra diligent about the devices we let into our homes. Probably not yet to the point of that being the case on a TV, but I could certainly see it getting there soon enough.


Similarly, I had a workplace dental provider ship me a ‘smart toothbrush’.

Turns out they track the aggregate of everyone’s brushing and if every employee brushes their teeth, the plan gets a discount.

“Lower rate based on group's participation in Beam Perks™ wellness program and a group aggregate Beam score of "A". Based on Beam® internal brushing and utilization data.”


Technology is starting to become genuinely terrifying. Computers used to sit on desks in full visibility, and we used to be in control. Now they're anywhere and everywhere, invisible, always connected, always sensing, doing god knows what, serving unknown masters, exploiting us in unfathomable ways. Absolutely horrifying.

Time to turn your house into a giant Faraday cage

I'd have tried to disassemble it, locate the SIM card or cellular modem, and see if it could be used for other traffic. A WireGuard tunnel fixes the privacy problem, and I can always use more IP addresses and bandwidth.
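For what it's worth, pointing a repurposed SIM's traffic through your own server is conceptually just a WireGuard config like the sketch below (keys, addresses, and endpoint are placeholders, and actually getting the SIM into hardware you control is the hard part):

  [Interface]
  PrivateKey = <client private key>
  Address = 10.0.0.2/32

  [Peer]
  PublicKey = <server public key>
  Endpoint = vpn.example.com:51820
  AllowedIPs = 0.0.0.0/0        # route everything through the tunnel
  PersistentKeepalive = 25      # keep NAT mappings alive on a flaky cell link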

Until people start abusing these "features", they will not go away.


Be very very careful if you do that.

The data plans on some embedded modems are quite different from consumer plans. They are specifically designed for customers who have a large number of devices but only need a small amount of bandwidth on each device.

These plans might have a very low fixed monthly cost but only include a small data allowance, say 100 KB/month. That's plenty for something like a blood pressure monitor that uploads your results to your doctor or insurance company.

If you are lucky, that's a hard cap and the data plan cuts off for the rest of the month when you hit it.

If you are unlucky, the plan bills additional data at very expensive rates. I've heard numbers like $10 for each additional 100 KB.

I definitely recall reading news articles about people who repurposed a SIM from some device for their own internet access, figuring the company would not notice, and used it to watch movies and download large files.

Then the company gets their bill from their wireless service provider, and it turns out that on the long list of line items showing the cost for each modem, a single, say, $35,000 item really stands out when all the others are $1.
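Back-of-envelope, using the purely illustrative $10-per-additional-100 KB rate from above, a single modest download is all it takes:

  # illustrative math only, based on the overage rate quoted above
  rate_per_kb = 10 / 100           # $10 per 100 KB of overage
  download_kb = 350 * 1000         # one ~350 MB file, in KB
  print(f"${download_kb * rate_per_kb:,.0f}")   # -> $35,000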

If you are lucky the company merely asks you to pay that, and if you refuse they take you to civil court where you will lose. (That's what happened in the articles I remember reading, which is how they came to the public's attention).

If you are unlucky, what you did also falls under your jurisdiction's "theft of services" criminal law. Worse, the amount is likely above the maximum for misdemeanor theft of services, so it would be felony theft of services.


Example: https://news.ycombinator.com/item?id=2509967 (the original source is gone and not in the Wayback Machine)

Through what technical or legal mechanism is the company identifying or locating you - assuming you never logged in or associated the product with your identity?

They shipped it to you. They associated a machine UUID with you at that time, as well as the SIM card.

Now maybe you mean the TV? That’s not what this particular thread is about.


> That’s not what this particular thread is about

This thread is about removing the SIM from a TV.

If I bought that TV in cash (or even credit card, sans subpoena) at a Best Buy and removed the SIM, how is any corporation identifying me?


What law is preventing Best Buy from telling TVManufacturer that a credit card with these last 4 digits bought the TV with this exact serial number?

And once the SIM connects near your house, what is preventing the phone company from telling TVManufacturer the rough location of the SIM, especially after that SIM is found to have used too much data?

Then use some commercially available ad database to figure out that the person typically near this location with these last four digits is 15155.

That's just a guess, but there is enough fingerprinting that they will know with pretty high certainty it is you. Whether all this is admissible in civil court, idk.


> What law is preventing Best Buy from telling TVManufacturer

No law: reality and PCI standards prevent this. And of course, the manufacturer could get a subpoena after enough process. This also assumes the TV was purchased with a credit card and not cash.

> And once the SIM connects near your house

> what is preventing the phone company from telling

Again: reality and the fact that corporations aren't cooperative. A rough location doesn't help identify someone in any urban environment. Corporations are not the FBI or FCC on a fox hunt.

Can you cite a single case where this has happened on behalf of a corporation? These are public record, of course.


Anecdotally, you may want to avoid Best Buy either way. There's a chance the TV box contains just rocks, no TV, and that they refuse to refund your purchase.

https://wonderfulengineering.com/rtx-5080-buyer-opens-box-to...

I know I'm sure never shopping there again.


Why not just remove the cell modem?

We shouldn't have to.

Holy shit! I would’ve done the same! This is pure evil! I guess the box never had this info on it

Yup. Works great. All things being equal, I'd prefer just not buying a damn Smart TV to begin with, but absent that as a realistic option (every 4K TV I've ever seen is smart), I'll happily settle for them never seeing one byte of Internet.

I’m in the same camp. The next escalation is defending against a TV scanning for, and joining unprotected neighbor networks to “phone home.” It’s a thing.

Bet this is easy to fool with a fake/honeypot open network with a high RSSI that blocks all traffic except the initial captive portal / connectivity check.

I mean yeah or they include a 5G modem because the ads are so lucrative. But then we can start discussing how to cut the red wire to disarm your spy rectangle.

That one I'm starting to think is getting closer to happening, because we now have 5G RedCap out there for the 'cheaper' moderate-speed IoT data market.

https://about.att.com/blogs/2025/5g-redcap.html
https://www.t-mobile.com/news/network/5g-redcap-powering-sma...

Wouldn’t surprise me to see modems and eSIMs and embedded PCB antennas some day down the line.


Imagine if we could put this kind of innovation to work to solve actual problems and not find ways to bypass people attempting to not have capitalism screaming at them 24/7 to buy things.

The article lists several manufacturers of 4K dumb TVs.

The article also says why they suck:

> Dumb TVs sold today have serious image and sound quality tradeoffs, simply because companies don’t make dumb versions of their high-end models. On the image side, you can expect lower resolutions, sizes, and brightness levels and poorer viewing angles. You also won’t find premium panel technologies like OLED. If you want premium image quality or sound, you’re better off using a smart TV offline. Dumb TVs also usually have shorter (one-year) warranties.


Yeah, Sceptre's site shows a bunch of dumb TVs that max out at HDMI 2.0, 4K/60Hz. Basically, they are ten years out of date.

Some of the advice is a bit weird though. Get a 4K HDR TV and then connect it to an antenna? I mean, why do you even need a 4K HDR TV in that case?

Not to mention, disabling the smart/ad features is an option on some smart TVs (e.g. Sony).


Someone should start a blog where it's all clickbait titles and the articles are all one sentence with the obvious resolution to the bait.

You should preface this with some important information about what that does.

There are some trade-offs!

Changing that setting to 1 gives you weaker anonymity guarantees. Using multiple guards spreads your traffic across different IP addresses, making it harder for an adversary who controls a subset of the network to correlate your activity.

Reducing to a single guard concentrates all traffic through one point, increasing the chance that a hostile relay could observe a larger fraction of your streams...


The setting doesn’t exist.

There are plenty of places where it can improve. Not limited to:

- better HDR support across the board

- higher refresh rate support

- thinner panels

- energy efficiency

Eventually, we are going to see "television" shift into a personal activity, with VR/AR subsuming some of the content produced now.


Honestly, I like Comic Sans.

It’s clear, legible and whimsical.


> In an online survey of 371 people, mostly from Australia, the United States, and the United Kingdom

I'm pretty sure that's not a significant enough sample size to matter.


Depends; if they're representative / a good cross-section, it'd be statistically significant enough. That said, I wonder how they get these surveys; there are a number of "get paid pennies to fill in a survey for studies" schemes out there, and I can well imagine the quality of the responses of those is not great.
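For a rough sense of scale, a simple random sample of 371 has a worst-case 95% margin of error of about ±5%, assuming it really is representative (which, as noted, is the bigger question):

  import math

  n = 371
  margin = 1.96 * math.sqrt(0.25 / n)   # worst case (p = 0.5), 95% confidence
  print(f"±{margin:.1%}")               # ~±5.1%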

Can you show your work?

Perl6/Raku killed Perl.

Python 3 almost killed Python.

It's normal. Once a community loses faith, it's hard to stop them from leaving.


I'd take this a step further and say that the design flaws that motivated Perl6 were what really killed Perl. Perl6 just accelerated the timeline.

I do imagine a saner migration could've been done - for example, declaring that regexes must not start with a non-escaped space and division must be surrounded by space, to fix one of the parsing problems - with the usual `use` incremental migration.


Yep. Perl 6 was a wall that Perl 5 would never move beyond. It's still Perl 5, 25 years later.

Agree 100%. We were told that any improvements or new features we wanted would have to wait for Perl 6, which never came.

"Perl6/Raku killed Perl."

Perl was effectively "dead" before Perl 6 existed. I was there. I bought the books, wrote the code, hung out in #perl and followed the progress. I remember when Perl 6 was announced. I remember barely caring by that time, and I perceived that I was hardly alone. Everyone had moved on by then. At best, Perl 6 was seen as maybe Perl making a "come back."

Java, and (by extension) Windows, killed Perl.

Java promised portability. Java had a workable cross-platform GUI story (Swing). Java had a web story with JSP, Tomcat, Java applets, etc. Java had a plausible embedded and mobile story. Java wasn't wedded to the UNIX model, and at the time, Java's Windows implementation was as least as good as its non-Windows implementations, if not better. Java also had a development budget, a marketing budget, and the explicit blessing of several big tech giants of the time.

In the late 90's and early 2000's, Java just sucked the life out of almost everything else that wasn't a "systems" or legacy big-iron language. Perl was just another casualty of Java. Many of the things that mattered back then either seem silly today or have been solved with things other than Java, but at the time they were very compelling.

Could Perl have been saved? Maybe. The claims that Perl is difficult to learn or "write only" aren't true: Perl isn't the least bit difficult. Nearly every Perl programmer on Earth is self-taught, the documentation is excellent, and Google has been able to answer any basic Perl question one might have for decades now. If Perl had somehow bent itself enough to make Windows a first-class platform, it would have helped a lot. If Perl had provided a low friction, batteries-included de facto standard web template and server integration solution, it would have helped a lot as well. If Perl had a serious cross-platform GUI story, that would have helped a lot.

To the extent that the Perl "community" was somehow incapable of these things, we can call the death of Perl a phenomenon of "culture." I, however, attribute the fall of Perl to the more mundane reason that Perl had no business model and no business advocates.


Excellent point in the last paragraph. Python, JavaScript, Rust, Swift, and C# all have/had business models and business advocates in a way that Perl never did.

Do you not think O'Reilly & Associates fits some of that role? It seemed like Perl had more commercial backing than the other scripting languages at that point, if anything. Python and JavaScript were picked up by Google, but later. Amazon was originally built out of Perl. Perl never converted its industry footprint into that kind of advocacy; I think some of that is also culture-driven.

Maybe until the 2001 O'Reilly layoffs. Tim hired Larry for about 5 years, but that was mostly working on the third edition of the Camel. A handful of other Perl luminaries worked there at the same time (Jon Orwant, Nat Torkington).

When I joined in 2002, there were only a couple of developers in general, and no one sponsored to work on or evangelize any specific technology full time. Sometimes I wonder if Sun had more paid people working on Tcl.

I don't mean to malign or sideline the work anyone at ORA or ActiveState did in those days. Certainly the latter did more work to make Perl a first-class language on Windows than anyone. Yet that's very different from a funded Python Software Foundation or Sun supporting Java or the entire web browser industry funding JavaScript or....


Thanks for the detailed reply. Yes, the marketing budget for Java was unmatched, but to my eye they were in retreat towards the Enterprise datacentre by 2001. I don't think the Python foundation had launched until 2001. Amazon was migrating off Perl and Oracle. JavaScript only got interesting after Google Maps/Wave, I think; arguably the second browser wars started when Apple launched Safari, late 2002.

So, I guess the counterfactual line of enquiry ought to be why Perl didn't, couldn't, or didn't want to pivot towards stronger commercial backing sooner.


Python 3 couldn't even kill Python 2!

> Python 3 almost killed Python.

People were being crybabies; the critics were extremely vocal and few. Python 3 improved the language in every way and the tooling to upgrade remains unmatched.


Python 3 was a disaster and enterprises were still undertaking pointless 2->3 upgrade projects 10 years later

A month ago I had to fix a small bug in Python 2.6 code in one of our internal systems. It won't ever be migrated; no capacity and no value.

It was annoying but if it hadn't happened Python would still be struggling with basic things like Unicode.

Organizations struggled with it, but they struggle with basically every breaking change. I was on the tooling team that helped an organization handle the transition of about 5 million lines of data science code from Python 2.7 to 3.2. We also had to handle other breaking changes like Airflow upgrades, Spark 2->3, and Intel->AMD->Graviton.

At that scale all those changes are a big deal. Heck even the pickle protocol change in Python 3.8 was a big deal for us. I wouldn't characterize the python 2->3 transition as a significantly bigger deal than some of the others. In many ways it was easier because so much hay was made about it there was a lot of knowledge and tooling.


> It was annoying but if it hadn't happened Python would still be struggling with basic things like Unicode.

They should've just used Python 2's strings as UTF-8. No need to break every existing program, just deprecate and discourage the old Python Unicode type. The new Unicode type (Python 3's string) is a complicated mess, and anyone who thinks it is simple and clean isn't aware of what's going on under the hood.

Having your strings be a simple array of bytes, which might be UTF-8 or WTF-8, seems to be working out pretty well for Go.


I can't say I've ever thought "wow, I wish I had to use Go's unicode approach". The bytes/str split is the cleanest approach of any runtime I've seen.

What you propose would have, among other things, broken the well established expectation of random access for strings, including for slicing, while leaving behind unclear semantics about what encoding was used. (If you read in data in a different encoding and aren't forced to do something about it before passing it to a system that expects UTF-8, that's a recipe for disaster.) It would also leave unclear semantics for cases where the underlying bytes aren't valid UTF-8 data (do you just fail on every operation? Fail on the ones that happen to encounter the invalid bytes?), which in turn is also problematic for command-line arguments.

With the benefit of hindsight, though, Python 3 could have been done as a non-breaking upgrade.

Imagine if the same interpreter supported both Python 3 and Python 2. Python 3 code could import a Python 2 module, or vice versa. Codebases could migrate somewhat more incrementally. Python 2 code's idea of a "string" would be bytes, and python 3's idea of a "string" would be unicode, but both can speak the other's language, they just have different names for things, so you can migrate.


That split between bytes and unicode made better code. Bytes are what you get from the network. Is it a PNG? A paragraph of text? Who knows! But in Python 2, you treated them both as the same thing: a series of bytes.

Being more or less forced to decode that series into a string of text where appropriate made a huge number of bugs vanish. Oops, forget to run `value=incoming_data.decode()` before passing incoming data to a function that expects a string, not a series of bytes? Boom! Thing is, it was always broken, but now it's visibly broken. And there was no more having to remember if you'd already .decode()d a value or whether you still needed to, because the end result isn't the same datatype anymore. It was so annoying to have an internal function in a webserver, and the old sloppiness meant that sometimes you were calling it with decoded strings and sometimes the raw bytes coming in over the wire, so sometimes it processed non-ASCII characters incorrectly, and if you tried to fix it by making it decode passed-in values, it started breaking previously-working callers. Ugh, what a mess!
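A quick Python 3 illustration of that split, with made-up data:

  data = b"caf\xc3\xa9"          # raw bytes, as received off the wire
  text = data.decode("utf-8")    # explicit decode into actual text

  print(len(data))               # 5 -- bytes
  print(len(text))               # 4 -- characters

  data + text                    # TypeError: can't concat str to bytes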

I hated the schism for about the first month because it broke a lot of my old, crappy code. Well, it didn't actually. It just forced me to be aware of my old, crappy code, and do the hard, non-automatable work of actually fixing it. The end result was far better than what I'd started with.


That distinction is indeed critical, and I'm not suggesting removing that distinction. My point is that you could give all those types names, and manage the transition by having Python 3 change the defaults (e.g. that a string is unicode).

I'm a little confused. That's basically what Python 3 did, right? In py2, "foo" is a string of bytes, and u"foo" is Unicode. In py3, both are Unicode, and bytes() is a string of bytes.

The difference is that the two don't interoperate. You can't import a Python 3 module from Python 2 or vice versa; you have to use completely separate interpreters to run them.

I'm suggesting a model in which one interpreter runs both Python 2 and Python 3, and the underlying types are the same, so you can pass them between the two. You'd have to know that "foo" created in Python 2 is the equivalent of b"foo" created in Python 3, but that's easy enough to deal with.


Ok who would suggest this when the community could take a modicum of responsibility

Why would I ever risk inflicting a stack trace like

  Traceback (most recent call last):
    File "x.py", line 2, in <module>
      foo.encode()
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
on a user of Python 3.x, where it otherwise isn't possible? (Note the UnicodeDecodeError coming from an attempt to encode.)

> With the benefit of hindsight, though, Python 3 could have been done as a non-breaking upgrade.

Not without enormous and unnecessary pain.


It would absolutely have been harder. But the pain of going that path might potentially have been less than the pain of the Python 2 to Python 3 transition. Or, possibly, it wouldn't have been; I'm not claiming the tradeoff is obvious even in hindsight here.

I think you have causation reversed: it would have been at least two orders of magnitude greater to act like moving to python 3 was harder than staying. But you do you boo :emoji-kissey-face:

Pain on whose part? There was certainly pain porting all the code that had to be ported to Python 3 so that the Python developers could have an easier time.

Yes, exactly. customers need to stop acting like a bitch if they wanna be taken seriously

It was not a disaster in any way. People just complained about having to do something to upgrade their codebases.

Except that Python took the other path when migrating from Python 1 to Python 2 and ... guess what? That was a "disaster" too.

The only difference was that by the time of Python 3, Python programs were orders of magnitude bigger so the pain was that much worse.


Differences of scale do make a qualitative difference and must be considered when doing a migration.

The real problem here was releasing 3.0 as if it was stable, when the real usable version was 3.3/3.4

I recall 3.2 being okay. But it was definitely better by 3.4.

I loved Instagib.

It's the only way I ever played.

> When Windows 12 is announced, Windows 11 may finally be usable.

Knowing Microsoft, feels like they’ll just make it a mandatory security update.


Little did they know, this would lead to the production of armored canines bred and ready for war.
