Yes, those are the "smart" features. Just plug in a Raspberry Pi and don't touch the TV after its initial setup. I'm still using the same Raspberry Pi 2 I've been using for more than 5 years now. Beats "smart" TVs that you can buy today.
It only has a 100Mbps Ethernet jack, yes, but so do both of my TVs.
I don’t have any HEVC media so I’m not sure about that, but the lack of 4K output would be a deal-breaker for me.
I’m also not sure about the streaming services it would support, but chances are if you’re running off of a Pi 2, you’re sailing the seven seas for media. Will that thing even play YouTube in a browser at this point?
Nah, I used to have a YouTube plugin that worked years ago but don't any more. I don't use it for "TV" purposes, though, it's more of a home cinema device. I don't have background screens in my house.
But my point wasn't literally to use a Raspberry Pi 2, just that you can get cheap low power devices that beat "smart TV" crap. You can of course get much more recent ARM-based boards that support all the latest HD standards etc. I don't do the hedonic treadmill, though, so I'm still happy with 1080p Blu-ray.
Can't disagree with that. If it's still fulfilling its purpose, why change?
Smart TVs really aren't very smart and a nicely ripped 1080p Blu-ray often looks better than what the streaming services will stream you anyway.
I don't think I'd even have a TV if it were just me. Wife and kids seem to need one though, so simplicity counts. What would they do if they couldn't watch people who watch people play games?
I let one of my cheap smart TVs update for this reason (and not the other two identical ones I have) and now that one crashes and lags all the time, despite none of them being on the internet.
Embedded device software development quality is usually even worse than webapp software development quality.
Can’t help but think of the 2002 Ted Chiang novelette “Liking What You See” and its tech “Calliagnosia,” a medical procedure that eliminates a person’s ability to perceive beauty. Excellent read (as are almost all his stories, imho).
FWIW, I find the high-level overview more useful, because then I can write a script tailored to my situation. Between `bash`, the `aws` CLI tool, and PowerShell, it would be straightforward to programmatically apply this remedy.
It’s a small detail, but I appreciate when a command-line tool offers both short and long options for all the switches. Short so I can type them quickly for manual use, and long for when I use the tool in a script and want the command invocation to be somewhat self-documenting.
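For example, with Python's argparse you get both forms almost for free (a minimal sketch; `mytool`, `-v/--verbose`, and `-o/--output` are made up for illustration):

```python
# Sketch: defining both a short and a long form for every switch, so
# "mytool -v -o out.txt" and "mytool --verbose --output out.txt" both work.
import argparse

parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="print progress details")
parser.add_argument("-o", "--output", metavar="FILE", default="out.txt",
                    help="where to write results")
args = parser.parse_args()

if args.verbose:
    print(f"writing to {args.output}")
```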
Fascinating topic. There are two ways the user/client/browser receives reports about the character encoding of content. And there are hefty caveats about how reliable those reports are.
(1) First, the Web server usually reports a character encoding, a.k.a. charset, in the HTTP headers that come with the content. Of course, the HTTP headers are not part of the HTML document but are rather part of the overhead of what the Web server sends to the user/client/browser. (The HTTP headers and the `head` element of an HTML document are entirely different.) One of these HTTP headers is called Content-Type, and conventionally this header often reports a character encoding, e.g., "Content-Type: text/html; charset=UTF-8". So this is one place a character encoding is reported.
If the actual content is not an (X)HTML file, the HTTP header might be the only report the user/client/browser receives about the character encoding. Consider accessing a plain text file via HTTP. The text file isn't likely to itself contain information about what character encoding it uses. The HTTP header of "Content-Type: text/plain; charset=UTF-8" might be the only character encoding information that is reported.
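A quick way to see what the server itself claims, for whatever that claim is worth (a sketch using only Python's standard library; the URL is just a placeholder):

```python
# Sketch: reading the charset a server reports in its Content-Type header.
from urllib.request import urlopen

with urlopen("https://example.com/") as resp:
    content_type = resp.headers.get("Content-Type", "")  # e.g. "text/html; charset=UTF-8"
    charset = resp.headers.get_content_charset()          # parses out the charset= parameter, or None
    print(content_type, "->", charset)
```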
(2) Now, if the content is an (X)HTML page, a character encoding is often also reported in the content itself, generally in the HTML document's head section in a meta tag such as '<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>' or '<meta charset="utf-8">'. But just because an HTML document self-reports that it uses a UTF-8 (or whatever) character encoding, that's hardly a guarantee that the document does in fact use that encoding.
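For completeness, here's a rough sketch of fishing that self-reported charset out of the markup, covering both forms of the meta tag (standard-library HTMLParser; the sample HTML at the bottom is made up):

```python
# Sketch: extracting the charset an HTML document claims for itself.
from html.parser import HTMLParser

class MetaCharsetFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.charset = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta" or self.charset:
            return
        d = {k: (v or "") for k, v in attrs}
        if d.get("charset"):                                  # <meta charset="utf-8">
            self.charset = d["charset"]
        elif d.get("http-equiv", "").lower() == "content-type":
            # <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
            for part in d.get("content", "").split(";"):
                if part.strip().lower().startswith("charset="):
                    self.charset = part.strip().split("=", 1)[1]

finder = MetaCharsetFinder()
finder.feed('<html><head><meta charset="utf-8"></head><body>hi</body></html>')
print(finder.charset)  # utf-8
```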
Consider the case of a program that generates web pages from a boilerplate template whose head element still carries an ancient default of ISO-8859-1 in its meta charset tag, even though the body content poured into the template comes from a database that emits UTF-8 by default. Boom. Mismatch. Janky code is spitting out mismatched and inaccurate character encoding information every day.
Or consider web servers. Picture a web server whose config file contains the typo "uft-8" because somebody fat-fingered it while updating the config (I've seen this in the wild on random web pages). Or a web server that uses a global default of "utf-8" in its outgoing HTTP headers even when the content being served is a hodge-podge of UTF-8, WINDOWS-1251, WINDOWS-1252, and ISO-8859-1. This too happens all the time.
I think the most important takeaway is that with both HTTP headers and meta tags, there's no intrinsic link between the character encoding being reported and the actual character encoding of the content. What a Web server tells me and what's in the meta tag in the markup just count as two reports. They might be accurate, they might not be. If it really matters to me what the character encoding is, there's nothing for it but to determine the character encoding myself.
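In the crudest case that can be as simple as trying decodes until one sticks (a sketch only; real detectors like charset-normalizer or chardet do a far more statistical job, and note that latin-1 will "succeed" on any byte stream, so the order of candidates matters):

```python
# Sketch: ignore what the server and the markup claim and test the bytes directly.
def sniff_encoding(raw: bytes) -> str:
    for candidate in ("utf-8", "cp1252", "latin-1"):
        try:
            raw.decode(candidate)
            return candidate
        except UnicodeDecodeError:
            continue
    return "unknown"

print(sniff_encoding("naïve café".encode("utf-8")))   # utf-8
print(sniff_encoding("naïve café".encode("cp1252")))  # cp1252
```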
I have a Hacker News reader, https://www.thnr.net, and my program downloads the URL for every HN story with an outgoing link. I have seen binary files sent with a "UTF-8" Content-Type header. I have seen UTF-8 files sent with an "inode/x-empty" Content-Type header. My logs have literally hundreds of goofy, inaccurate reports of content types and character encodings. Because I'm fastidious and I want to know what a file actually is, I have a function `get_textual_mimetype` that analyzes the content of what the URL's web server sends me. My program downloads the content and uses tools such as `iconv` and `isutf8` to get some information about what encoding it might be. It uses `xmlwf` to check whether it's well-formed XML. It uses `jq` to check whether it's valid JSON. It uses `libmagic`. There's a lot of fun stuff the program does to pin down with a high degree of certainty what the content is. I want my program to know whether the content is an application/pdf, an image/webp, a text/html, an application/xhtml+xml, a text/x-csrc, or whatever. Only a rigorous analysis will tell you the truth. (If anyone is curious, the source for `get_textual_mimetype` is in the repo for my HN reader project: https://github.com/timoteostewart/timbos-hn-reader/blob/main... )
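To give a flavor of that approach (a made-up `probe` sketch, not the actual `get_textual_mimetype`; it assumes `file`, `isutf8`, `xmlwf`, and `jq` are installed, and the path at the bottom is a placeholder):

```python
# Sketch: shelling out to a few external tools to interrogate a downloaded file.
import subprocess

def probe(path: str) -> dict:
    results = {}
    # libmagic's guess at the MIME type, via the `file` CLI
    results["magic"] = subprocess.run(
        ["file", "--brief", "--mime-type", path],
        capture_output=True, text=True).stdout.strip()
    # isutf8 (from moreutils) exits 0 only if the file is valid UTF-8
    results["valid_utf8"] = subprocess.run(
        ["isutf8", path], capture_output=True).returncode == 0
    # xmlwf (from expat) prints nothing if the document is well-formed XML
    results["well_formed_xml"] = subprocess.run(
        ["xmlwf", path], capture_output=True, text=True).stdout.strip() == ""
    # jq exits nonzero if the file is not valid JSON
    results["valid_json"] = subprocess.run(
        ["jq", "empty", path], capture_output=True).returncode == 0
    return results

print(probe("downloaded_content"))
```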