The connection between the furnace and the thermostat probably shouldn't go through the internet.
So it's perfectly reasonable for the furnace to turn off when it is disconnected, because disconnection would be a very strong signal for an error state instead of regular intermittent network/service issues.
They're not; "smart" thermostats have a WiFi frontend that lets devices on the network connect to them, but the thermostat itself is hardwired to the furnace/HVAC.
You could in theory put one next to the furnace in your machine closet but that would be dumb and expensive
Certainly, the standard smart thermostat setup is that your ecobee is connected to the Internet but controls the furnace using good old-fashioned signal wires.
Which is only extremely tangentially related to "if I pull my thermostat off the wall"
The overwhelmingly most common connection between a thermostat and a furnace is a contact closure when calling for heat, with no ability to differentiate between "thermostat is present but not calling for heat" and "thermostat is not present": both look the same, i.e. "these T-T contacts are not closed/shorted together".
the connection to my thermostat is via a cable; if I pull it out of the wall it won't be connected to anything at all. the whole furnace is not connected to anything but mains power.
yeah, the default in this case has to be "off" to prevent damage from running blind. on that note, perhaps other things in the house should be certified to handle being frozen
Typically when people are concerned about their house freezing in cold climates, they are primarily worried about water freezing, expanding, and cracking pipes and fittings.
It is extraordinarily hard to design something that can withstand that pressure and still be fit for purpose. The item needs to withstand pressures in excess of ~10k psi at -10 °C, with the pressure rising as the temperature decreases.
The standard solution for people who need to winterize a building that will not be heated is to drain as much water as possible from the lines and then fill them with a liquid with a lower freezing point.
> Another "myth" is that Python is slow because it is interpreted; again, there is some truth to that, but interpretation is only a small part of what makes Python slow.
He concedes it's slow; he's just saying it's not related to how interpreted it is.
I would argue this isn't true. It is a big part of what makes it slow. The fastest interpreted languages are one to two orders of magnitude slower than, for example, C/C++/Rust. If your language does math 20-100 times slower than C, it isn't fast from a user perspective. Full stop. It might, however, have a "fast interpreter". Remember, the user doesn't care if it is fast for an interpreted language; they are just trying to accomplish their objective (i.e., do math as fast as possible). They could get cache locality perfect, and Python would still be very slow (from a math/computation perspective).
The 20-100 times slower is a bit cherry-picked, but use case does matter.
Typically, from a user perspective, the initial startup time is either manageable or imperceptible in the case of long-running services, although there are other costs.
If you look at examples that make the above claim, they are almost always tiny toy programs where the cost of producing byte/machine code isn't easily amortized.
This quote from the post is an oversimplification too:
> But the program will then run into Amdahl's law, which says that the improvement for optimizing one part of the code is limited by the time spent in the now-optimized code
I am a huge fan of Amdahl's law, but I also realize it is pessimistic and most applicable to parallelization.
It runs into serious issues when you are multiprocessing vs. parallel processing, due to preemption, etc.
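For reference, the formula itself is simple; here's a quick sketch (the function name and the example numbers are mine, purely illustrative):

    def amdahl_speedup(p: float, s: float) -> float:
        """Overall speedup when a fraction p of the runtime is sped up by factor s."""
        return 1.0 / ((1.0 - p) + p / s)

    # Even an effectively infinite speedup on 80% of the work caps the overall gain at 5x:
    print(amdahl_speedup(0.8, 1e9))  # ~5.0
    print(amdahl_speedup(0.8, 10))   # ~3.57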
Yes, you still have the costs of abstractions, etc., but in today's world, zero pages on AMD, 16k pages and a large number of mapped registers on ARM, barrel shifters, and so on make that much more complicated, especially with C being forced into trampolines and the like.
If you actually trace the CPU operations, the actual operations for 'math' are very similar.
That said modern compilers are a true wonder.
Interpreted languages are often all that is necessary and sufficient, especially when you have the Internet, databases, and other aspects of the system that also restrict the benefits of the speedups, due to... Amdahl's law.
I'm not so much cherry-picking as I am specifically talking about compute (not I/O or stdlib) performance. However, when measured for general-purpose tasks that involve compute along with things like I/O, stdlib performance, etc., Python on the whole is typically NOT 20-100x slower for a given task. Its I/O layer is written in C like many other languages, so the moment you are waiting on I/O you have leveled the playing field. Likewise, Python has a very fast dict implementation in C, so when doing heavy map work, you also amortize the time between the (brutally slow) compute and the very fast maps.
In summary, it depends. I am talking about compute performance, not I/O or general purpose task benchmarking. Yes, if you have a mix of compute and I/O (which admittedly is a typical use case), it isn't going to be 20-100x slower, but more likely "only" 3-20x slower. If it is nearly 100% I/O bound, it might not be any slower at all (or even faster if properly buffered). If you are doing number crunching (w/o a C lib like NumPy), your program will likely be 40-100x slower than doing it in C, and many of these aren't toy programs.
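As a rough illustration of the compute-only gap (not a rigorous benchmark; the array size is arbitrary and NumPy is standing in here for "doing the math in C"):

    import timeit
    import numpy as np

    N = 1_000_000

    def pure_python():
        # Every iteration boxes/unboxes ints and dispatches through the interpreter loop.
        total = 0
        for i in range(N):
            total += i * i
        return total

    def numpy_version():
        # The same reduction done in C-backed vectorized code.
        a = np.arange(N, dtype=np.int64)
        return int((a * a).sum())

    print(timeit.timeit(pure_python, number=10))
    print(timeit.timeit(numpy_version, number=10))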
In the PVM, binary operations remove the top of the stack (TOS) and the second top-most stack item (TOS1), perform the operation, and put the result back on the stack.
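You can see this directly with the `dis` module (exact opcode names vary by CPython version: `BINARY_MULTIPLY` in older releases, `BINARY_OP` in 3.11+):

    import dis

    def double(x):
        return x * 2

    dis.dis(double)
    # LOAD_FAST and LOAD_CONST push the two operands; the binary-multiply
    # opcode then pops TOS and TOS1, multiplies them, and pushes the result,
    # which RETURN_VALUE finally pops.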
That pop, pop isn't much more expensive on modern CPUs, and some C compilers will use a stack depending on many factors. And even in C you have to use structs of arrays, etc., depending on the use case. Stalled pipelines and fetch costs are the huge difference.
It is the setup costs, GC, GIL, etc., that make Python slower in many cases.
While I am not suggesting it is as slow as Python, Java is also bytecode, and often its assumptions and design decisions are even better than, or at least nearly equal to, C in the general case unless you highly optimize.
But the actual equivalent computations are almost identical; it's the optimizations that the compilers make that differ.
I'll answer your argument with the initial paragraph you quoted:
> A compiler for C/C++/Rust could turn that kind of expression into three operations: load the value of x, multiply it by two, and then store the result. In Python, however, there is a long list of operations that have to be performed, starting with finding the type of p, calling its __getattribute__() method, through unboxing p.x and 2, to finally boxing the result, which requires memory allocation. None of that is dependent on whether Python is interpreted or not, those steps are required based on the language semantics.
Typically a dynamic language JIT handles this by observing what actual types the operation acts on, then hardcoding fast paths for the one type that's actually used (in most cases) or a few different types. When the type is different each time, it has to actually do the lookup each time - but that's very rare.
The CPU does have to execute both lines, but it does them in parallel so it's not as bad as you'd expect. Unless you abort to the interpreter, of course.
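A hand-wavy sketch of that guarded fast path in plain Python (real JITs emit machine code for this, and all the names here are made up, but the shape is the same):

    class Point:
        __slots__ = ("x",)
        def __init__(self, x):
            self.x = x

    def generic_multiply_attr(obj, name, factor):
        # Generic slow path: full dynamic attribute lookup and dispatch.
        return getattr(obj, name) * factor

    def specialized_double(p):
        # Guard: the JIT observed that p has always been a Point so far.
        if type(p) is Point:        # cheap type check ("guard")
            return p.x * 2          # fast path specialized for that one type
        # Guard failed: fall back to the generic path (a real JIT might
        # instead deoptimize back to the interpreter here).
        return generic_multiply_attr(p, "x", 2)

    print(specialized_double(Point(21)))  # takes the fast path -> 42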
No, that's a quote from someone they interviewed who hasn't read the full paper (the full paper is not yet available). They are saying that the details of the study will determine how significant the results are.
> So correlation is likely just a data artifact of poor data analysis and nothing to do with intermittent fasting?
You have no idea whether the data analysis is good or not; the only thing that was released is the abstract.
> nobody in the 20th century imagined that within just two decades we'd be able to sequence the genome of a new pathogen within days, much less hours, or design a new vaccine within two weeks and have it in human clinical trials a month later