In the earliest days of getting people to pay for cable TV when OTA was free, the pitch was that you'd see fewer/no commercials. That didn't last long...
> In the earliest days of getting people to pay for cable TV when OTA was free, the pitch was that you'd see fewer/no commercials.
No, it was quality of reception, especially for people who were farther from (or had inconvenient terrain between them and) broadcast stations; literally the only thing on early cable was exactly the normal broadcast feed from the covered stations, which naturally included all the normal ads.
Premium add-on channels that charged on top of cable, of which I think HBO was the first, counted being ad-free among their selling points, but that was never part of the basic cable deal.
That varied by region. When cable came to my town in the early 1980s, HBO and Cinemax were part of the local cable provider's basic package. That lasted until the next provider bought them out.
Oh, sure, definitely some providers did things like that early on to drive growth, especially when they were trying to expand into areas less dissatisfied with existing broadcast quality than the initial cable markets were. (And even once it stopped, it was common to bundle premium channels into the basic cost for a limited time for new customer acquisition.)
This doesn't ring true; TV has always been deeply linked with ads. It just seems that they moved to fractional ownership of a show via many advertisers vs. the (perhaps less intrusive) show sponsor, where the advertising was woven into the plot.
I think I'm older than most HN commenters. I can't Google up a citation, but "no or fewer ads" was part of the pitch in the early-mid 1970s in my recollection. You are correct about TV and ads, so maybe I'm wrong.
Not really. Cable TV started as a better way for people to get OTA channels when they were in marginal reception areas. My family had cable TV in the 1970s, and it was maybe eight or ten OTA channels; except for the PBS station, they all had commercials, between shows and during them.
HBO was the first offering that didn't have ads during the show.
CATV originally stood for 'community antenna television' and was for those who lived in a valley where TV signals couldn't reach. The community built one antenna at the top and ran a cable down to everyone. Of course it was an obvious addition after that to add extra channels.
Interesting, I grew up in an area with good reception, so the pitch was definitely fewer commercials on the cable channels (HBO, Nickelodeon, MTV), I remember standing in the living room as the salesman said this. It was true for a while, but eventually they caught up to OTA ad loads.
Yea, the no-ads theory of cable history seems to be pervasive. The only ad-free channels were the premium ones like HBO. It's like people think the OTA channels that were packaged together had some magic applied that eliminated ad breaks from the exact same feed as the OTA broadcast. The cable-only channels like USA had ads as well. I guess it's just another example of how, if you tell a lie often enough, people will accept it as truth.
Yes, those using the tools use the tools, but I don't really see those developers absolutely outpacing the rest of the developers who still do it the old-fashioned way.
I think you're definitely right, for the moment. I've been forcing myself to use/learn the tools almost exclusively for the past 3-4 months and I was definitely not seeing any big wins early on, but improvement (of my skills and the tools) has been steady and positive, and right now I'd say I'm ahead of where I was the old-fashioned way, but on an uneven basis. Some things I'm probably still behind on, others I'm way ahead. My workflow is also evolving and my output is of higher quality (especially tests/docs). A year from now I'll be shocked if doing nearly anything without some kind of augmented tooling doesn't feel tremendously slow and/or low-quality.
I think inertia and determinism play roles here. If you invest months in learning an established programming language, it's not likely to change much during that time, nor in the months (and years) that follow. Your hard-earned knowledge is durable and easy to keep up to date.
In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.
totally fair. I do think a lot of the learnings remain relevant (stuff I learned back in April is still roughly what I do now), and I am increasingly seeing people share the same learnings; tips & tricks that work and whatnot (i.e. I think we’re getting to the dust settling about now? maybe a few more months? definitely uneven distribution)
also FWIW I think healthy skepticism is great; but developers outright denying this technology will be useful going forward are in for a rude awakening IMO
The water argument rings a bit hollow for me, not due to whataboutism but because there's an assumption that I know what "using" water means, which I'm not sure I do. I suspect many people have even less of an idea than I do, so we're all kind of guessing, and therefore going to guess in ways favorable to our initial position, whatever that is.
Perhaps this is the point; maybe the political math is that more people than not will assume that using water means it's not available for others, or somehow destroyed, or polluted, or whatever. AFAIK they use it for cooling, so it's basically thermal pollution, which TBH doesn't trigger me the same way chemical pollution would. I don't want 80°C water sterilizing my local ecosystem, but I would guess that warmer, untreated water could still be used for farming and irrigation. Maybe I'm wrong, so if the water angle is a bigger deal than it seems, then some education is in order.
If water is just used for cooling, and the output is hotter water, then it's not really "used" at all. Maybe it needs to be cooled to ambient and filtered before someone can use it, but it's still there.
If it was being used for evaporative cooling then the argument would be stronger. But I don't think it is - not least because most data centres don't have massive evaporative cooling towers.
Even then, whether we consider it a bad thing or not depends on the location. If the data centre was located in an area with lots of water, it's not some great loss that it's being evaporated. If it's located in a desert then it obviously is.
If you discharge water into a river, there are environmental limits to the outlet temperature (this is a good thing btw). The water can't be very hot. That means you need to pump a large volume of water through because you can only put a small amount of energy into each kg of water.
If you evaporate the water, on the other hand, not only is there no temperature limit but it also absorbs the latent heat of vaporisation. The downside is that it's a lot more complex, and the water is truly consumed rather than just warmed up.
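To put rough numbers on that trade-off, here's a minimal back-of-envelope sketch. The 100 MW heat load and 10 K permitted temperature rise are illustrative assumptions, not figures from anywhere in the thread; only the specific heat and latent heat of water are standard values.

```python
# Rough comparison: water needed to reject an assumed 100 MW of heat
# via once-through (sensible) cooling vs. evaporative cooling.
# The heat load and temperature rise below are illustrative assumptions.

HEAT_LOAD_W = 100e6      # assumed heat rejection: 100 MW
CP_WATER = 4186.0        # specific heat of water, J/(kg*K)
DELTA_T = 10.0           # assumed permitted outlet temperature rise, K
LATENT_HEAT = 2.26e6     # latent heat of vaporisation of water, ~J/kg

# Once-through: Q = m_dot * cp * dT  ->  water is warmed and returned
sensible_flow = HEAT_LOAD_W / (CP_WATER * DELTA_T)   # kg/s

# Evaporative: Q = m_dot * L  ->  water leaves as vapour (consumed)
evaporative_flow = HEAT_LOAD_W / LATENT_HEAT         # kg/s

print(f"Once-through: ~{sensible_flow:,.0f} kg/s pumped through, returned warmer")
print(f"Evaporative:  ~{evaporative_flow:,.0f} kg/s actually evaporated")
```

With these assumed numbers it comes out to roughly 2,400 kg/s pumped through versus roughly 44 kg/s evaporated, which is the point above: once-through cooling needs a far larger volume of water but gives it back, while evaporative cooling uses a fraction of the flow and genuinely consumes it.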
Put that way, any electricity usage will have some "water usage" as power plants turn up their output (and the cooling pumps) slightly. And that's not even mentioning hydroelectric plants!
I've always used 2-3 monitors pretty comfortably but with high latency AI agents adding more concurrency to my workflows I'm feeling very crowded. I would love a VR experience with an arbitrary number of screens/windows as well as more clearly separated environments (like having a visually different virtual office per project) that I can quickly switch between.
My assumption is that it's a network bottleneck, and apple clutches their pearls when anyone suggests lowering resolution or allowing for some latency.
My take is like, make me tether with usb-c, reduce resolution and increase latency if I go over what the connection can handle. Use foveated rendering. All I want is more screens.
For now, I'm working with Virtual Desktop on my Quest 3. It's not ideal - pixel density at the edge sucks, and even in the center it's not quite good enough for text unless I enlarge my screens to be the size of barn doors - but I get 3 very large screens out of my M1 and that makes me happy enough. It's also lighter than an AVP, which, after a test drive, I assume would become a literal pain in the neck over multi-hour sessions.
Whatever the tradeoffs are, though, if apple offered infinite screens with text-readability I'd gladly throw money at them for the privilege.
Tinfoil hat moment - I do wonder if the AVP devs got a visit from a bat-wielding gang of monitor engineers. Apple screens ain't cheap.
Early in my career (which started in civil engineering) I was working with a man at the very end of his, which started in the 1950s. I was the young tech-focused intern who found a way to use a computer for everything even when printed and sometimes hand-drawn plans were the standard of the day. He asked me once if I knew how to use a slide rule, which I didn't.
"Well, what do you do when the power goes out?", he asked.
"I go home, just like you would.", I said with a smile.
He paused for a moment and nodded, "you know, you're absolutely right".
Nice story. I guess it can be looked at as some sort of parable. But if I take it literally: I never had a power outage at work, but SaaS downtime happens every year (probably multiple times).
You probably have to really try to take too much Vitamin D with any over-the-counter supplement (<=5,000 IU), especially if you live that far north. For reference, a prescription dose for someone who is low is usually 50,000 IU weekly.
It should be part of your standard blood tests so you should know if you're running high or low and your doctor can recommend or prescribe a good dose.
With 5,000 IU doses, sometimes taking several at a time, I had blood levels of Vitamin D at the top of the range. That wasn't dangerous, just informative: "hey, you're getting enough, tone it down".
> Make a ballpark, even lowball, estimate for that risk, and simply require the people inside the vehicle to compensate others for the risk being imposed on them.
I see your point but we're all imposing risks on each other all the time. I'm sitting on the 6th floor of a 10 floor building, presumably I'm at some non-zero risk of it collapsing, which would be lower if this building was shorter, but I don't feel entitled to compensation from the owner for the marginal risk because they wanted more floors.
I think we've actually done a lot better in reducing the externalities of direct vehicle deaths (insurance, safety standards, vehicle inspections, etc.) than we have in other areas (energy costs, environmental impact, city/street design, parking, etc.)
I understand there's some level of risk we all just have to accept from each other. I just don't agree that cars fall under that level. It's a huge step change from the risk that you or I will run into each other while walking down the street - suffering perhaps a bruise or two at worst - and the risk imposed on me as someone drives by in a large vehicle at high speed, with easily enough kinetic energy to kill & maim a dozen people and a building to boot.
I think the concepts are inevitable, not so much the specific implementations. PCs were an inevitable stop on the fairly standard adoption path from many workers to one machine (mainframes), to 1:1 (PCs), to many machines per worker (I have at least 10 computers within arm's reach right now). IBM/Windows/Apple weren't inevitable; they were just manifestations of that. ICs weren't inevitable, but commoditized computer parts were. TCP wasn't inevitable, but a lingua franca for networks was. LLMs weren't inevitable, but AI is.
To your overall point though, and to the contrary of the type of thinking you're critiquing, the timing of these things is not inevitable. Computers didn't have to happen in the 50s, they could easily have waited 50 or 100 years if we didn't have things like wars or other technological breakthroughs that enabled them. For AI we might be stuck on incremental improvements on LLMs for a generation, or they might be obsolete in 5 years. They will be replaced by something better at some point, but confusing (intentionally or not) inevitable with soon is where the hype proves itself hollow.
> Computers didn't have to happen in the 50s, they could easily have waited 50 or 100 years if we didn't have things like wars or other technological breakthroughs that enabled them.
Not really. While all the Government-funded stuff was going on, International Business Machines was slowly advancing their business machines. There was a long path from the IBM 601 (1931, mechanical multiplication, plugboard programmed), the IBM 602 (1946, mechanical division), the IBM 602A ("a 602 that worked"), the IBM 603 (1946, multiplication and division with vacuum tubes, but still plugboard programmed), the IBM 604 (1948, with 1,250 tubes), and finally the IBM 650 (1954, true stored program, tube logic, drum main memory, Knuth's first computer). The government-funded machines were more advanced but very low volume. All of the 600 series machines were mass-produced and had long operating careers making businesses go.
Transistors had to progress more before IBM business computers became transistorized. The IBM 1401 (1959, all transistor, 12,000 built) launched business computing in a big way. From then on, the business side, rather than the government side, drove the technology. All that would have happened without WWII. WWII held up the IBM 603 electronic multiplier by several years; IBM was trying out electronic arithmetic in the late 1930s.
That distinction is really useful. My critique is aimed at how often “inevitability talk” blurs those two levels together. It’s one thing to say “networks need a lingua franca,” it’s another to say “TCP/IP was inevitable.” When people collapse the concept into the specific implementation, that’s when the rhetoric becomes persuasive but misleading.
It's probably just ego on the one side. That person likes to be invited to feel like they are the more valuable person in the relationship. If I were the other person I would make sure that invitation is never extended.
Who cares if they feel like they are the more valuable person in the relationship? Do you decide your framework based on mental games other people might play? Decide if extending an invite that is declined will cost you something (food, space, etc.) and whether you want the person there.
It's understandable, but in no way nice. One side is going to bring their authentic shy and antisocial self, and stonewall the invitations, while the other side needs to keep smiling and send invitations no matter what. This sounds slightly lopsided, doesn't it?
If you would like the other side to do you a small favor every time, it's worth considering doing the same. At least respond to the invitation with gratitude and a hope to maybe do it next time.
You can overcome shyness to some extent. Not getting invited anymore can also be a sign that the shy person has to change something about their behavior, instead of everyone else just accepting it.
Oh wow that is foreign to me, but I’m sure you’re right - Collecting invites you never intend to answer just sounds like… I don’t know, some sort of weird social hoarding.
If somebody I don’t want to hang out with keeps inviting me that doesn’t make me feel good about myself, that makes me feel anxious, like I haven’t properly clarified our relationship with them.
> That person likes to be invited to feel like they are the more valuable person in the relationship.
For me, I would expect the opposite - if you get invited all the time but never come, it’s because you’re not actually involved in their life, you’re not actually all that valuable. In order to be valuable you’d have to be making the effort to be present, or at the very least, communicating your availability so the other person would better understand when it’s appropriate to expect you.
I believe they were implying they don’t get social cues due to neurodivergence, likely autism. Hilariously you’re also not picking up on their social cues and implications, which is likewise telling.
Not reading such motives is not a sign of neurodivergence. If people are jumping to these types of conclusions, it's their deficiency. Plenty of normal, non-neurodivergent people refuse to read much into these things.
I've read a number of books on effective communications, and they all emphasize not to read into these signals, and when you do, to go and have a conversation about it to confirm them. I found, as many have, that the error rate is about 50% (i.e. half the time you read the signals wrong).
These books are for normal people - not neurodivergent folks.
It’s perhaps even more maddening than that. Even if all these factors are at play, it doesn’t mean they actually matter all that much to anyone involved. These two coworkers might otherwise really get along and respect each other, but this is one of the games that they are playing with each other.
On the surface, implicitly negotiating over who is more important sounds horribly dramatic, but it’s a game that’s happening constantly among everyone. Usually folks push and pull over some equilibrium point, one person making concessions, then the other, in turns, with the actual hierarchy determining roughly how many turns each person should concede before making a demand of the other. This is where you get dynamics like “he’s a very demanding boss but he cares a lot about his employees” (high amplitude of switching between demand and concession) or “she’s very sharp but also hard to get along with” (doesn’t concede enough to make others feel important).
Concession in this game can be anything, small to large, from being the one who opens the door to let the other through, to offering help during personal problems, to letting someone take more credit on a collaboration.
But, again, these are all played in the implicit layer. They can be raised to the explicit layer by having a “heart to heart”, like “you’re always so kind. I appreciated when you did XYZ”, or “I’d really like if sometimes you did ABC”.
Here's some advice: There will literally never, ever be a situation in your life when it is okay or even remotely appropriate to tell somebody else that "they're autistic".
If you figure that someone is autistic, just make the accommodation you notice they need, because if you don't, you are in fact the one demanding that they do the work of making the social thing happen for both people.