Hacker News | redox99's comments

Power cycling is not a solution. It's a crappy workaround, and you still had downtime because of it. The device should never get stuck in the first place, and the solution for that is fixing whatever bug is in the firmware.

If they want to reduce support calls, then have more reliable gear.


> Power cycling is not a solution. It's a crappy workaround, and you still had downtime because of it. The device should never get stuck in the first place, and the solution for that is fixing whatever bug is in the firmware.

I'm sympathetic to the argument that companies should make support calls less necessary by providing better products and services, but "just write bug-free software" is not a solution.


This isn't a case where you need bug free software. This is a case where the frequency of fatal bugs is directly proportional to the support cost. Fix the common bugs, then write off the support for rare ones as a cost of doing business.

The effect of cheap robo support is not reducing the cost of support. It is reducing the cost of development by enabling a more buggy product while maintaining the previous support costs.


Giving the device enough RAM to survive memory leaks during heavy usage would also be a valid option, as is automatic rebooting to get the device back into a clean state before the user experiences a persistent loss of connectivity. There are a wealth of available workarounds when you control everything about the device's hardware and software and almost everything about the network environments it'll be operating in. Fixing all the tricky, subtle software bugs is not necessary.

For a community full of engineers, I'm always surprised that people take absolutist views on minor technical decisions, rather than thinking about the tradeoffs that got them there.

The obvious tradeoff here is engineering effort vs. support cost, and when the tech support solution is "have you tried turning it off, then on again?", we know which path was chosen.

You can't just throw RAM at embedded devices that you make millions of and have extremely thin margins on. Have you bothered to look at the price of RAM today? At high numbers and low margins you can barely afford to throw capacitors at them, let alone precious rare expensive RAM.

No, XFinity are the ones who decided their routers “““need””” to have unwanted RAM-hungry extra functionality beyond just serving their residential customers' needs. Their routers participate in an entire access-sharing system so they can greedily double-dip by reselling access to your own connection that you already pay them for:

- https://www.xfinity.com/learn/internet-service/wifi

- https://www.xfinity.com/support/articles/xfinity-wifi-hotspo...


We're talking about devices where the retail price is approximately one month of revenue from one customer, and that's if there isn't an extra fee specifically for the equipment rental. Yes, consumer electronics tend to have very thin margins, but residential ISPs are playing a very different game.

A memory leak will, by definition, eventually consume any amount of RAM; adding more RAM is not a solution either.

You're implying all software/hardware is of equal quality. I've had many routers with years of uptime, never requiring a reboot.

And I'm sure they had a lot of bugs, but not every bug means hanging to the point of requiring a reboot during normal operation.

Even a proper watchdog would, after some downtime, recover the system.
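The watchdog idea above can be sketched in a few lines. This is an illustrative software watchdog, not any vendor's actual firmware: the main loop records a heartbeat, and a supervisor triggers recovery (e.g. a reboot) once the heartbeat goes stale. All names here are made up for the example.

```javascript
// Returns true when the heartbeat is older than the allowed timeout.
function isStuck(lastHeartbeatMs, nowMs, timeoutMs) {
  return nowMs - lastHeartbeatMs > timeoutMs;
}

// Hypothetical supervisor: the main loop calls heartbeat() periodically;
// something else (a timer, another process) calls check().
function makeSupervisor(timeoutMs, recover) {
  let lastHeartbeat = Date.now();
  return {
    heartbeat() { lastHeartbeat = Date.now(); },
    check(now = Date.now()) {
      if (isStuck(lastHeartbeat, now, timeoutMs)) recover(); // e.g. reboot
    },
  };
}
```

On real hardware this role is usually played by a hardware watchdog timer that reboots the SoC when the kernel stops petting it, which recovers even from a wedged OS, not just a wedged application loop.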


IME ChatGPT is pretty mid at search. Grok, although significantly dumber, is really strong at diligently going through hundreds of search results, and is much more tuned to rely on search results instead of its internal knowledge (which, depending on the case, can be better or worse). It's the only situation where Grok is worth using, IMO.

Gemini is really good with many topics. Vastly superior to ChatGPT for agronomy.

You should always use the best model for the job, not just stick to one.


I'd be friends with you. Wish you had contact info in your profile.

Auto will never work, because for the exact same prompt, sometimes you want a quick answer since it's not something very important to you, and sometimes you want the answer to be as accurate as possible, even if you have to wait 10 minutes.

In my case it would be more useful to have a slider of how much I'm willing to wait. For example instant, or think up to 1 minute, or think up to 15 minutes.


That's pretty close to what they have. They just named them Instant, Thinking (Standard), and Thinking (Extended), and they're discrete presets instead of a slider.

But the time it takes is too variable. Even standard can sometimes take 15+ minutes.

They have an "answer now" button that stops the reasoning and starts the reply. Same with Gemini.

Yeah, I use that, but it's not really a solution that lets you rely on Auto alone. It doesn't help when it chooses Instant instead of Thinking, and it's also much slower than using Instant outright because the Skip button doesn't show immediately, and it's generally slow to restart.

I'm growing so tired of the typical vibecoded UI design: the overuse of cards, icons, and emojis, and zero images.

Because it's pretty useful, for example to avoid refreshing data if the tab is unfocused and refresh immediately on focus.

> For a disease which (to my knowledge) can’t be slowed down or reversed

There's Lecanemab and Donanemab. The effects are modest however.


Trontinemab is in trials right now, with 92% of patients achieving low amyloid levels. And more people should be able to take it, as it causes less brain swelling (ARIA-E). I'm unaffiliated; I just follow medical research in my free time. But I'm quite hopeful about this medication.


I don't think there's much recursive improvement yet.

I'd say it's a combination of

A) Before, new model releases were mostly a new base model trained from scratch, with more parameters and more tokens. That takes many months. Now that RL is used so heavily, you can make endless tweaks to the RL setup and, in just a month, get a better model from the same base model.

B) There's more compute online

C) Competition is more fierce.


Not really; these subscriptions have clear, enforced 5-hour and weekly limits.


Wouldn't everything be on the Internet Archive? And Common Crawl?


Being on the Internet Archive and being able to pick up from a restored backup are two very different things


It's a wiki. Maybe you lose the edit history and stuff like that, but the actual content which is what matters should be very easy to recreate from those sources.
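Recreating content from the Internet Archive can start with the Wayback Machine's availability API, which is a real public endpoint; the wiki URL below is just a placeholder. A minimal sketch:

```javascript
// Build the Wayback availability-API URL for a given page.
function availabilityUrl(pageUrl) {
  return 'https://archive.org/wayback/available?url=' +
         encodeURIComponent(pageUrl);
}

// Extract the closest available snapshot URL from the API's JSON
// response, or null if no snapshot exists.
function closestSnapshot(response) {
  const snap = response.archived_snapshots &&
               response.archived_snapshots.closest;
  return snap && snap.available ? snap.url : null;
}

// Usage (requires network; the page URL is a placeholder):
// const res = await fetch(availabilityUrl('https://example-wiki.org/SomePage'));
// console.log(closestSnapshot(await res.json()));
```

For bulk recovery you'd enumerate page URLs first (e.g. from Common Crawl's index or the wiki's sitemap) and fetch snapshots one by one, which recovers rendered content but, as noted, not edit history.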


There's more compute now than before.

