Hacker News | past | comments | ask | show | jobs | submit | coredog64's comments

Excited to see this. Have some upcoming work projects that involve Parquet and Java. Fingers crossed I can get approval to use Java 21.

Probably something like NVLink Fusion. AWS has been doing deals with suppliers for which the smallest unit of deployable compute is a 44U rack (e.g. Oracle), so this is more of the same.

https://www.nvidia.com/en-us/data-center/nvlink-fusion/


I think the difference is that these are intended for automotive use and have a much longer range than the ones in your Roomba.

True, you have to go up to $120 for the 25m version, or $450 for Unitree's L2, which gets 30m in 3D. That's about as much as you could possibly ever need unless you're making high-speed vehicles that need more reaction time. In which case you probably shouldn't be relying on the cheapest thing on the market :)

You're already downvoted, but this quote from Fight Club always annoyed me as it misunderstands how recalls work.

1. Insurance companies price in the risk, and insurance pricing absolutely influences manufacturers (see the absolute crap that the Big 3 sold in the 70s)

2. The government can force a recall based on a flaw whether or not the manufacturer agrees


> this quote from Fight Club always annoyed me as it misunderstands how recalls work.

How plausible is it that the narrator is the one misunderstanding it, or making stuff up while talking to the lady?


v2.0 - Tesla drivers insure with Tesla and the recalls are all OTA software fixes.

My take is it's the inference efficiency. It's one thing to have a huge GPU cluster for training, but come inference time you don't need nearly so much. Having the TPU (and models purpose-built for TPU) allows for the best serving cost at hyperscale.

Yes, potentially - but the OG TPUs were actually very poorly suited for LLM usage: they were designed for far smaller models with more parallelism in execution.

They've obviously adapted the design, but optimising in hardware like that is a risk: if there's another model-architecture jump, a narrowly specialised set of hardware means you can't generalise enough.


Prefill has a lot of parallelism, and so does decode with a larger context (very common with agentic tasks). People like to say "old inference chips are no good for LLM use" but that's not really true.

This era we'd like to return to, when did it end?

Gradually. The current unholy mess of money being able to legally buy politicians didn't happen on one specific day or with one specific rule.

> Despite being so massive, their retail operation makes almost no money

You misunderstand the point of retail. It's now a marketplace where they use their name recognition and (alleged) consumer friendliness to collect fees from sellers. It costs to list, it costs to do FBA, and it costs to run ads so that your products appear in search results. Amazon's ads business is incredibly profitable.

That's also why Prime has such a grab bag of benefits. By keeping Prime membership sticky, the overall value of that marketplace supports the fees charged to sellers.


There are huge advantages to co-location within the same time zone (plus or minus 2 hours). India is practically half a day away from the US, meaning you only get overlap for an hour or two at the end of their working day.


Back when I had a Prius, I made a conscious effort to avoid using the brake pedal during the highway portion of my commute. It made a small difference to fuel economy, but treating it as a game reduced the frustration of stop-and-go traffic.


Two sides to hypermiling:

1) Maintain a safe distance so you touch the brake as little as possible

2) Follow so close behind the other car that you draft off them (not recommended!)


I'm contemplating GLP-1 treatment but I'm concerned that it will accidentally decrease the obsession that I have that makes me good at my job.


It's mostly appetite suppressing. It affects the perception of hunger and the brain's reward response to eating, which must also be part of what helps with drug and alcohol addiction.

IME it doesn't act like an antidepressant/SSRI, which can affect your enthusiasm for or interest in your job.

Absolutely life changing drug for me.


But what if this self-described obsession translates into burnout? Does it actually make you good or just work more compared to your peers? Can you maintain it for the rest of your career?

(I'm just concerned; I've seen many people who were good at and super into their jobs end up with burnout, often multiple times, because they keep thinking "I used to be good at this!", "I enjoy this!", etc., instead of accepting that it was never sustainable in the first place. I suspect people's nervous systems are more resilient in their 20s, which is why most people with burnout only start to run into it in their 30s.)


It's a weekly injection - you can always stop taking it if you're unhappy with the effects.


Also in pill form now in the US. I’m on it, results were pretty quick.


Another N=1, I've noticed zero impact on my desire to engage in my normal obsessions while on GLP-1.

What GLP-1 did (initially) was give me horrible insomnia that peaked a couple of days after the injection, so I timed my dosage to suffer through that on the weekend. It got better over time and eventually went away after about six weeks.

Regardless, as another poster mentioned, it's a weekly injection and if you don't like the effects you can stop taking it.


I was excited about effects like this, but I think they're entirely absent unless you're obsessed with food-related app development or something else tied to appetite.


N=1, I've been on GLP-1s for a long time and continue to be an obsessive.


I take it; still obsessed.

