Hacker News | frompom's comments

Do you have the same expectation for any company launching hardware, that they cite the various papers related to how the tech was developed? EVERY piece of tech announced by ANY company relies on a body of prior research, yet nobody expects a launch announcement to cite the numerous papers behind it. Why would products/services in this category be any different?


> Do you have the same expectations for any company launching hardware that they cite the various papers related to how the tech was developed?

If they try to market it under a seemingly unique or yet-unheard-of name, then yeah. It is nice to know what the "real world" name of an Apple-ized technology is.

Just ignoring it and marketing the technology under some new name is adjacent to lying to your audience through omission.


> Just ignoring it and marketing the technology under some new name is adjacent to lying to your audience through omission.

They don't market technology, they market solutions. E.g. afib detection on the Apple Watch, rather than calling it a BNNS model using a custom-built sensor for a single-lead EKG.

This is the document where they describe how the solution works, and they clearly state adapters work based on LoRA.
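For context on what "adapters based on LoRA" means mechanically, here is a minimal sketch of a LoRA-style update in NumPy. The dimensions, rank, and scaling below are illustrative assumptions, not Apple's actual configuration:

```python
import numpy as np

# Minimal LoRA (Low-Rank Adaptation) sketch. The frozen weight W stays
# untouched; only the small A and B matrices would be trained.
d, r = 512, 8                            # hidden size, adapter rank (r << d)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def forward(x, alpha=16.0):
    # Base output plus a low-rank correction; because B starts at zero,
    # the adapter is a no-op until it is trained.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
assert np.allclose(forward(x), x @ W.T)  # untrained adapter changes nothing
```

The point of the technique is that the trainable adapter (A and B, 2·r·d values) is tiny next to the frozen base weight (d² values), so many task-specific adapters can ship on top of one shared model.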


I think that's still just an explanation of the biases that shape development direction. I don't view it as a criticism but as an observation. We use LLMs in our products, I use them daily, and I'm not sure why that reads as negative.

We all have biases in how we determine intelligence, capability, and accuracy. Our biases color our trust and ability to retain information. There's a wealth of research around it. We're all susceptible to these biases. Being a researcher doesn't exclude you from the experience of being human.

Our biases influence how we measure things, which in turn influences how things behave. I don't see why you're so upset by that pretty obvious observation.


The full comment is right there, we don't need to seance what the rest of it was or remix it.

> Arguably, it is the other way around: they aren’t focused on appealing to those biases, but driven by them, in that the perception of language modeling as a road to real general reasoning is a manifestation of the same bias which makes language capacity be perceived as magical

There's no charitable reading of this that doesn't give the researchers way too little credit, given the results of the direction they've chosen.

This has nothing to do with biases and emotion, and I'm not sure why some people need it to: modalities have progressed in order of how easy it is to wrangle data for them: text => image => audio => video.

We've seen that training on more tokens improves performance, we've seen that training on new modalities improves performance on the prior modalities.

It's needlessly dismissive to act as if you have some mystical insight into a grave error these people are making, as if they're just seeking to replicate human language out of folly, when you're ignoring the table stakes of their underlying work to begin with.


Note that there is only one thing about the research that I have said is arguably influenced by the bias in question, “the perception of language modeling as a road to real general reasoning”. Not the order of progression through modalities. Not the perception that language, image, audio, or video are useful domains.


Since foveated rendering only sends the resolution required for what the user can actually perceive, even logs in the peripheral space would be fine, since they would be sent at much lower resolution. I think the challenge with smart foveated rendering would likely be latency.

Another option would be handling rendering on the Vision Pro rather than the MacBook so pixels don't need to be streamed at all.


AVP already has foveated rendering, so if macOS had some awareness of what is being rendered, it could potentially drive an effectively unlimited number of windows, since it would only need to render and stream enough to cover what is actually being looked at.

The primary problem here, I'd guess, would be latency, so maybe it isn't feasible. The other possibility is doing the actual rendering on device instead of streaming pixels, which would dramatically decrease the bandwidth required.

I think there's a solution somewhere for this.
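To make the bandwidth argument above concrete, here is a toy estimate of the streamed-pixel savings from foveation. All the numbers (frame size, foveal window, peripheral downscale factor) are illustrative assumptions, not Vision Pro specifics:

```python
# Toy estimate: stream full resolution only in a small window around the
# gaze point, and downscale everything else. Numbers are hypothetical.
frame_w, frame_h = 3840, 2160        # hypothetical streamed frame
fovea_w, fovea_h = 960, 960          # full-res region around the gaze point
peripheral_scale = 4                 # periphery downscaled 4x per axis

full_cost = frame_w * frame_h                          # 8,294,400 px
foveal_cost = fovea_w * fovea_h                        # 921,600 px
periphery_cost = (full_cost - foveal_cost) // peripheral_scale**2
foveated_cost = foveal_cost + periphery_cost

print(f"full: {full_cost} px, foveated: {foveated_cost} px, "
      f"ratio: {full_cost / foveated_cost:.1f}x")      # 6.0x here
```

Even under these rough assumptions the streamed pixel count drops several-fold, which is why latency of the gaze-to-encoder loop, not raw bandwidth, tends to be the binding constraint.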



Ultimately we need to pick some point. Someone doesn't undergo a transformation at midnight on the day they become a legal adult, nor does the age of consent suddenly confer reasoning capabilities that didn't exist the day prior. Legal definitions don't handle nuance well, so we end up defining thresholds that look ridiculous up close. Whether it's a certain number of weeks, or birth itself, any threshold looks ridiculous when examined closely.

In your example of 8.5 months vs. 9.5 months, they may actually have been conceived quite close to each other, since gestational age is dated not from conception but from the last menstruation. So that, too, is a relatively arbitrary measurement.

It's the reality of having to have universal absolute measures and terms in a world of edge cases and grey areas.

Being a thinking/aware person wouldn't happen suddenly; it would be gradual. Gradual rights would be damn near impossible to codify, so like everything else we have to pick some point that seems reasonable yet will feel silly and arbitrary at the same time.


More and more flights (maybe 25% of my flights lately) have internet that is fast enough for streaming video and allows it. I'm sure this trend will continue. Downloads are obviously still really beneficial, but it's becoming less and less of an issue on flights.


btw, using StatCounter for browser market share too, so you're comparing apples to apples, puts Safari usage at around 19%: https://gs.statcounter.com/browser-market-share


If OSX has 16% desktop operating system share how can Safari have 19% desktop browser share?


They aren't my stats. I'm just pointing out that using one stat from source A and another stat from source B to draw a conclusion, when source A has both stats, looks like hunting for stats that support a predetermined conclusion.


Keep in mind that the lens distorts the pixels, so the density is not uniform across the FOV. Word is the PPD is 50-70 in the center, and detail beyond ~60 PPD is not perceptible anyway.
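A back-of-envelope illustration of why the center PPD can exceed the naive average: dividing pixel count evenly across the FOV understates central density, because the lens maps more panel pixels to central degrees than to peripheral ones. The resolution and FOV figures below are rough public estimates, not official specs:

```python
# Naive uniform-density pixels-per-degree (PPD) estimate.
# Panel resolution and FOV are rough public estimates (assumptions).
pixels_horizontal = 3660      # approx. per-eye horizontal pixels
fov_degrees = 100             # approx. horizontal FOV per eye

avg_ppd = pixels_horizontal / fov_degrees
print(f"average PPD: {avg_ppd:.1f}")   # ~36.6

# The lens concentrates pixels toward the center, so central PPD
# (reported 50-70) sits well above this uniform average, at the cost
# of lower density toward the edges of the FOV.
```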

