Hacker News | IntrepidPig's comments

If you long press the volume bar in control center then it opens a larger version you can drag to adjust more precisely.


Also, if you pull down the Today view (or whatever it's called on iOS), there's a music player interface where you can drag the volume too.


You are my hero! Does this get added to the list of worst controls though since it's so buried?


You can also just drag directly on the slider that appears on the side of the screen when you press the volume buttons


Lovely. Never would have thought about that. Thank you!


It's already included in the list, between the pricing UI and the Windows XP disks


Probably so. The table heading “Key Finding” smells rankly of LLM, as does the massive overconfidence that they’ve single-handedly figured out the problem with American healthcare with a little data science, a level of confidence only an LLM or a schizophrenic could muster. (I haven’t read anything beyond the first part of the README because I don’t waste my time with slop, but I’m assuming they’re ignoring the incentive structures that encourage the system to stay this way.) Plus there’s the simple fact that they call out a completely meaningless $3T gap that doesn’t account for population difference at all. It’s so strange, because they mention the per capita difference right before that. That’s the number that matters. But they still go on to say “$3T gap,” and even measure the issues as percentages of that $3T gap. It’s nonsensical, right? I’m really tired of this.
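The per-capita point can be made concrete with a quick back-of-the-envelope calculation. All figures below are made up purely for illustration; none come from the project in question:

```python
# Illustrative (made-up) figures showing why a raw total-dollar "gap"
# is meaningless when the two populations differ in size.
us_spend, us_pop = 4.5e12, 330e6      # hypothetical total spend ($) and population
peer_spend, peer_pop = 0.7e12, 80e6   # hypothetical peer country

# The naive gap: dominated by the fact that one country is simply bigger.
raw_gap = us_spend - peer_spend       # $3.8T

# Per-capita figures are the comparable numbers.
us_pc = us_spend / us_pop             # ~$13,636 per person
peer_pc = peer_spend / peer_pop       # ~$8,750 per person

# A gap that actually means something: excess spending per person,
# scaled back up to the larger population.
adjusted_gap = (us_pc - peer_pc) * us_pop   # ~$1.6T

print(f"raw gap:      ${raw_gap / 1e12:.1f}T")
print(f"adjusted gap: ${adjusted_gap / 1e12:.1f}T")
```

With these toy numbers, the headline "gap" shrinks by more than half once population is accounted for, which is exactly why quoting the unnormalized total is misleading.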


> “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”

This doesn’t really make sense to me. GenAI ostensibly removes the drudgery from other creative endeavors too. You don’t need to make every painstaking brushstroke anymore; you can get to your intended final product faster than ever. I think what’s commonly misunderstood is that the drudgery is really inseparable from the soulful part.

Also, I think GenAI in coding actually has the exact same failure modes as GenAI in painting, music, art, writing, etc. The output lacks depth, it lacks context, and it lacks an understanding of its own purpose. For most people, it’s much easier to intuitively see those shortcomings of GenAI manifest in traditional creative mediums, just because they come more naturally to us. For coding, I suspect the same shortcomings apply, they just aren’t as clear.

I mean, at the end of the day if writing code is just to get something that works, then sure, let’s blitz away with LLMs and not bother to understand what we’re doing or why we do it anymore. Maybe I’m naive in thinking that coding has creative value that we’re now throwing away, possibly forever.


Maybe they mean more soulful like a fellow who smiths his own tools and metal fasteners before constructing something. I’d personally think this person was a badass, but until WWIII it’s so impractical, and it seems arbitrary, because why stop there? Get more soulful and mine your own ore too.


God it infuriates me


Truly what is going on


I always felt like one of the reasons LLMs are so good is that they piggyback on the many years that have gone into developing language as an information representation/compression format. I don’t know if there’s anything similar a world model can take advantage of.

That being said there have been models which are pretty effective at other things that don’t use language, so maybe it’s a non issue.


I will gladly take $10B to find out for you.


Another way to make the same point is to observe that every single society has language.

But only some groups have the ability to systematically encode language as writing.

Writing is a technological marvel.


There's a lot of info about the world in video and photographs. A lot of how we learn is by seeing things. Plus interacting, of course.


Maybe until the model outputs some affirming preamble, it’s still somewhat probable that it might disagree with the user’s request? So the agreement fluff is kind of like it making the decision to heed the request, especially if we consider tokens as the medium by which the model “thinks”. Not to anthropomorphize the damn things too much.

Also I wonder if it could be a side effect of all the supposed alignment efforts that go into training. If you train in a bunch of negative reinforcement samples where the model says something like “sorry I can’t do that” maybe it pushes the model to say things like “sure I’ll do that” in positive cases too?

Disclaimer that I am just yapping


This post is almost definitely a scam, but it does a great job of illustrating how much more dangerous scams are going to become with the advent of AI. Here we have a bunch of people who probably pride themselves on catching scams, falling for it. Scary stuff


I think it's good that I should prove it. I know about 80 percent of anything on the internet nowadays is fake, but can you find any scam project that offers a real test demo (send me an email and I'll send you a link to a pixel-streaming front-end where you can see real-time streaming and talk to the avatar: EchenDeligani@gmail.com), or any Zoom call so I can walk you through the project, or anyone that specially focuses on the Persian language?


> Post-filter works when your filter is permissive. Here’s where it breaks: imagine you ask for 10 results with LIMIT 10. pgvector finds the 10 nearest neighbors, then applies your filter. Only 3 of those 10 are published. You get 3 results back, even though there might be hundreds of relevant published documents slightly further away in the embedding space.

Is this really how it works? That seems like it’s returning an incorrect result.
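A tiny self-contained sketch (toy data, not pgvector itself) of the failure mode the quoted passage describes: taking the k nearest neighbors first and applying the filter second can return fewer than k rows even when matching rows exist slightly further out.

```python
# (distance, published) pairs, already sorted by distance to the query.
# Toy data: only 3 of the 10 nearest docs are published, but 7 published
# docs exist in total.
docs = [(0.10, False), (0.11, False), (0.12, True),  (0.13, False),
        (0.14, False), (0.15, True),  (0.16, False), (0.17, False),
        (0.18, True),  (0.19, False), (0.20, True),  (0.21, True),
        (0.22, True),  (0.23, True)]

k = 10

# Post-filter: take the k nearest neighbors FIRST, then filter.
post_filtered = [d for d in docs[:k] if d[1]]

# Pre-filter: restrict to published docs FIRST, then take the k nearest.
pre_filtered = [d for d in docs if d[1]][:k]

print(len(post_filtered))  # 3 -- only 3 of the 10 nearest are published
print(len(pre_filtered))   # 7 -- every published doc in the toy data
```

So yes, under a post-filter strategy the query legitimately returns 3 rows for `LIMIT 10`, not because only 3 matches exist but because the filter ran after the neighbor search was already truncated.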


Yeah it feels similar to inventing the nuke. Or it’s even more insidious because the harmful effects of the tech are not nearly as obvious or immediate as the good effects, so less restraint is applied. But also, similar to the nuke, once the knowledge on how to do it is out there, someone’s going to use it, which obligates everyone else to use it to keep up.

