I gave up on heavily customizing the UI after a couple of top variants over the years (where I would lose said customizations for a variety of reasons), so I run a fairly vanilla config: I prefer both the look and the information density of btop over htop out of the box.


Not necessarily, as even factory-produced optical discs have had issues with delamination, oxidation, etc. Of course, a lot of that had to do with companies cheaping out on manufacturing to squeeze that last tenth of a cent of profit, as they tend to do.


Not possible given the anemic memory bandwidth [1]... you can scale up the compute all you want but if the memory doesn't scale up as well you're not going to see anywhere near those numbers.

[1] The memory bandwidth is fine for CPU workloads, but not for GPU / NN workloads.
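
As a rough back-of-envelope (all numbers here are illustrative assumptions, not specs for any particular board): token generation is typically bandwidth-bound, because every generated token streams the full model weights out of memory, so bandwidth sets a hard ceiling no matter how much compute you add.

    # Back-of-envelope ceiling on LLM token generation (bandwidth-bound case).
    # Both figures below are assumptions for illustration only.
    bandwidth_gb_s = 100.0  # assumed memory bandwidth, GB/s
    model_size_gb = 14.0    # e.g. a 7B-parameter model at fp16 (~2 bytes/param)

    # Each generated token reads the full weights from memory once,
    # so tokens/s cannot exceed bandwidth / model size:
    ceiling_tokens_per_s = bandwidth_gb_s / model_size_gb
    print(f"upper bound: ~{ceiling_tokens_per_s:.1f} tokens/s")  # ~7.1

Double the compute without touching bandwidth and that ceiling doesn't move.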


When I bought a PS1 around 1998-99 I paid $150, and I think that included a game or two. It's the later-in-the-lifecycle price that has really changed (didn't the last iteration of it get down to either $99 or $49?).


In 2002 I remember the PS1 being sold for 99€ at Toys'R'Us in the Netherlands, next to a PS2 selling for 199€.


Try to use AMD GPUs for AI and you'll understand. Unless you have lots of your own engineers to throw at making their stuff work, it's easier for most companies to just keep throwing money Nvidia's way.
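
To be clear, the surface plumbing mostly exists: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda API, so the Python layer can look identical. A minimal sketch of that (assuming a ROCm or CUDA build of PyTorch is installed); the engineering pain lives below this layer, in drivers, kernels, and op coverage:

    import torch

    # ROCm builds of PyTorch reuse the torch.cuda namespace for AMD GPUs;
    # torch.version.hip is set on ROCm builds and None/absent on CUDA builds.
    if torch.cuda.is_available():
        backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
        print(f"{backend} device: {torch.cuda.get_device_name(0)}")
    else:
        print("no supported GPU found")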


I understand that it's that way today. But I am talking about "potential": if OpenAI and AMD engineers put their heads together and build some new software, couldn't AMD in theory become as valuable as Nvidia, or at least half as valuable?

It seems like taking a ~$350B market cap company to $2T+, a 6x+ increase in stock price, would be worth a few hundred million dollars invested in software and such?


By the time that could feasibly come to fruition, I suspect the AI bubble will have long since popped. Despite making decent GPUs for graphics, AMD can't seem to get its act together on the GPU compute front.


There are very few recently launched pure open source projects these days (most are at least running donation-ware models or are funded by corporate backers), and none in the AI space that I'm aware of.


Well, the real open source project is llama.cpp, which Ollama basically wrapped and put a nice interface on top of. Now they do more things, as they want to be a real business, but llama.cpp now does most of the things people wanted from something like Ollama, such as serving a REST API compatible with OpenAI's and downloading and managing local LLMs… all while remaining an actual open source project without VC money, as far as I know.
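
For instance, llama.cpp's llama-server speaks the OpenAI chat-completions protocol, so the stock OpenAI client can be pointed at it. A minimal sketch, assuming a server is already running locally (the port, key, and model name are illustrative):

    from openai import OpenAI

    # Assumes something like `llama-server -m model.gguf --port 8080`
    # is already running; the key is required by the client but
    # ignored by llama.cpp.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")

    resp = client.chat.completions.create(
        model="local",  # llama-server serves whatever model it was started with
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(resp.choices[0].message.content)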


https://codingwithintelligence.com/p/meta-gets-behind-open-s...

This is a new umbrella project for llama.cpp and whisper.cpp. The author, Georgi Gerganov, also announced he's forming a company around the project, having raised money from Nat Friedman (former GitHub CEO) and Daniel Gross (ex-YC AI, ex-Apple ML).

Not sure if this is just good-faith support.


Take a closer look at the history of how they've been running things pretty much since the beginning. Even though they give away a lot of code under open source licenses (most of it they have to), to me it's always looked like they have run the project as if building a business out of it was the priority. I'm sure their recent IPO will result in much more openness... that's usually how things go, right? Nothing wrong with that, just don't be fooled into thinking they're something they're not.


I didn't downvote, but I'll take a shot: a valid reason to consider the question is to determine to what degree the model was steered or filtered during training. This goes to whether you can trust its output, beyond the model's other obvious limitations such as hallucinations. It's useful to know whether you are getting responses based just on the training data or whether you have injected opinions to contend with.


> "steered or filtered during training"

All models are "steered or filtered", that's as good a definition of "training" as there is. What do you mean by "injected opinions"?


Yes all models are steered or filtered. You seem to get that, where many of the commenters here don't, e.g. "dur hur grok will only tell you what musk wants".

For whatever reason, gender seems to be a cultural litmus test right now, so understanding where a model falls on that issue will help give insight to other choices the trainers likely made.


[flagged]


My bad dawg. I didn't realize everyone in here is a professional hacker news commentator. I'm not even at the beer money level of commentating


> What do you mean by "injected opinions"?

Examples:

DALL-E's forced diversity in image generation: I ask for a group photo of a Romanian family in the Middle Ages and I get very stupid "diversity": a person in a wheelchair in medieval times, family members of different races, and forced Muslim clothing. The fix is to spell out in detail the races, the religion, and the clothing of the people; otherwise the pre-prompt forces diversity over natural logic and truth.
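
A minimal sketch of that workaround via the OpenAI Python SDK (the model name and prompt wording here are illustrative, not the exact calls I used):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Spell out ethnicity, era, religion, and dress explicitly so the
    # pre-prompt has nothing left to "diversify":
    result = client.images.generate(
        model="dall-e-3",
        prompt=(
            "Group portrait of an ethnic Romanian peasant family in the "
            "14th century, Orthodox Christian, in period-accurate medieval "
            "Romanian clothing"
        ),
        size="1024x1024",
    )
    print(result.data[0].url)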

Remember the black Nazi soldiers?

ChatGPT refusing to process a fairy-tale text because it is too violent, though I think the model itself is not that dumb; the pre-filter model is. So I am allowed to process only Disney-level stories, because Silicon Valley needs to keep both the extreme left and the extreme right happy.


All trained models have loss/reward functions, some of which you and I might find simplistic or stupid. Calling some of these training methods "bias" or "injected opinion" and not others is a distortion; what people are actually saying is "this model doesn't align with my politics", or perhaps "this model appears to adhere to a naive reproduction of prosocial behavior that creates weird results". On top of that, these things hallucinate, they can be overfit, etc. But I categorically reject anyone pretending there is some platonic ideal of an apolitical, morally neutral LLM.

As it pertains to this question, I believe some version of what Grok did is the correct behavior according to what I think an intelligent assistant ought to do. This is a stupid question that deserves pushback.


You can argue philosophically that on some level everyone has a point of view and neutrality is a mirage, but that doesn't mean you can't differentiate between an LLM with a neutral tone that minimizes biased presentation, and an LLM that very clearly sticks to the party line of a specific contemporary ideology.

Back in the day (I don't know if it's still the case), the Christian Science Monitor was the go-to example of an unbiased news source. Using that point of reference, it's easy to tell the difference between a "Christian Science Monitor" LLM and a Jacobin/Breitbart/Slate LLM. And I know which I'd prefer.


Stupid is stupid: creating black Nazi soldiers is stupid. It might be a consequence of trying to fix some bad bias in the model, but you can't claim it isn't stupid. Same with refusing to accept children's stories because they are violent; if a child can handle the fact that there are evil characters who do evil things, then an extremist conservative/racist/woke/libertarian/MAGA type should be able to handle it too. Of course you can say it is a bug, that they try to make both extremes happy and you get this stupidity, but these AI guys need to grab the money, so they need to suck up to both extremes.

Or do we now claim that classic children's stories are bad for society, and we only allow modern American Disney stories where everything is solved with songs and the power of friendship?


You seem to be fixated on something completely different than the question at hand.


Can you explain?

My point is that:

1. They train AI on internet data.

2. They then try to fix the illegal stuff, OK.

3. But then they inject political bias from both extremes and make the tools less productive, since now a story with monkeys is racist, a story with violence is too violent, and some nude art is too vulgar.

The AI companies could have the balls to censor only the genuinely illegal shit, and if their model turns out racist or vulgar, clean up their training data instead of doing the lazy thing of bolting a stupid filter or system prompt on top to make the extremists happy.


I don't think your take is incorrect. I give it a try from time to time, and it has always been inferior to the other offerings for me. Which I find a bit strange, as NotebookLM (until recently) had been great to use. Whatever... there are plenty of other good options out there.


Having watched some of his streams on the topic, I think you've captured it well. He's basically saying he's done wasting time on AMD unless/until they get serious. It's not so much that he wants free hardware from them, rather he wants to see them put some skin in the game as they basically blew him off the last time he tried to engage with them.


> He's basically saying he's done wasting time on AMD unless/until they get serious.

They are serious; they just don't respond to his demands.


Or anyone else's, for that matter; they simply do not care about software.


We do care about software, and we acknowledge the gaps and will work hard to close them. Please let me know any specific issues you're hitting, and I'm happy to push to get them resolved, or come back with why they can't be.


... they do now, thanks to Anush taking the reins.

