Made me wonder if there's a live-streaming equivalent for blogging... some platform that both ensures the reader knows the blogger is a person, and promotes a parasocial relationship.
There's live-coding, so it's not totally a crazy idea.
Your comment immediately made me think of the extreme opposite: a Davy Force-like, Infochammel-style livestream of a never-ending AI-generated TED Talk, offering delectable morsels of tech-startup wisdom that are ultimately zero-calorie.
This question doesn’t apply to Sam, but since you made a general statement, I’m trying to understand.
When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If someone is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?
Not trolling or anything; I've just been thinking about this for a while and trying to understand what I'm missing in this argument.
We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, is that political violence is extremely effective, but also extremely destabilising when used at scale.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
It's an interesting question. Here's my reductive, off-the-cuff take: violence is justified when defending oneself or another from imminent bodily harm, or even under threat of imminent, considerable property damage. When a threat is not imminent, or an action is past, we use the police and the courts, because we as a society–in the sense of subscribers to the US Constitution or similar documents–believe that it is better to have a judicial system and impartial officials determine whether it is worth depriving someone of their bodily liberty or taking their property, that is, jailing or fining. Taking some sort of extrajudicial action or applying corporal punishment (!) requires a much higher bar. How and when would one determine that the judicial system is so unreliable as to morally permit vigilantism? It requires a great deal of moral self-confidence to take matters into one's own hands.
I focus on the question of vigilantism because that, I think, is the issue. Many people feel an emotional impulse to side with the CEO killer, for example, and they find ways to rationalize it. What I'd say is: if you think Joe Blow is so evil, why don't we take him to court? What kind of possible actions could we not jail or fine him for, but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
Because life is not black and white, and people often agree that humans who actively work to the detriment of society should not be part of that society.
So I suppose we should burn the house down with a child inside.
Your response is a cop out and you should be disappointed in yourself. Further, people do not often agree another human should be murdered. No matter how you phrase it.
> Further, people do not often agree another human should be murdered. No matter how you phrase it.
I really wonder how much of a privileged bubble one must've lived their life in to come to this belief. Without much of a history education either.
It's _incredibly common_ for humans - maybe saying "humans" instead of "people" helps you snap out of the disbelief - to agree that another human should be murdered.
> Further, people do not often agree another human should be murdered
Have you ever heard of the French Revolution, the World Wars, the collapse of the Soviet Union, or, more recently, the Ukraine war?
People are more than happy to see someone who brings suffering to others dead.
Of course, I'm sure lots of people would also want to see people responsible for those events be locked away in a prison cell for the rest of their lives, and for their freedom and privacy to be taken away - do you perhaps want to guess why people would prefer that over instantly killing them?
To say that people often want others to be murdered is an overstatement.
Some people want others to be murdered. And those people do not need representation.
It's a bad take, especially considering the context. And to be explicit: the context is a Molotov cocktail being thrown at a home a child is sleeping in.
I find myself resenting him and his ilk on a daily basis for what their profiteering did to the computing space, which was once sacred to me. But nothing justifies violence, not even close. Simple as that.
An appropriate response might be asking "Hey, I don't trust AI... what's the recipe?"
The described action seems performative and emotional, as if they were ideologically opposed to AI. Like spitting out food because it was prepared by a caste you found unclean.
It might settle into a situation where cutting edge LLMs are a service, while older and smaller LLMs are self-hosted. So you are not at risk of being cut off, but of being degraded.
I hope you're right. I played around with a bunch of AI stuff recently and that's roughly the conclusion I came to. Use local AI for mission-critical stuff, if you're confident in it, and use the SOTA models for reviewing.
Tap the latest general knowledge for asking "could this be improved", but make the improvements with local systems and models. But then the obvious problem becomes finding new data to train the AIs. In my opinion, there's no way their plan doesn't involve stealing from everyone to keep training, so is it really going to be safe to use the cutting edge models at all?
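The split described above could be sketched as a simple routing rule. The endpoints and model names here are placeholders for illustration (the local URL mimics a typical self-hosted, OpenAI-compatible server), not real services:

```python
# Sketch of the local-vs-hosted split: mission-critical work stays on the
# self-hosted model, while advisory "could this be improved?" review passes
# go to the hosted SOTA model. URLs and model names are placeholders.

LOCAL = {"url": "http://localhost:11434/v1", "model": "small-local-model"}
HOSTED = {"url": "https://api.example.com/v1", "model": "sota-hosted-model"}

def pick_backend(mission_critical: bool) -> dict:
    """Return the backend config for a task.

    Critical tasks never leave the local machine, so you risk
    degradation (weaker model) but not being cut off entirely.
    """
    return LOCAL if mission_critical else HOSTED
```

The point of the rule is that the failure modes differ: losing the hosted backend only costs you review quality, while the core workflow keeps running locally.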