It is far from trivial. What are you going to do if the money goes to an enemy country?
And while cryptocurrencies are certainly popular with criminals, they are far from the only option for hiding transactions. As for the technology, where it exists, it is not very effective. The shadow economy is going strong even among average citizens, from the drug trade to babysitting.
If governments can't stop even the most trivial kinds of unreported work in their own countries, how do you expect them to stop well-organized international gangs, sometimes backed by nation states?
Why can you be ethical to your coworkers and not to your employer? Your boss is a living being too, and he is working for the company just like you are, even if his status is different. And your coworkers are the ones who make up the company, so being ethical to your coworkers is being ethical to your company.
And if tinkering with your computer causes a problem for your IT department for whatever reason, and you do it without approval, then I think it is indeed not very ethical. It really depends on the situation. The IT department people are your coworkers too.
This is the kind of question I think an LLM works well for, because people are going to have different opinions. I think that most of us will think about science, maths, etc... But what about, say, monotheism, Athenian democracy, banking and accounting, etc...? I also see that Freud is in there, a controversial pick, as his ideas are considered pseudoscience today, but they certainly opened the way for modern psychology, so what do you make of that?
Using an LLM trained on most of written human knowledge and carefully aligned will hopefully give a reasonable consensus. It is not perfect, of course, but I think it is better than personal guesses.
Note: your experience may differ, as not all LLMs are the same and your prompt matters, but I get similar results: mostly scientific achievements, with the ones I cited usually getting top spots. A bit of social (democracy, human rights), but spirituality in general seems to be absent.
Among the "scads of utter online shit" is the submitted website, which it mostly agrees with. It also agrees with most of the comments here.
But the thing is, I used an LLM because I want to see past the "scads of utter online shit". The "greatest intellectual achievement" is an opinion reflecting what people think matters most, not an empirical fact. And what I want is something approaching a global consensus, not what the HN bubble thinks matters. And for that, I think LLMs have value.
And anyway, what LLMs say generally matches what people are saying here, so unless you are implying that we are all talking shit, I don't really see the problem.
At least for now, AI sucks at creativity. There is an initial "wow" effect when you can generate an image of an astronaut riding a unicorn on the moon with a simple prompt, but as you try to play a bit more with it, you notice that unless you inject some of your own creativity, you won't get very far, no matter the medium.
Past some point, if you are good at what you are doing, the AI will stop helping and become a burden, because you will want precise control, and AI in its current form (deep learning) is not good at that.
There is a reason we talk about "AI slop": you simply cannot let an AI make creative decisions and expect a good result.
By creative I don't just mean artistic. For code, AI works for the least creative tasks, like ports, generic-looking CRUD apps, etc...
As for work, we have already eliminated most of the need for human work. By "need", I mean survival: food, shelter, that kind of thing. Most of human production goes to comfort, entertainment, luxury, etc... We will find stuff to do that isn't bloodshed. In fact, as time has gone on, we have spent more on saving people than on killing them, judging by the global increase in life expectancy. Why would AI reverse the trend?
When chess engines started becoming really good, some people worried that competitive chess would die. Today, grandmasters stand no chance against a smartphone, and yet, chess popularity is at an all time high.
Chess is an unusually poor example. When computers took over Chess, we didn't have something stupid like 30% of employment relying on playing Chess to eat and pay rent.
The analogy only makes sense if you're already convinced that we won't lose the majority of white-collar work to computers.
To those who are not convinced that we are looking at making 50% of the workforce redundant, Chess is an analogy that makes no sense.
It only makes sense if you're already a true believer.
I respectfully disagree with this statement, in the sense that if the whole world ends up becoming like a chess tournament, it would become insanely harder for us to live our lives peacefully. The life of a chess player is filled with stress.
(https://news.ycombinator.com/item?id=47587863) A comment I had written some time ago. Aside from a very few at the top, I have seen some chess players express regret in a very nostalgic way.
The chess world continues to trade allegations, and we lost a star (rest in peace, Daniel Naroditsky) because of it. The current world champion himself is struggling under all the pressure put on a 19-year-old boy.
We enjoy playing against each other, but man, it is competitive if you need to feed a family.
Most of us play chess for leisure. I am unsure what a world where everyone does something akin to chess competitively (i.e. for money, as we wish to feed our children and ourselves) would look like.
One could say something like UBI might be needed, and then we would all play chess at leisure, but I don't think that is what most people propose when they bring up the example of chess.
Among the hundreds of thousands of lines of code that Anthropic produced was one that leaked the source code. It was likely a config file, not part of the Claude Code software itself, but it is still something to track.
The more lines of code you have, the more likely it is that one of them is wrong and goes unnoticed. The result is bugs, vulnerabilities... and leaks.
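A quick back-of-the-envelope sketch of why line count matters (the per-line defect rate here is an assumed, purely illustrative number): even a tiny independent chance of a defect per line compounds quickly at scale.

```python
def p_at_least_one_defect(lines: int, p_per_line: float) -> float:
    """Probability that at least one of `lines` lines is defective,
    assuming each line independently has probability `p_per_line`."""
    return 1 - (1 - p_per_line) ** lines

# Assumed one-in-a-million defect rate per line:
print(p_at_least_one_defect(1_000, 1e-6))      # ~0.001
print(p_at_least_one_defect(1_000_000, 1e-6))  # ~0.63
```

The independence assumption is crude, but it makes the point: at hundreds of thousands of lines, "one bad line somewhere" stops being unlikely.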
Drinking alcohol is probably way worse, but you can choose not to drink; you can't choose to live a normal life and avoid microplastics.
Also, alcohol has existed forever and humans have been drinking it since the beginning of civilization. We have a pretty good idea of what it does and how to keep it under control. Microplastics are a recent thing; they may be a dud, but they may be a serious problem for future generations, so keeping an eye on them is a good thing.
Which might be the correct answer! Something that's extremely hard to undo should have us doing much more than keeping an eye on it. We should have tons of research projects running on this.
Soviet engineering wasn't sloppy. It was designed for robustness, loose tolerances, and simplicity. It was well-thought-out design. Just as much thought went into the cheap alarm clock as into the Rolex watch, maybe even more; the engineers just had different requirements.
It takes a lot of work to make cheap, low-precision parts work together reliably. The Rolex has it easy: all the parts are precisely built at great cost and everything fits perfectly. With the cheap alarm clock, you don't know what you will get, so you have to account for every possible defect, because you won't get anything better on your budget and the clock still needs to give you an idea of what time it is.
The parallel in software would be defensive programming, fault tolerance, etc... Ironically, those are common practices in critical software, which is the most expensive kind of software to develop, the opposite of slop.
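A minimal sketch of that defensive style, in the "cheap alarm clock" spirit (the sensor input and tolerance range are hypothetical): never trust the input, validate it, and fall back to a known-good value instead of crashing.

```python
def read_temperature(raw: str, last_good: float = 20.0) -> float:
    """Parse a temperature reading defensively: tolerate garbage input
    and physically implausible values by falling back to the last
    known-good reading (hypothetical -50..150 C plausibility range)."""
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return last_good  # unparseable input: keep the last good value
    if not (-50.0 <= value <= 150.0):
        return last_good  # implausible reading: reject it
    return value

print(read_temperature("21.5"))     # 21.5
print(read_temperature("garbage"))  # 20.0, the safe fallback
print(read_temperature("9999"))     # 20.0, out-of-range rejected
```

Nothing here is clever; the cost is in thinking through every way the input can be wrong, which is exactly why this kind of software is expensive to write.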
There's a narrative that gets passed around in physics circles about how the Soviets were better at finding creative and analytical solutions than Americans, because of the relative scarcity of computing versus intellectual labour resources.
It would make sense to me that a parallel mechanism could apply to Soviet engineering. If materials and advanced capital equipment are scarce, but engineers are abundant, you would naturally spend more time doing proper engineering, which means figuring out how to squeeze the most out of what you have available.
The problem is more about how it is reported to the public. Science is ugly, but when a discovery is announced to the public, a high level of confidence is expected, and journalists certainly act like there is one. Kind of like how you are not supposed to ship untested development versions of software to customers.
But sometimes some of the ugly science gets out of the lab a bit too soon, and it usually doesn't end well. People get their hopes up, and when it doesn't live up to the hype, they get confused.
It really stood out during the covid pandemic. We didn't have time to wait for the long trials we normally expect, as waiting could mean thousands of deaths, so we had to make do with uncertainty. That's how we got all sorts of conflicting information and policies that changed all the time. The virus spreads by contact, no, it is airborne; masks, no masks; hydroxychloroquine, no, that's bullshit; that sort of thing. That's the kind of back-and-forth that usually doesn't get publicized outside of scientific papers, but the circumstances made it so that everyone got to see it, including science deniers, unfortunately.
Edit: Still, I really enjoyed the LK-99 saga (the supposed room-temperature superconductor). It was overhyped, and it came to its expected conclusion (it isn't one); however, it sparked widespread interest in superconductors and plenty of replication attempts.
> The problem is more about how it is reported to the public.
Yes and no.
From science communicators there's a lot of slop, and it's getting worse. Even places like Nature and Scientific American are making unacceptable mistakes (a famous one being the quantum machine learning black hole piece that Quanta published).
But I frequently see those HN comments on arXiv links. That's not a science communication issue. Those are papers. That's researcher-to-researcher communication. It's open, but not written for the public. People will argue it should be, but then where does researcher-to-researcher communication happen? You really want that behind closed doors?
There is a certain arrogance at play. Small sample size? There's a good chance it's a paper arguing for the community to study the question at a larger scale. You're not going to start by recruiting a million people to figure out whether an effect even exists. Yet I see those papers routinely scoffed at. They're scientifically sound, and laughing at them is as big an error as treating them as absolute truth, just erring in the opposite direction.
People really do not understand how science works, and they get extremely upset if you suggest otherwise, as if not understanding something they haven't spent decades studying implied they're dumb. Scientists don't expect non-scientists to understand how science works. There's a reason you're only a junior scientist after getting an entire PhD. You can be smart and not understand tons of stuff. I got a PhD and I'll happily say I look like a bumbling idiot even outside my niche, in my own domain! I think we've just got to stop trying to prove how smart we are before we're all dumb as shit. We're just kinda not dumb at some things, and that's perfectly okay. Learning is the interesting part. And it's extra ironic that the Less Wrong crowd doesn't take those words to heart, because that's what it's all about. We're all wrong. It's not about being right, it's about being less wrong.