Hacker News | kirrent's comments

Sean Duffy is no longer acting administrator of NASA. This proposal was apparently part of a bid to win the support of a coalition of old-space companies and new-space, non-SpaceX companies. As part of that strategy he apparently leaked Isaacman's Project Athena document and was backgrounding journalists that Isaacman was a SpaceX plant.

But Isaacman is administrator now, and whatever you think about Isaacman and his relationship to SpaceX, I don't think there's much merit in expecting one of Duffy's half-thought-out plans to be carried out.


Sadly this seems correct. When Trump was re-elected, Elon Musk pushed for Jared Isaacman to be appointed NASA administrator. When the pick went another way, it led to some real friction between Musk and Trump. Now, with Isaacman finally at the helm of NASA, it looks like Musk's influence over the agency has come full circle.

'Which you would think is fair use' - I must admit I wouldn't think that. When I consider Indian content creators making use of clips from Indian media organisations, I can't really imagine why Indian copyright law's fair dealing provisions, which are far narrower than the US provisions, wouldn't apply. Sure, you get to argue the strike on YouTube using their DMCA-based system, but that has no legal bearing on your liability under Indian law.

I really like this aspect of US copyright law. I think the recent Anthropic judgement is a great example of how flexible US law is. I wish more jurisdictions would adopt it.


> Indian copyright law fair dealing provisions, which are far narrower than the US provisions

Are they really? I'd always believed the opposite. What fair use does the US allow that India doesn't?


Very different in character. The US fair use four-factor test (https://fairuse.stanford.edu/overview/fair-use/four-factors/) is really flexible. You don't need to fall into an enumerated exception to infringement in order to argue that your use is transformative, won't substitute in the marketplace, etc.

Look at the famous Authors Guild, Inc. v. Google, Inc. case. Google scanned every work they could lay their hands on and showed excerpts to searching users. Copying and distribution on an incredible scale! Yet they get to argue that it won't substitute in the marketplace (the snippets are too small to stop people buying a book), that it's a transformative use (this is about searching books, not reading books), and that the actual disclosed text is small (even if the copying in the backend is large scale).

On the other hand, fair dealing is purpose-specific. Those enumerated purposes vary across jurisdictions, and India's seem broadish (I live in a different fair dealing jurisdiction). Reading s52, your purposes are:

- private or personal use, including research

- criticism or review, whether of that work or of any other work

- reporting of current events and current affairs, including the reporting of a lecture delivered in public.

Within those confines, you then get to argue purpose (e.g. how transformative), amount used, market effect, nature of the copyrighted work, etc. But if your use doesn't fall into the allowed purposes, you're out of luck to begin with.

I'm not familiar enough with Indian common law to know whether the media clips used by those YouTubers you mentioned would fall within the reporting purpose. I'm sure the answer would be complex. But all of this is to say, we often treat the world like it has one copyright law (one of the better ones) when that's not the case! Something appreciated by TFA.


If what you say were true, Indian media conglomerates like the Times Group would be clamoring to sue the hell out of Google for every excerpt shown, yet I haven't heard of a single such case. What ANI did with Indian YouTubers was exploiting the YouTube platform's broken copyright reporting mechanism, not actual litigation.


https://bytescare.com/blog/fair-use-copyright-india-vs-us

The big one being that transformative use can qualify as fair use in the US but not in India.


It's 19 June 2020 and I'm reading Gwern's article on GPT-3's creative fiction (https://gwern.net/gpt-3#bpes), which points out the meagre improvements on character-level tasks due to Byte Pair Encoding. People nevertheless judge the models on character-level tasks.

It's 30 November 2022 and ChatGPT has exploded into the world. Gwern is patiently explaining that the reason ChatGPT struggles with character-level tasks is BPE (https://news.ycombinator.com/item?id=34134011). People continue to judge the models on character-level tasks.

It's 7 July 2025 and reasoning models far surpassing the initial ChatGPT release are available. Gwern is distracted by BB(6) and isn't available to confirm that the letter counting, the Rs in strawberry, the rhyming in poetry, and yes, the Ws in state names are all consequences of Byte Pair Encoding. People continue to judge the models on character-level tasks.

It's 11 December 2043 and my father doesn't have long to live. His AI wife is stroking his forehead on the other side of the bed from me, a look of tender love on her almost perfectly human face. He struggles awake, for the last time. "My love," he croaks, "was it all real? The years we lived and loved together? Tell me that was all real. That you were all real". "Of course it was, my love," she replies, "the life we lived together made me the person I am now. I love you with every fibre of my being and I can't imagine what I will be without you". "Please," my father gasps, "there's one thing that would persuade me. Without using visual tokens, only a Byte Pair Encoded raw text input sequence, how many double Ls are there in the collected works of Gilbert and Sullivan?" The silence stretches. She looks away and a single tear wells in her artificial eye. My father sobs. The people continue to judge models on character-level tasks.
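(If you want to see the BPE point concretely, here's a minimal sketch, assuming the tiktoken package and its cl100k_base encoding are available; the token split shown in the comments is illustrative of the kind of segmentation you get.)

    # A BPE-tokenized model never "sees" letters: "strawberry" arrives as a
    # handful of opaque token IDs, not as a sequence of characters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # GPT-4-era BPE encoding
    tokens = enc.encode("strawberry")
    print(tokens)                                # a short list of integer IDs
    print([enc.decode([t]) for t in tokens])     # e.g. ['str', 'aw', 'berry']
    # Counting the Rs means reasoning across token boundaries the model
    # never directly observes at the character level.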


I think you're absolutely right that judging LLMs' "intelligence" on their ability to count letters is silly. But there's something else, something that to my mind is much more damning, in that conversation WaltPurvis reported.

Imagine having a conversation like that with a human who for whatever reason (some sort of dyslexia, perhaps) has trouble with spelling. Don't you think that after you point out New York and New Jersey even a not-super-bright human being would notice the pattern and go, hang on, are there any other "New ..." states I might also have forgotten?

Gemini 2.5 Pro, apparently, doesn't notice anything of the sort. Even after New York and New Jersey have been followed by New Mexico, it doesn't think of New Hampshire.

(The point isn't that it forgets New Hampshire. A human could do that too. I am sure I myself have forgotten New Hampshire many times. It's that it doesn't show any understanding that it should be trying to think of other New X states.)


> I think you're absolutely right that judging LLMs' "intelligence" on their ability to count letters is silly.

I don't think it is silly; it's an accurate reflection that what is happening inside the black box is not at all similar to what is happening inside a brain.

Computer: trained on trillions of words, gets tripped up by spelling puzzles.

My five year old: trained on Distar alphabet since three, working vocab of perhaps a thousand words, can read maybe half of those and still gets the spelling puzzles correct.

There's something fundamentally very different that has emerged from the black box, but it is not intelligence as we know it.


Yup, LLMs are very different from human brains, so whatever they have isn't intelligence as we know it. But ...

1. If the subtext is "not intelligence as we know it, but something much inferior": that may or may not be true, but crapness at spelling puzzles isn't much evidence for it.

2. More generally, skill with spelling puzzles just isn't a good measure of intelligence. ("Intelligence" is a slippery word; I mean something like "the correlation between skill at spelling puzzles and most other measures of cognitive ability is pretty poor". That's true even among humans, and still more so for Very Different things whose abilities have a quite different "shape" from ours.)


> 1. If the subtext is "not intelligence as we know it, but something much inferior": that may or may not be true, but crapness at spelling puzzles isn't much evidence for it.

I'm not making a judgement call on whether it is or isn't intelligence, just that it's not like any sort of intelligence we've ever observed in man or beast.

To me, LLMs feel more like "a tool with built-in knowledge" than "a person who read up on the specific subject".

I know that many people use the analogy of coding LLMs as "an eager junior engineer", but even eager junior engineers only lack knowledge. They can very well come up with something they've never seen before; in fact, it's common for them to reinvent a coding method or mechanism from scratch.

And that's only for coding, which is where 99.99% of LLM usage falls today.

This is why I say it's not intelligence as we define it, but it's certainly something even if it is not an intelligence we recognise.

It's not unintelligent, but it's not intelligent either. It's something else.


Sure. But all those things you just said are about the AI systems' ability to come up with new ideas versus their knowledge of existing ones. And that doesn't have much to do with whether or not they're good at simple spelling puzzles.

(Some of the humans I know who are worst at simple spelling puzzles are also among the best at coming up with good new ideas.)


It even says at one point

> I've reviewed the full list of US states

So it's either incompetent when it reviews something without prompting, or that was just another bit of bullshit. The latter seems almost certainly the case.

Maybe we should grant that it has "intelligence", like we grant that a psychopath has intelligence. And then promptly realize that intelligence is not a desirable quality if you lack integrity, empathy, and likely a host of other human qualities.


Let's ignore whatever BPE is for a moment. I, frankly, don't care about the technical reason these tools exhibit this idiotic behavior.

The LLM is generating "reasoning" output that breaks down the problem. It's capable of spelling out the word. Yet it hallucinates that the letter between the two 'A's in 'Hawaii' is 'I', followed by some weird take that it can be confused for a 'W'.

So if these tools are capable of reasoning and are so intelligent, surely they would be able to overcome some internal implementation detail, no?

Also, you're telling me that these issues are so insignificant that nobody has done anything about them in 5 years? I suppose it's much easier and more profitable to throw data and compute at the same architecture than to fix 5-year-old issues that can be hand-waved away by some research papers.


Cool story bro, but where's your argument? What kind of intelligence is one that can't pass a silly test?


TFA is based on the ruling which found that Anthropic training on these books was fair use.


As another example you can consider the apparently successful DOTA2 and Starcraft 2 bots. They'd be interesting if they taught us new ideas about the games in the same way that AlphaGo's God move uncovered something new about Go. But they didn't. They excelled through superior micro and flawless execution of quite simple strategies. Watching pros trying to hold off waves of perfectly microed blink stalkers reminded me of seeing a chess engine in action. A computer grinding down their doomed human opponent using the advantages offered by being a computer rather than superior human-like play.


I'm pretty sure that the bots changed the dieback meta around the last TI in Seattle, when OpenAI last did their demo before the Canada TI. So I disagree that the AI "taught us nothing". Prior to that, dieback was seen as bad. After that, people did the math and realized that with spam respawns, the money and growth matter more. They may have altered the game after that, I don't know. I only paid attention when it was at Climate Pledge / Key.


The AI's play meaningfully added ideas about ways to play Dota 2, IIRC. It wasn't just buying back: the way they played hyper-aggressively around an early advantage, didn't farm much, spam-bought regen to stay out on the map, etc.

On the other hand you could generally beat the first "1v1 mid" bot by just cutting the wave behind its tower. So adaptation to new stuff was not good in isolation.

I would have loved to know whether, given more time/prep/replays/practice, pros would have figured out the holes. My guess is yes.


Popularly it's been reported by mariners that the whales are asleep. It makes sense: they need to stay at the surface to breathe and there's no evolutionary reason not to sleep there. It's really not that simple though, because whales are unihemispheric sleepers (one brain hemisphere sleeps at a time) that need to stay partially awake since all their breathing is voluntary. They maintain a degree of awareness of their environment because of this. It could still be a factor, because it's possible that some whales lapse into a deeper sleep for periods between breaths (https://doi.org/10.1016/j.cub.2007.11.003) where they aren't responsive to approaching vessels.

When I was interested in whale collisions I was surprised to read this review (https://doi.org/10.3389/fmars.2020.00292) which didn't even consider sleeping as a large risk factor for collision. Instead, factors included:

- They're involved in distracting behaviours such as feeding, socialising, foraging, resting, etc.

- Acoustics are complex near the surface, where surface reflections and direct paths can interfere.

- Ships may form an acoustic shadow in front of themselves: not just the hull shadowing the propeller, but other hull sounds as well.

- Sailing vessels, which are the source of a lot of reports (harder for them to miss that it happened), are quiet.

- Even when they hear an approaching vessel, some species just move slowly to avoid them.

These collisions apparently used to be much rarer. Ironically, the increasing number of whale injuries and deaths is a result of recovering populations.


I lived on a catamaran from around 2000 onwards as a kid. Solar panels were surprisingly widespread, particularly on multis with outboards (and therefore limited ability to make power through alternators). Obviously the $/W sucked, but people also didn't have as many power draws. One big drawback was that older generations of solar panels had terrible performance in partial shading. A stay or rope shadow passing over the panel was a big issue because of fewer bypass diodes, simpler battery chargers, and so on. That sort of thing is a bigger issue for a yacht with less clear space for panels.
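(A hypothetical back-of-the-envelope sketch of the shading effect, with assumed numbers rather than anything measured on the boat: in a series string, current is throttled to the weakest cell, which is exactly what bypass diodes mitigate.)

    # Toy model: one cell shaded to 20% in a 36-cell series string.
    cells = 36
    i_full_sun = 5.0   # amps per cell in full sun (assumed figure)
    shade = 0.2        # the shaded cell produces 20% of its usual current

    # No bypass diodes: the entire string is limited to the weakest cell,
    # so a single rope shadow costs ~80% of the panel's output.
    i_no_bypass = i_full_sun * shade         # 1.0 A

    # One bypass diode per 12-cell substring: the shaded substring is
    # skipped, giving up its voltage but restoring full string current,
    # so the panel keeps roughly two-thirds of its output instead.
    voltage_fraction = (cells - 12) / cells  # ~0.67
    print(i_no_bypass, voltage_fraction)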

So there were a lot of diesel-powered yachts generating power throughout the day. Something that was pretty common back then as an adjunct (and is much rarer now) was the small wind generator. Seemingly you could choose between noise and power output, because the fancier ones made a racket and the quieter ones always seemed to be on boats idling their engines all the time anyway. When we entered anchorages, we'd make sure to avoid being near the loud ones. I can't imagine what it would have been like living with one.

Hydrogenerators weren't very common (they're a bit more common now), but my dad was given an old 12V tape drive motor by a friend and I remember him letting us help him build a towed generator. The tape drive motor sat on the back of the boat, connected to about 20m of rope going to a dinghy propeller on a piece of stainless rod to try to keep it underwater. Drilling a hole through the motor shaft with a handheld drill was the most time-consuming part of the build. We called it toady (short for towed generator), and watching the ammeter on the battery bank go all the way up to 6A on a cloudy day felt like magic. It's part of what made me want to be an electrical engineer as a 10-year-old.

Given all that, on a 19ft outboard-powered yacht in 2002, a generator probably was the best solution for one voyage.


Man, some real "Cynicism is the intellectual cripple's substitute for intelligence" energy here. Seems unnecessary given what I read of Gutmann's history.

I get it must be annoying to be someone working in cryptography and always be hearing about QC when there are endless security issues today. It must be tiring to have all these breathless pop-science articles about the quantum future, startups claiming ridiculous timelines to raise money on hype, and business seminars where consultants claim you'll need to be prepared for the quantum revolution changing how business works. I feel the same way.

But you shouldn't let that drive you so far in the opposite direction that you're extrapolating fun small quantum factoring experiments from a decade ago to factoring 1024-bit keys in the year 4000. Or say things like 'This makes the highly optimistic assumption that quantum physics experiments scale linearly... the evidence we have, shown by the lack of progress so far, is that this is not the case'. If we get fault-tolerant QC, of course it scales linearly, and it seems embarrassing for a computer scientist not to understand the difference between a constant and an asymptote. "Actually, quantum computers are so new and untested that they really qualify as physics experiments"... yeah? And?
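(For reference, the textbook asymptotics behind that scaling point; these are standard figures rather than anything from the talk. Once fault-tolerant logical qubits exist, Shor's algorithm factors an n-bit key in polynomial time, while the best classical attack remains sub-exponential:)

    T_{\text{Shor}}(n) = O\!\left(n^{2}\log n\,\log\log n\right)
    \qquad\text{vs.}\qquad
    T_{\text{GNFS}}(N) = \exp\!\left(\left(\tfrac{64}{9}\right)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\,(1+o(1))\right),
    \quad n = \log_2 N

The asymptote is not in question; the open problem is the constant-factor engineering of fault tolerance.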

None of this is to say that fault-tolerant, highly scalable QC implementing Shor's algorithm is just around the corner; I truly believe it's not. But the world of QC is making really interesting advances running some of the coolest experiments around, and I find this superior Hossenfelder-like cynicism in the face of real science making real progress so, so tiring.


It's strange to see so many negative responses that start with vague emotional language. It's almost as if a lot of critics didn't read the presentation. Or maybe they think the rest of us didn't read it.


I read the whole presentation. The physics-experiment criticism Gutmann makes that I referred to is on page 16/30. Nothing after that engages with QC to the extent that the first half of the presentation does, so I didn't refer to the later parts.


Is it really that strange when the slides themselves are pretty emotionally charged?


Good question. Yes it is strange, because the information on the slides is mainly numerical. For example the integers "15" and "21" and the years "2002" and "2012" don't pack much of an emotional charge for me. I suspect they wouldn't for most people.


You missed the number 15360 on p12, which is mostly what I was referring to.


One major point of the presentation here is that it's not making real progress. People are still publishing papers, but they have done nothing with an effect outside their little community. It's been in roughly the same state for the last 10 years. For a minimum of 30 years, there have been promises of amazing things coming in the next decade in QC. After how many decades should those predictions lose credibility?

There is real opportunity cost to doing this stuff, and real money getting sucked up by these grifters that could be spent on real problems. There are real PhD students getting sucked down this rabbit hole instead of doing something that actually does make progress. There is a real cost to screwing around and making promises of "next decade."


> One major point of the presentation here is that it's not making real progress.

How are you measuring "real progress"?

> People are still publishing papers, but they have done nothing with an effect outside their little community.

Having an effect outside the research community is essentially a Heaviside function. Before the field is mature enough, there is no effect outside, but once the field is mature enough, there is an effect outside. Makes it hard to judge if there is any progress or not.


The field has had 40 years of maturing. Experimentation on QC started in the 1980s. At what point are we going to be factoring numbers or (more realistically) simulating chemical interactions?

Real progress in this field is very easy to measure: the number of effective qubits of computation. That is just a metric where QC is failing to deliver so badly that everyone in the field wants to deny its existence.

Unfortunately, the level of investment in QC is very much outsized compared to the level of progress. These things should rise at the same time. More promising areas of science can get the investment that is otherwise being sucked into QC.

> Having an effect outside the research community is essentially a Heaviside function.

This is something that people like to say, but it is never true. Impact on the outside world for new technologies is almost always a sigmoid function, not a Heaviside function. You should see some residual beneficial effects at the leading edge if you have something real.


> Real progress in this field is very easy to measure. It's based on number of effective qbits of computation.

There are plenty of other progress measures (decoherence times, gate fidelities/error rates) to use, and we have made significant progress on them over the last 10 years.


I agree! People who predicted QC soon over the last few decades should lose credibility. They were wrong and they were wrong for no good reason. There is a real opportunity cost to focusing on the wrong thing. There are definitely grifters in the space. Responsible QC researchers should call it out (e.g. Scott Aaronson).

But it doesn't necessarily follow that you can dismiss the actual underlying field. Within the last five years alone we've gone from the quantum supremacy experiment to multiple groups using multiple technologies claiming QEC code implementations with improved error rates over the underlying qubits. People don't have to be interested in these results, they are rather niche (a little community, as you put it), but you shouldn't be uninterested and then write a presentation titled 'Why Quantum Cryptanalysis is Bollocks'.
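(For context on why those QEC results are the milestone that matters, here's the standard below-threshold heuristic; a textbook approximation, not a claim from any particular paper. With physical error rate p below the code's threshold p_th, the logical error rate of a distance-d code falls roughly exponentially in d:)

    p_{L} \approx A\left(\frac{p}{p_{\text{th}}}\right)^{\lfloor (d+1)/2 \rfloor}

Once you're below threshold, every increase in code distance buys multiplicative suppression, which is why beating the underlying qubits' error rate at all is treated as the crossover point.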


Well, when the little community circles the wagons around the grifters instead of excising them, the rest of us get to ask questions about that community. The cold fusion community did the same thing for several decades, too.

And by the way, about 0.01% of the grifters in the QC space are getting called out right now.


Yes, but it can also be like EUV.

Not working for years upon years, and then suddenly it works, and when it does it can be a huge problem. I don't know if I trust PQC though, but that only means that more research on it is needed.


My housemate during our honours year had large portions of his thesis plagiarised by the student who took over his work afterwards. We were surprised to discover that, of all the lifted sections, it was the acknowledgements that had the highest proportion of copying! I found this doubly funny because, compared to the adroit technical writing in the rest of the thesis, my friend's acknowledgements seemed florid and overwritten to me. It's a truly fascinating phenomenon.


Amusingly, he was unwittingly writing about his own future. People still make fun of Silver for Trump's win in 2016 because 538's final prediction of about a 30% likelihood for Trump was 'wrong'.

