Politics is not and never was barred from HN, if that was your point. And rising fascism does (or would) directly and massively affect every tech worker in the US.
But we're talking about HN censoring topics, in general - not just politics. I'll give you an example with a tech story I commented on just 3 hours ago [0].
Sourced from the BBC, with a correct headline, not a dupe, generating discussion, upvoted, relevant, important, and in every possible way squarely within HN's remit. But it mentioned Musk in a bad light.
Not only was it flagged, but it was some new kind of uber-flagged. It no longer shows up in new. It doesn't show up in the OP's submissions list. It doesn't show up in my favorites list. You can't comment on it. The link and even the title were completely removed.
That's sheer insanity. Absolutely extraordinary and wholly, completely unjustifiable.
And if you or I were to make a post about this wild level of censorship of a legitimate and important tech story, it would be rapidly removed also. Most likely, you'd be banned if you kept trying (for something completely different, no doubt).
So can we please not pretend that stories about Musk and fascism are being removed for being 'political'. The YC people have picked their dog in this fight, and are very much trying to tip the scales in their favour by censoring the users of this platform.
Since when can you not comment on flagged stories though? Or see them in your own favorites??
Like, I can still comment on other flagged stories even now, but on that one I couldn't. It's since been unflagged, so I guess I can't prove it, but I've never seen this site act like that... Real memory hole stuff.
It has since been unflagged, and people can comment on it again, but look at the timestamps. There's a good two hours there after the first few comments where no one could say anything.
And during that time it was removed from the submitter's page, the new page, and my favorites (it's back now).
Would have to have been a pretty weird and consistent glitch for a site that hasn't changed much in like 20 years.
Half tempted to email HN and ask what was going on there, only I wouldn't expect honesty.
The guidelines say on-topic is anything that gratifies one's intellectual curiosity, and:
>Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon.
In the last 24 hours there have been three Musk stories and seven Trump.
I can see both sides, but Americans in particular tend to turn any forum for startups and intellectual curiosity into Trump Musk Trump Musk Dems Repubs Trump etc. There are other forums where you can, of course, do that.
I’ve sent Claude back to look at the transcript file from before compaction. It was pretty bad at it but did eventually recover the prompt and solution from the jsonl file.
I did not know it was SQLite, thx for noting. That gives me the idea to make an MCP server, Skill, or classical script that can slurp those and build a PROMPTS.md, or answer other questions via SQL. Will try that this week.
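A minimal sketch of the "classical script" version, assuming the transcript is a JSONL file where each line is a JSON object with "role" and "content" fields (the path and field names here are guesses and would need adjusting to the real layout):

    import json
    from pathlib import Path

    # Hypothetical transcript path and field names.
    transcript = Path("session.jsonl")

    prompts = []
    for line in transcript.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("role") == "user":  # assumed field name
            prompts.append(str(record.get("content", "")))

    # Dump every user prompt into PROMPTS.md, one section per prompt.
    Path("PROMPTS.md").write_text(
        "\n\n".join(f"## Prompt {i + 1}\n\n{p}" for i, p in enumerate(prompts))
    )

The SQLite/MCP route would be the same idea, just pulling the prompts out with SQL instead of parsing lines.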
I was also disappointed by the lack of Jupyter notebook support: I ended up not using Jupyter notebooks that much anymore, and when I do, well, I run them in Jupyter.
For me, Safari sometimes randomly refuses to execute the search for the terms I entered: at that point I need to bring the search bar back up -> search terms are gone -> hit x -> bring the search bar back up -> search terms are back there -> enter.
I wish they'd stop adding features, especially useless UI “improvements” and AI stuff nobody asked for, and focus on making the system rock solid, as we're used to.
I feel like it’s the opposite: the copy-paste issue is solvable, you just need to equip the model with the right tools and make sure it's trained on tasks where that’s unambiguously the right thing to do (for example, cases where copying code “by hand” would be extremely error prone and so leads to lower reward on average).
On the other hand, teaching the model to be unsure and ask questions requires the training loop to break and bring human input in, which appears more difficult to scale.
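As a rough illustration of what "the right tools" could look like (purely hypothetical tool name and schema, written in the usual function-calling style), a coding agent could expose a mechanical copy tool so the model never retypes the block by hand:

    # Hypothetical tool definition (Python dict); the agent, not the model,
    # performs the copy, so the text is moved verbatim.
    copy_block_tool = {
        "name": "copy_block",
        "description": "Copy lines start..end of src verbatim and insert them "
                       "into dst at insert_line. Returns the copied text unchanged.",
        "input_schema": {
            "type": "object",
            "properties": {
                "src": {"type": "string"},
                "start": {"type": "integer"},
                "end": {"type": "integer"},
                "dst": {"type": "string"},
                "insert_line": {"type": "integer"},
            },
            "required": ["src", "start", "end", "dst", "insert_line"],
        },
    }

During training, tasks where manual retyping reliably fails would then reward calling the tool, which is the kind of signal that scales without a human in the loop.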
> On the other hand, teaching the model to be unsure and ask questions requires the training loop to break and bring human input in, which appears more difficult to scale.
The ironic thing to me is that the one thing they never seem to be willing to skip asking about is whether they should proceed with some fix that I just helped them identify. They seem extremely reluctant to actually ask about things they don't know about, but extremely eager to ask about whether they should do the things they already have decided they think are right!
> The extra typing clarification in python makes the code harder to read
It’s funny, because for me it's quite the opposite: I find myself reading Python more easily when there are type annotations.
One caveat might be: for that to happen, I need to know that type checking is also in place, or else my brain dismisses the annotations as potentially just noise.
I guess this is why in Julia or Rust or C you have this stronger feeling that types are looking after you.
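To make that caveat concrete, here's a minimal sketch (the function and names are just illustrative): with annotations like these, a checker such as mypy or pyright actually enforces them, so they stop being potential noise.

    from collections import Counter

    def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
        # Split on whitespace and count occurrences; the return type tells
        # the reader (and the checker) exactly what comes back.
        counts = Counter(text.lower().split())
        return counts.most_common(n)

    # A checker would reject this call, since n must be an int:
    # top_words("the quick brown fox", n="3")

Without the checker in the loop, nothing stops the annotations from drifting out of sync with the code, which is the "just noise" risk above.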
I think the fact they fundamentally don't look after you is where my resistance comes from. Will try and evaluate some newer code that uses them and see how I get on a bit more :)
Flagging this: that’s fascism.