> Was this physically difficult to write? If it flowed out effortlessly in one go, it's usually fluff.
Probably my best and most insightful stuff has been produced more or less effortlessly, since I spent enough time/effort _beforehand_ getting to know the domain and issue I was interested in from different angles.
When I try writing fluff or being impressive without putting in the work first, I usually bump up against all the stuff I don't have a clear picture of yet, and it becomes a never-ending slog. YMMV.
My most successful blog post was about something I felt strongly about, backed by knowledge and a lot of prior thought. It was written with passion.
People asked for permission to repost it, it got shared on social media, and it ended up ranking higher in Google than a Time magazine (I think) interview with Bill Gates that had the same title.
My take on this book is that 1) it contains a lot of foundational knowledge/wisdom about design, interpreted broadly, that is very useful across contexts, and 2) it is itself, ironically, an example of poor design. Not in the visual sense, but in that its structure and writing do a pretty bad job of actually conveying that knowledge to the reader and of being navigable.
I tried reading it and hated it, then I came back knowing bits and pieces of its contents from elsewhere and was like "yup, this is the only place I've seen all of this together".
"Now, one day back at Data General, his weariness focused on the logic analyzer and the small catastrophes that come from trying to build a machine that operates in billionths of a second. On this occasion, he went away from the basement and left this note on his terminal:
I'm going to a commune in Vermont and will deal with no unit of time shorter than a season."
— from "The Soul of a New Machine" by Tracy Kidder
As the first author on the salmon paper, yes, that was exactly our point. Researchers were capitalizing on chance in many cases because they failed to apply effective corrections for the multiple comparisons problem. We argued, with the dead fish, that they should.
Curious what you find to be "bs" about the results of this paper? That statistical corrections are necessary when analysing fMRI scans to prevent spurious "activations" that are only there by chance?
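Since this comes up every time the salmon paper does, here's a minimal sketch in Python of the effect being described. All the numbers (voxel count, scan count, threshold) are made up for illustration, and Bonferroni is just the simplest stand-in for the family-wise corrections that fMRI packages actually offer; the point is only that running enough tests on pure noise will always produce "significant" voxels unless you correct.

```python
# Minimal sketch of the multiple comparisons problem (illustrative
# numbers only, not the actual salmon-paper analysis). Testing many
# voxels of pure noise at p < 0.05 yields spurious "activations";
# a Bonferroni correction removes them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 10_000   # hypothetical number of voxels tested
n_scans = 20        # hypothetical number of scans per voxel

# Pure noise: no voxel is truly "active".
data = rng.normal(loc=0.0, scale=1.0, size=(n_voxels, n_scans))

# One-sample t-test per voxel against a mean of zero.
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.05
uncorrected_hits = np.sum(p < alpha)
bonferroni_hits = np.sum(p < alpha / n_voxels)  # Bonferroni correction

print(f"Uncorrected 'active' voxels: {uncorrected_hits}")
print(f"Bonferroni-corrected hits:  {bonferroni_hits}")
```

Run it and the uncorrected test flags roughly 5% of the pure-noise voxels (~500 of 10,000) as "active", while the corrected test flags essentially none. That's the dead salmon's "brain activity" in miniature.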
I guess there are various reasons, ranging from "it's hard to make auto-layout algos produce stuff as dense as painstakingly handcrafted maps" to "let's make it harder to scrape/copy the data".
Back then it was dedicated map makers who created maps. Now it's mainly programmers. So it's not surprising that quality tanks when you go from domain-expert staff to IT day laborers.
I've been using futureme.org occasionally for ~15 years now, in case you're a believer in the Lindy effect. FWIW I don't think I've ever used it for anything more than ~1 year ahead; that always seemed fun/interesting enough. Of course there are other considerations entering the picture if you plan ten years ahead, but then again this seems like the kind of fun/light-hearted thing where it doesn't really bother me that I might not end up reading it again. Life happens...
They do not deserve a shred of credit. This is just damage control; pretending it did not happen was never an option. Instead they tried to claim it was just a one-off mistake. What it really shows is that nobody even bothers to read their articles before hitting publish, and that AI is widely used internally.
You're absolutely right! But they can shove this euphemism. Just say that ChatGPT wrote the article and no one read it before publishing; no need for all the fluff.
>> Just say that ChatGPT wrote the article and no one read it before publishing
This is so interesting. I wonder if no human prompted for the article to be written either. I could see some kind of algorithm figuring out what to "write" about and prompting AI to create the articles automatically. Those are the jobs that are actually being replaced by AI: writing fluff crap to build an attention trap for ad revenue.
Very likely this already happens on slop websites (...which I can't name because I don't go there) that, for example, just republish press releases (which could be considered aggregation sites, I guess), or automatically scrape Reddit threads and turn them into listicles on the fly.
Fair play to them for owning up to their mistake, and not just pretending it didn't happen!
That's what the legitimate media has done for the last couple of hundred years. Every issue of the New York Times has a Corrections section. I think the Washington Post's is called Corrections and Amplifications.
Bloggers just change the article and hope it didn't get cached in the Wayback Machine.
The editors were laid off and replaced by an LLM. Or more likely, the editorial staff was cut in half and the ones who were kept were told to use LLMs to handle the increased workload.