Hacker News

I'm so torn about this article. On the one hand, it's great to see technologists engaging more with philosophy—arguably the technological landscape we currently have would have been much better if they had done so more deeply and more frequently.

On the other hand, this is a pretty shallow article and does not, on my read, offer anything to anyone even vaguely familiar with technology and Nietzsche's philosophy. A more interesting integration is Nolan Gertz's Nihilism and Technology.

I think the ACM would do better to invite guest authors from philosophy departments to write or co-write such pieces.



Gertz concludes:

>But passive nihilism is also leading us to see in technologies a way to become sicker humans, humans who are trapped in an endless cycle of never being satisfied with how much “better” we have become. In other words, passive nihilism is leading us toward active nihilism, toward being able to question if we know what “better” means; to question if we know what purpose such betterment is meant to serve; to question whether we are trying to become better only for the sake of being better, for the sake of being different, for the sake of not being who we are; to question whether our pursuit of the posthuman is leading us to risk becoming inhuman because of our nihilistic desire to be anything other than merely human. It is through exploring such questions that we can destroy in order to create, in order to create new values, new goals, and new perspectives on the relationship between human progress and technological progress.

TFA (read charitably): it's exploring one such question, or at least trying to gerrymander Nietzsche's "inventing value" onto the Millennial trope of "creating value".


Yeah, but we need to distinguish between different types of nihilism. Existential nihilism (which the article is talking about) is different from moral nihilism, for example. Moral nihilism is, in essence, the view that nothing is inherently morally right or wrong (a position I used to hold).


I hate to be the one to say this, but this article reads as though it was written by an LLM. The shallowness is one reason. Another is the lack of any individual voice that would suggest a human author.

And there are the unsupported citations and references:

The sentence “The World Economic Forum’s 2023 Future of Jobs report estimates 83 million jobs may be displaced globally, disproportionately affecting low- and mid-skill workers” is followed by a citation to a book published in 1989.

Footnote 7 follows a paragraph about Nietzsche’s philosophy. That footnote leads to a 2016 paper titled “The ethics of algorithms: Mapping the debate” [1], which makes no reference to Nietzsche, nihilism, or the will to power.

Footnote 2 follows the sentence “Ironically, as people grow more reliant on AI-driven systems in everyday life, many report heightened feelings of loneliness, alienation, and disconnection.” It links to the WEF’s “Future of Jobs Report 2023” [2]. While I haven’t read that full report, the words “loneliness,” “alienation,” and “disconnection” yield no hits in a search of the report PDF.

[1] https://journals.sagepub.com/doi/10.1177/2053951716679679

[2] https://www.weforum.org/publications/the-future-of-jobs-repo...


A positive outcome of LLMs: regardless of whether this specific article is AI-generated, we are becoming increasingly intolerant of shallowness. Where in the past we would engage with a source's token effort, we now draw conclusions and disengage much faster. I expect the quality of real articles to improve to get past readers' more sensitive filters.


I now notice myself cringe internally whenever I say anything that has become a ChatGPT-ism, even if it's something I always used to say.


I used to write very formally and neutrally, and now I don't, because it comes across as LLM-ish. My sentences used to lack "humanity", so to speak. :(


I'm a member of the ACM, so I would report this article.

However, I think the author may just have made a mistake and shifted their references off by one, since the 2023 report is actually #2:

2. Di Battista, A., Grayling, S., Hasselaar, E., Leopold, T., Li, R., Rayner, M. and Zahidi, S., 2023, November. Future of jobs report 2023. In World Economic Forum (pp. 978-2).

Similarly, footnote 7 should probably point to #8:

8. Nietzsche, F. and Hollingdale, R.J., 2020. Thus spoke zarathustra. In The Routledge Circus Studies Reader (pp. 461-466). Routledge.


The Communications of the ACM no longer has an editor?


Suppose you've managed to get a job as an editor at Communications of the ACM. As "Editor, Communications of the ACM" what do you think your job is?


Possibly displaced by an LLM.


Nope, I'm still the Editor-in-Chief, and the last time I checked, I'm not an LLM. Nor are the other 100+ associate editors of the magazine.

I want to point out that this is a blog post appearing on the CACM website. It was not reviewed or edited by CACM, beyond a few cursory checks.


Now that makes more sense.

I guess it doesn't help that the post is formatted as a typical article with the bio blurb. It's worth distinguishing the blog entries more and perhaps posting a disclaimer. After all when people think of CACM they don't generally have blogs in mind.


In addition to this, another telltale sign of LLM authorship is the repeated forced attempt to draw connections or parallels where they're nonsensical, trying to fulfill an essay prompt that doesn't, in those instances, have much meat to it.

    > As AI systems increasingly mediate decisions [...], decisions once 
    > grounded in social norms and public deliberation now unfold within 
    > technical infrastructure, beyond the reach of democratic oversight.

    > This condition parallels the cultural dislocation Nietzsche observed in 
    > modern Europe, where the decline of metaphysical and religious authorities 
    > undermined society’s ability to sustain shared ethical meaning. In both 
    > cases, individuals are left navigating fragmented norms without clear 
    > foundations or frameworks for trust and responsibility. Algorithmic 
    > systems now make value-laden choices, about risk, fairness, and worth, 
    > without mechanisms for public deliberation, reinforcing privatized, 
    > reactive ethics.
Note how "algorithmic systems" making "value-laden choices" reinforcing "privatized, reactive ethics" has absolutely nothing to do with the spiritual value collapse that Nietzsche, who was uninterested in or even opposed to critiques of power structures, and who wasn't much impressed by the whole idea of democracy, saw in 19th century Europe. While the criticism of AI systems being beyond the reach of democratic oversight is a common and perfectly valid one, it just simply doesn't touch on Nietzsche's philosophy; yet the LLM piece uses language and turns of phrase ("parallels", "in both cases") to make it sound as if there were a connection.

If I am strongly opposed to anti-democratic, opaque AI surveillance machines, then I am not an individual "left navigating fragmented norms without clear foundations"; on the contrary, my foundations are quite clear indeed. And on the other hand, increased automation eroding "frameworks for trust and responsibility" seems more likely to have been welcomed than opposed by Nietzsche, who had little patience for moral affectations like responsibility.


At this point I regularly see front-page HN articles that are LLM written (amusingly sometimes accompanied by comments praising how much of a breath of fresh air the article is compared to usual "LLM slop").

I worry about when I no longer see such articles (as that means I can no longer detect them), which likely will be soon enough.


Love the optimism couched as pessimism. LLM training data doesn't do that.


Beyond the cringe of posting AI slop that 'argues' about eroding social norms and declining trust due to AI, there's also this:

"The prestige and unmatched reputation of Communications of the ACM is built upon a 60-year commitment to high quality editorial content"

Hmmm. Ok whatever you say folks


It was written by an LLM because it’s another hype piece for AI.


Thanks for pointing this out. The concepts in the article are important to me, but yeah, that's weird.


Indeed. The attempt to weave Nietzsche into the article seems forced, almost as if the author had just read a commentary on Nietzsche or didn't actually have anything of substance to say to begin with. We get some observations about the effects people expect AI to have on things like work, and some bullet points that look like they've been cribbed from an LLM summary of Nietzsche's works. And then, at best, what? Hand-wavy imperatives suggesting we overcome or adapt to the shift AI will cause, in Nietzschean fashion?

Pretty vacuous.


Thanks for the book recommendation. Adding the link here so it shows in the monthly book suggestion:

https://a.co/d/iR7sxnU



Where is the monthly book suggestion?


Probably referring to the mailing list from https://hackernewsbooks.com


Gertz speaks at tech conferences and that’s how I came across his work https://youtu.be/mcPFs5im2bs?si=DeEcWROAWBjR-RVu


Any book about technology's effects on society written before 2022 needs a massive update since the LLM revolution. Sadly, Gertz's "Nihilism and Technology" was published in 2018.


He has an updated 2024 edition

Edit: a link for the interested: https://www.bloomsbury.com/us/nihilism-and-technology-978153...


I didn't read anything about a connection between technology and philosophy. It's more about finding meaning in a world that has been over-technologised.


Finding your humanity in a world that is seeking to strip you of that humanity is very Nietzsche.

So perhaps TFA just didn't do a good job of explaining that?


How is "finding meaning in a world that has been over technologised." anything other than an intersection of philosophy and technology?


The phrasing makes it sound like one begets the other, whereas the two seem to me to be independent.


It's almost as if there were a book by Heidegger called "The Question Concerning Technology" that talks about basically all the things mentioned in the article.


Heidegger has a reputation for explaining his opinions poorly. That would still be fine if I didn't also suspect that he was a pseudo-intellectual.



