patching-trowel's comments

As of now, the Azure Status page still shows no incident. It has to be updated manually: someone has to actively decide to acknowledge an issue, and they're just... not. It undermines confidence in the status page.


I have never noticed that page being updated in a timely manner.


It shows that some people have issues accessing the portal.


For me it’s not a nihilist pit. The fact that I’m gonna die and that nothing I do matters that much alleviates some dread. It’s not a deep and endless ocean I’m swimming in, just a comfy little pool.


Lots of studies show that people who give to others (time, money, etc.) live better lives as measured by happiness, stress levels, and physical health. Giving requires you to believe that what you do can matter.

Parenting is the ultimate exercise in giving.

There may be more than a comfy little pool to swim in :)

https://www.andrews.edu/services/development/annual/the-joy-...


You can act without belief in meaning. Trees and the Earth, for example, work just fine.


If it’s good enough for Socrates, it’s good enough for Socra-me.


My gut says no, because of the way language relates to meaning. In language, a “chair” is a chair is a chair. But in meaning, a chair is not-a-stool, and not-a-couch, and not-a-bench, etc. We understand the object largely by what it is similar to but is not.

In order for an LLM to meaningfully model what is coherent, empathetic, and free from bias, it must also model the close-to-but-NOT-that.
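
To make that concrete, here's a toy sketch of the idea (all feature names and numbers invented purely for illustration; real embeddings have thousands of dimensions and nothing this legible):

    # Hypothetical 3-d "meaning" vectors: (sit-height, has-back, seats-many).
    # Every value here is made up to illustrate the point, not taken
    # from any real model.
    import math

    objects = {
        "chair": (1.0, 1.0, 0.0),
        "stool": (1.0, 0.0, 0.0),  # like a chair, but no back
        "bench": (1.0, 0.3, 1.0),  # like a chair, but seats many
        "couch": (0.8, 1.0, 1.0),  # backed AND seats many
        "table": (0.0, 0.0, 1.0),  # not for sitting on at all
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    for name, vec in objects.items():
        if name != "chair":
            print(f"chair vs {name}: {cosine(objects['chair'], vec):.2f}")
    # Prints: stool 0.71, bench 0.64, couch 0.78, table 0.00.
    # "chair" is pinned down by its small differences from near
    # neighbors, not by anything intrinsic to the word itself.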


That’s a compelling point.

If you’ll indulge me, I’m going to think out loud a little.

What makes sense to me about this point:

- Having zero knowledge of “non-good” could lead to fragility when people phrase questions in “non-good” ways

- If an LLM is truly an “I do what I learned” machine, then “good” input + “good” question would yield “good” output

- There may be a significant need for an LLM to learn the “chair is not-a-stool”, aka “fact is not-a-fiction”. An LLM that only gets affirming meanings might be wildly confused. If true, I think that would be an interesting area to research, not just for AI but for cognition. … now I wonder how many of the existing params are “not”s.

- There’s also the question of scale. Does an LLM need to “know” about mass extinction in order to understand empathy? Or can it just know about the emotions people experience during hard times? Children seem to do fine at empathy (maybe even better than adults in some ways) despite never being exposed to planet-sized tragedies. Adults need to deal with bigger issues where it can be important to have those tragedies front of mind, but does an LLM need to?


I admire your impulse to be more sensitive to people’s dreams.


Check out Sebastian Lague’s Digital Logic Sim too. Comes with some of my favourite videos!

https://sebastian.itch.io/digital-logic-sim


Biomimicry at its finest. Soon we will have wacky flying machines like dragonfly ornithopters with eagle talon landing gear and giant blue monkey testicle fuel tanks.

