Well, if you come at it from the mindfulness angle, there are real studies showing that mindfulness works. https://pmc.ncbi.nlm.nih.gov/articles/PMC8083197/ and similarly, if you come at it from the religious angle, you can trace a lot of the aspects of mindfulness back to the Buddha's original teachings as recorded in canon. And if you ask if there is a fundamental point beyond those, I think the answer is that there is none recorded - the best description I have been able to get of Nirvana is that it is a state of perfect mindfulness.
There's a specific writing style for globalized English that AIs use. This post also had none of the stylistic flourishes a real author might add. And then there are simple things, like constructing a table of 68 libraries or whatever organized by relatively subjective categories. Nobody is going to do that by hand.
There's a term, "load-bearing", that comes up a lot in my AI conversations. Has anyone else encountered it being used a lot in theirs, or is it a quirk of personalization?
I use load-bearing all the time in conversation. People need to be careful: just because they don't use a certain phrase themselves doesn't automatically mean text that does is AI.
Both you and parent are making a lot of load-bearing assumptions.
As someone who likes to use a lot of em dashes in writing -- the 'heuristics' that AI 'hunters' like to use need a lot of further refinement before I would trust them with anything. And yet there are legions of anti-AI crusaders out there wielding them like weapons.
These folks are reinforcing a bias against all kinds of people, particularly those who are not native English speakers and were very likely taught 'globalized' English in their language training.
I've heard it a lot from podcasts aligned with the abundance movement. I think it's common within the rationalist movement.
Personally I really like it in "load-bearing assumptions", because it lets you work with an assumption whilst pointing out its potential issues.
There are also fashions. So people could be using "load-bearing" more because it's fashionable. Like "let's double-click on that", "spinning rust", etc.
Well, I mean, you can certainly say economic value doesn't capture all of the value. But you can also say that there are metrics of value that do capture everything. Thermodynamic entropy, for example - its steady upward march is statistically unstoppable. You can't measure a child's economic value without making a lot of assumptions, but you can measure a child's thermodynamic heat production with a few simple experiments. It might sound a little out there, but I've been looking at the maximum entropy production principle and some books on thermodynamics, and there really is a lot that is applicable to calculations about human systems. Viewing humans as dissipative structures designed to maximize entropy production explains a lot about how the world works - notably, some questions about our energy usage patterns. AI may not be useful economically yet, but it's excellent at dissipating heat.
In fact, entropy is relative to what we define as chaotic versus ordered. When we can't explain the order, we call it chaos, and for convenience we model it statistically instead of in absolute terms. I learned that a few months ago from an article posted on HN.
At least with Gemini, I found the trick is to mention a task list anywhere in the system instructions. Then the follow-up prompt will always be "do you want to add a task for that?", which is actually useful most of the time.
https://doi.org/10.1038/s41591-025-03622-w is the paper they're basing the research on. In primary care, the accuracy rates are in the 80s, so something like a 17% error rate. That's still roughly 5-to-1 odds of getting a correct result, though, which is much better than nothing.
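A quick sanity check on that arithmetic (a minimal sketch; the 0.83 figure is my reading of "accuracy in the 80s", not a number taken from the paper):

```python
def odds_correct(accuracy: float) -> float:
    """Odds (correct : incorrect) implied by a raw accuracy figure."""
    return accuracy / (1.0 - accuracy)

# Assuming ~83% accuracy: 0.83 / 0.17 is about 4.9, i.e. roughly 5-to-1.
print(round(odds_correct(0.83), 1))  # 4.9
```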
As the PRA outlines (and the article goes through), publishing a notice in the Federal Register does not suffice to satisfy the PRA; it is just one step in the process.
It is just "notice" of their intention to do it. They still have to complete the other pieces, including getting their OMB control number.
Of course, as the article points out, all of this is pretty moot if they're going to get the police to drag you away and not let you fly, irrespective of the position in law.
If you are flipping through a reading to find a quote, print is hard to beat, unless you can search for a word digitally. But if you are aiming for comprehension, RSVP speed-reading presentation beats any kind of print reading by a mile. So it is hard to say where the technology is going. Nobody has put in the work to make reading on an iPad as smooth and fluid as print in terms of rapid page flipping, but the potential is there. It is kind of laughable how the salesman will say, oh, it has a fast processor, and then you open a PDF, scroll a few pages quickly, and they come up blank instead of actually showing text.
I get that this is essentially vibe coding a language, but it still seems lazy to me. He just asked the model zero-shot to design a language, with no guidance. You could at least feed it the Rosetta Code examples and ask it to identify design patterns for a new language.
I was thinking the same. Maybe he should have tried to think it through instead of just asking the model. The premise is interesting: "we optimize languages for humans, maybe we can do something similar for LLMs". But then he just asks the model to do the thing instead of thinking about the problem. Instead of prompting "hey, make this", a more granular, guided approach could have been better.
For me this is just lost potential on the topic, and an interesting read made boring pretty fast.
This was mainly an exercise in exploration with some LLMs, and I think I achieved my goal of exploring.
Like I said, if this topic is interesting to you and you'd like to explore another way to push on the problem, I highly recommend it. You may come up with better results than I did by having a better idea what you're looking for as output.
I tried a thread, and I got that both LLMs and humans optimize for the same goal - working programs - and that the key is verifiability. So it recommended Rust or Haskell combined with formal verification and contracts. I think the conclusion of the post holds up: "the things that make an LLM-optimized language useful also happen to make them easier for humans!"
I update Linux maybe once a year. Sure, there are security vulnerabilities. But I'm behind a firewall. And meanwhile, I don't have to spend any time dealing with update issues.
But Windows is made for the masses. It's definitely a good thing that Microsoft forces auto-updates, because otherwise 95% of people would run around with devices that have gaping security holes. And 90% of those people are not behind a firewall 100% of the time.
The unfortunate side effect is that they shove ad- and bloatware down your throat through those same updates.
But that is because Microsoft does not care about the end user at all; it's not the fault of auto-updates.
Logic minimization is kind of boring? I had to solve a problem once, and the answer was still to use the espresso software from the 1980s. It's a pretty specialized problem, and honestly I don't see how you would improve on it, besides integrating the digital circuit design research. But in software terms, there's not really any reason to pass around a Boolean logic formula instead of just passing around the truth table directly.
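To illustrate passing the truth table around directly: for an n-input function, the whole table fits in a single integer bitmask, where bit i holds the output for input assignment i. This is a minimal sketch; the helper names are mine, not from any particular library.

```python
def make_table(f, n):
    """Pack f's outputs over all 2**n input rows into one integer bitmask."""
    table = 0
    for row in range(1 << n):
        # Decode row index into individual input bits (bit i = input i).
        bits = [(row >> i) & 1 for i in range(n)]
        if f(*bits):
            table |= 1 << row
    return table

def evaluate(table, *bits):
    """Look up the output for one input assignment in the packed table."""
    row = sum(b << i for i, b in enumerate(bits))
    return (table >> row) & 1

# XOR of two inputs: only rows (1,0) and (0,1) are true -> mask 0b0110 == 6.
xor_table = make_table(lambda a, b: a ^ b, 2)
print(xor_table)                   # 6
print(evaluate(xor_table, 1, 0))   # 1
```

Equality, composition, and counting satisfying assignments all become integer operations on the mask, which is part of why a formula representation buys you little in software.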