Somewhat related: I really like Erlang's docs about handling time. They have common scenarios laid out and document which APIs to use for them. Like: retrieve system time, measure elapsed time, determine order of events, etc.
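The same scenario split exists in most languages; here's a minimal Python sketch (stdlib only) of why the distinction matters, not of Erlang's actual APIs:

    import time

    # Retrieve system time: the wall clock, which can jump when NTP or an
    # admin adjusts it.
    now = time.time()

    # Measure elapsed time: use the monotonic clock, which never jumps
    # backward, so the difference stays meaningful across clock changes.
    start = time.monotonic()
    time.sleep(0.1)  # stand-in for real work
    elapsed = time.monotonic() - start

    # Determine order of events: compare monotonic timestamps taken in one
    # process; wall-clock timestamps may disagree with the real order.
    t1 = time.monotonic()
    t2 = time.monotonic()
    assert t2 >= t1

If I remember the docs correctly, Erlang's answers to the first two are erlang:system_time/0 and erlang:monotonic_time/0.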
Hi! Beeminder cofounder here. We do have a charity option but only in our most expensive premium plan. My own feeling is that a commitment contract with a charity as a beneficiary is less effective because what kind of jerk is motivated to avoid donating to charity? Unless you set the stakes so high that you can't really afford it, I guess?
The first part of this https://karthinks.com/software/fifteen-ways-to-use-embark/ also talks about how inverting action and object can be useful, and how Emacs's embark package makes it possible to even go back and forth between object and action as needed.
> So if you positively encourage it to “think out loud”, it stands a much better chance of being able to use its deduction and reasoning capabilities (which are quite significant).
It's a well-known prompt-engineering technique to ask LLMs to articulate each step.
One way to think about it is that an LLM's output is a pure function of its input. You need to chain calls to arrive at the final answer, and no single step can escape the model's comprehension capability. Each step's output is appended to the next step's input, so you can ask the model to do the chaining itself.
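A minimal sketch of that chaining, where complete() is a hypothetical stand-in for whatever LLM API you use:

    def complete(prompt: str) -> str:
        # Hypothetical single-pass LLM call; wire up your real client here.
        raise NotImplementedError

    def solve_step_by_step(question: str, max_steps: int = 8) -> str:
        # Each iteration is one pure input -> output pass; the output is
        # appended to the growing transcript, so the next pass can build
        # on everything articulated so far.
        transcript = (
            f"Question: {question}\n"
            "Work through this one step at a time. "
            "When done, write a line starting with ANSWER:.\n"
        )
        for _ in range(max_steps):
            step = complete(transcript)
            transcript += step + "\n"
            if "ANSWER:" in step:  # stop convention we asked the model to follow
                break
        return transcript

The point is just that the reasoning has to live in the prompt text itself: the model has no scratchpad other than what you feed back in.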
I wonder if this is a result of the GPT part (i.e. the distribution of next tokens tends to be logically coherent) or the RLHF part (i.e. it steers the GPT toward logical coherence).
None are directly related to solving it in general.
You can't solve it in general, because you can't answer an arbitrarily complex question with a single pass over limited data.
However, I do think that current methods are extremely inefficient. There is a lot of nonsense, noise, and duplication in training data; it could be compressed to be orders of magnitude smaller. Imagine a dense, close-to-optimal way to kick off learning with logic, reasoning, and things like set theory and category theory, so that they are reused later when learning, say, a programming language. Learning the programming language would also be efficient: based on a cleaned-up specification, a standard library learned from its own source code, and possibly access to a compiler/interpreter for feedback. I mean the quality-of-data slider pushed to the limit.
But this single-pass approach, devoid of inner dialogue, will always require articulating steps, no matter how good the data quality is.
Models trained only on internet text don't think out loud, so I assume RLHF conditioned them into that behavior, since it makes them able to solve much more complicated questions, though it's sometimes irritating to read.
I think it can definitely help overcome the "mental block" that we (I?) face sometimes when staring at a task list. If something is too hazily defined, even an incorrect breakdown of the task can spur me into action.
https://www.erlang.org/doc/apps/erts/time_correction.html#ho...