srijan4's comments

Somewhat related: I really like Erlang's docs about handling time. They have common scenarios laid out and document which APIs to use for them. Like: retrieve system time, measure elapsed time, determine order of events, etc.

https://www.erlang.org/doc/apps/erts/time_correction.html#ho...
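
Not Erlang, but a rough Python analogue of the same distinction those docs draw (wall-clock system time for timestamps vs. a monotonic clock for measuring elapsed time):

    import time

    # System (wall-clock) time: good for timestamps, but it can jump if the
    # clock is adjusted (NTP, manual changes), so don't use it for durations.
    started_at = time.time()

    # Monotonic time: meaningless as an absolute value, but it only moves
    # forward, which makes it the right tool for measuring elapsed time.
    t0 = time.monotonic()
    time.sleep(0.1)  # stand-in for the work being timed
    elapsed = time.monotonic() - t0

    print(f"started at unix time {started_at:.0f}, took {elapsed:.3f}s")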


A much better alternative for this is: https://www.beeminder.com/


I like this, but I feel like the money should go to charity with a transaction fee going to Beeminder.


Hi! Beeminder cofounder here. We do have a charity option but only in our most expensive premium plan. My own feeling is that a commitment contract with a charity as a beneficiary is less effective because what kind of jerk is motivated to avoid donating to charity? Unless you set the stakes so high that you can't really afford it, I guess?


Everyone has to start somewhere, right?


My personal blog with not much content, but I try: https://www.srijn.net/


The first part of this https://karthinks.com/software/fifteen-ways-to-use-embark/ also talks about how inverting action and object can be useful, and how Emacs's embark package even makes it possible to go back and forth between object and action as needed.



Or roll up your sleeves and dig into xset, xautomation (a.k.a. XAUT), wmctrl, xte, xbindkeys, and others, which can all be scripted.
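
For instance, a minimal Python sketch that drives wmctrl through subprocess (assuming wmctrl is installed and an X session is running; the "Firefox" title is just an illustration):

    import subprocess

    # List all managed windows; `wmctrl -l` prints one window per line:
    # window id, desktop number, hostname, and title.
    windows = subprocess.run(
        ["wmctrl", "-l"], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    print("\n".join(windows))

    # Raise and focus the first window whose title matches "Firefox".
    subprocess.run(["wmctrl", "-a", "Firefox"], check=True)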


The part that was interesting for me was:

> So if you positively encourage it to “think out loud”, it stands a much better chance of being able to use its deduction and reasoning capabilities (which are quite significant).


It's a well-known prompt-engineering technique to ask LLMs to articulate each step.

One way to think about it is that an LLM's output is a pure function of its input. You need to chain calls to arrive at the final answer, and each step can't exceed the model's single-pass comprehension capability. Since one step's output is appended to the next step's input, you can ask the model to do that chaining itself by writing out intermediate steps.
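
A rough sketch of that chaining idea; call_llm here is a hypothetical stand-in for whatever completion API is being used:

    def call_llm(prompt: str) -> str:
        """Hypothetical single-pass completion call; each call is a pure
        function of the prompt it receives."""
        return f"<model output for: {prompt[:40]}...>"  # placeholder

    question = (
        "A bat and a ball cost $1.10 together, and the bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    )

    # Step 1: ask the model to articulate its reasoning instead of
    # jumping straight to an answer.
    reasoning = call_llm(
        f"{question}\n\nThink out loud, step by step, before answering."
    )

    # Step 2: the articulated steps become part of the next pass's input,
    # so the final answer can build on them.
    answer = call_llm(
        f"{question}\n\nReasoning so far:\n{reasoning}\n\n"
        "Now give only the final answer."
    )
    print(answer)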


I wonder if this is a result of the GPT part (i.e. the distribution of 'next tokens' is such that it tends to be logically coherent), or the RLHF part (i.e. it tends to steer the GPT toward logical coherence).


Both and neither at the same time.

Both contribute to overall performance.

Neither is directly related to solving it in general.

You can't solve it in general, because you can't produce the answer to an arbitrarily complex question in a single pass over limited data.

However, I do think that current methods are extremely inefficient. There is a lot of nonsense, noise, and duplication in training data; it could be compressed to be orders of magnitude smaller. Imagine a dense, close-to-optimal way to kick off learning with logic, reasoning, and things like set theory and category theory, so that this foundation is reused when learning, e.g., a programming language - where learning the language is also efficient, based on a cleaned-up specification, a standard library learned from its own source code, and possibly access to a compiler/interpreter for feedback. In short, the quality-of-data slider pushed to the limit.

But this single-pass approach, devoid of inner dialogue, will always require articulating the steps, no matter how good it is quality-wise.


The models trained only on internet text don't think out loud, so I assume this behavior was conditioned in via RLHF, since it lets the model solve much more complicated questions, though it's sometimes irritating to read.


> We realized (...) that we can dramatically improve its ability in math and tutoring if we allow the AI to think before it speaks.

This is from Sal Khan's TED talk on how they're integrating ChatGPT into Khan Academy - https://youtu.be/hJP5GqnTrNo?t=745


That was my first thought as well. I'm sure a 4th prompt to choose different colors for the graphs would work.


I think it can definitely help overcome the "mental block" that we (I?) sometimes face when staring at a task list. If something is too hazily defined, even an incorrect breakdown of the task can help spur me into action.


I used ssh_config's `Match` recently to route only Okta-managed servers via their auth proxy command. That was quite powerful.
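
Roughly along these lines; the hostname pattern and proxy command below are made up, but `Match` plus `ProxyCommand` is the real mechanism:

    # ~/.ssh/config
    # Only hosts matching this pattern go through the auth proxy;
    # everything else connects directly.
    Match host *.internal.example.com
        ProxyCommand okta-auth-proxy connect %h %p
        User deploy

    Host *
        ServerAliveInterval 60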


We recently started using buildjet, and it has been nothing short of awesome. So I can highly recommend them.

