
This article is about committing knowledge to long-term memory. I've been thinking about ways to actually augment long-term memory with external storage. I'm unsatisfied with current software (mind maps, OneNote/Evernote, WorkFlowy, org-mode...). They all feel like building a document. I've noticed that when I come back to my notes in those formats, it takes too much time to retrieve information and to add new information without restructuring what's already there. The existing solutions are not optimized for quickly putting down facts/ideas/goals for later retrieval, without worrying about layout and structure.

To efficiently augment memory, the software (especially the UI) should mimic the way we think. Each captured thought should be related in some way to one or more previous thoughts and stored in the correct context. Notes should be fairly short, and they should be organized in a graph structure. Edges should define relations, for example: contains, depends on, implies, follows.
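
To make it concrete, here's a minimal sketch of the structure I have in mind (the names and the relation set are placeholders for illustration, not an existing tool):

    // Minimal sketch of a thought graph with typed edges (hypothetical names).
    type Relation = "contains" | "depends_on" | "implies" | "follows";

    interface Thought {
      id: number;
      text: string;      // short note, a sentence or two
      createdAt: number;  // timestamp, useful for reminders later
    }

    interface Edge {
      from: number;       // id of the new thought
      to: number;         // id of the existing thought it relates to
      relation: Relation;
    }

    class ThoughtGraph {
      private nextId = 1;
      thoughts = new Map<number, Thought>();
      edges: Edge[] = [];

      // Capture a new thought and immediately link it to existing context.
      addThought(text: string, links: { to: number; relation: Relation }[] = []): Thought {
        const t: Thought = { id: this.nextId++, text, createdAt: Date.now() };
        this.thoughts.set(t.id, t);
        for (const l of links) {
          this.edges.push({ from: t.id, to: l.to, relation: l.relation });
        }
        return t;
      }
    }

    // Usage: capture a goal, then hang a dependent thought off it.
    const g = new ThoughtGraph();
    const goal = g.addThought("Build a memory-augmentation app");
    g.addThought("Needs an input method faster than swiping",
                 [{ to: goal.id, relation: "depends_on" }]);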

When retrieving a thought, it should be visualized in context with other related thoughts. This graph should have the same layout each time it's retrieved, for better visual navigation. If the system limits the visualization to 2nd-degree neighbors, there should always be enough space on a 2D plane to expand the graph with new thoughts. Manual layout should be discouraged because it wastes too much time.
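
Retrieval up to 2nd-degree neighbors could look roughly like this (just a sketch; the adjacency map and the stable sort for deterministic layout are my own assumptions):

    // Sketch: collect a thought plus its neighbors up to depth 2, in a stable order.
    // `adjacency` maps a thought id to the ids it is linked to (in either direction).
    function neighborhood(
      adjacency: Map<number, number[]>,
      rootId: number,
      maxDepth = 2
    ): number[] {
      const seen = new Set<number>([rootId]);
      const ordered: number[] = [rootId];
      let frontier = [rootId];

      for (let depth = 0; depth < maxDepth; depth++) {
        const next: number[] = [];
        for (const id of frontier) {
          // Sort so traversal (and any layout derived from it) is the same every time.
          const neighbors = [...(adjacency.get(id) ?? [])].sort((a, b) => a - b);
          for (const n of neighbors) {
            if (!seen.has(n)) {
              seen.add(n);
              ordered.push(n);
              next.push(n);
            }
          }
        }
        frontier = next;
      }
      return ordered;
    }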

The basic storage/retrieval model could also be expanded with additional processing to further offload mental work: logical processing (if an assumption proves to be incorrect, all dependent thoughts should be flagged as incorrect/uncertain), goal prioritization, and future event reminders.
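
The logical-processing part is essentially flag propagation along "depends on" edges; a sketch (made-up names again):

    // Sketch: when an assumption turns out to be incorrect, flag everything that
    // (transitively) depends on it as uncertain. `dependents` maps a thought's id
    // to the ids of thoughts that depend on it.
    function flagDependents(
      dependents: Map<number, number[]>,
      incorrectId: number
    ): Set<number> {
      const flagged = new Set<number>();
      const stack = [incorrectId];

      while (stack.length > 0) {
        const id = stack.pop()!;
        for (const dep of dependents.get(id) ?? []) {
          if (!flagged.has(dep)) {
            flagged.add(dep);  // mark as incorrect/uncertain
            stack.push(dep);   // and keep propagating up the dependency chain
          }
        }
      }
      return flagged;
    }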

The hard problem is the entry method. Because such a system should always be accessible, the candidate devices are a smartphone and a smartwatch. So far, all input methods except a keyboard are too slow and error-prone. Most input methods also obscure much of the screen and require visual feedback to verify that the text is correct (swiping and auto-correct). I'm researching gesture-based and chording virtual keyboards, but there's nothing suitably fast.

This is on my side-project backlog, but hopefully I'll find something close enough that I can use it instead of building from scratch. Any suggestions?



I was thinking about something similar this morning.

For the last year I've been making a special audio player for studying languages. There's no screen, just audio input/output and 15 buttons that can glow in different colors. I've programmed it to navigate through a tree structure of 'cards', where each card is (an audio clip of) a sentence in the target language plus a recorded translation. The interface is all audio-based, with buttons... hence the 'TAPIR' player: Tactile Audio Player/Instructional Resource. Also, it's a cute animal. :-)
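
Roughly how the card tree is modeled, as a simplified TypeScript sketch (the names are made up for illustration; this isn't the actual firmware code):

    // Simplified sketch of the card tree and button navigation (hypothetical names).
    interface Card {
      promptClip: string;       // id of the target-language audio clip
      translationClip: string;  // id of the recorded translation
      children: Card[];         // sub-cards reachable from this one
    }

    // Navigation state: the focused card plus how we got there,
    // so a single button can mean "back up a level".
    interface PlayerState {
      path: Card[];             // ancestors, root first
      current: Card;
    }

    function descend(state: PlayerState, childIndex: number): PlayerState {
      const child = state.current.children[childIndex];
      if (!child) return state;  // button maps to no child: ignore the press
      return { path: [...state.path, state.current], current: child };
    }

    function ascend(state: PlayerState): PlayerState {
      const parent = state.path[state.path.length - 1];
      if (!parent) return state; // already at the root
      return { path: state.path.slice(0, -1), current: parent };
    }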

The SoC has an Arm Cortex-M3 with 128 kB of RAM, so you can run a JS or Lua interpreter (I started looking into adding the Duktape JS interpreter to the firmware a couple of days ago) and program your own interface in JS. I'm also planning a Bluetooth version, where the code runs on your smartphone and the player is essentially just a special keyboard with controllable lights and a mic. Maybe it's useful for creating new notes quickly. If you want a visual representation of your notes, however, you'd still need a screen.
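
The idea is that an interface script just reacts to button events and drives playback, recording, and lights. Something like this, where every device binding is entirely hypothetical (Duktape only supplies the JS engine; the API would be whatever the firmware exposes):

    // Hypothetical device bindings, only to show the shape of an interface script.
    declare function onButton(handler: (button: number) => void): void;
    declare function playClip(clipId: string): void;
    declare function recordClip(maxSeconds: number): string; // returns a new clip id
    declare function setLight(button: number, color: "red" | "green" | "blue" | "off"): void;

    onButton((button) => {
      if (button === 0) {
        // Button 0: record a quick voice note, acknowledge with a green light,
        // then play it back for confirmation.
        const clip = recordClip(10);
        setLight(0, "green");
        playClip(clip);
      } else {
        // Other buttons: placeholder behavior, e.g. light up while a card plays.
        setLight(button, "blue");
      }
    });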

I'll make a proper demo video in the next few days; maybe I'll do a 'Show HN'. I've only got one working board at the moment, but I'm planning to solder up some more soon and could drop one in the post to you if you'd like to mess around with it.

Couple of photos here:

https://photos.app.goo.gl/XWsxtQieKtixfEEJ9

https://photos.app.goo.gl/uvyfh2EYEz1G8kh88


You are describing a Memex, and I agree there are no satisfying options.

Some related keywords / projects (from my natural memex): IdeaFlow, Mark Carranza, The Brain, Org-brain.

For input methods, I'm excited about a combination of speech-to-text and neural interfaces like Ctrl-Labs.

Given the level of augmentation users of Anki and Org-mode tout, I'd be first in line for a better interface (like the one you described).


Great list of related projects! I missed some of them in my research. Now I'll deep dive into each and steal all the good ideas.

Speech-to-text is very promising, but I could only use it about 20% of the time during the day. The other problem is that I would constantly have to check the recognized text for errors and need a way to delete and re-enter it. So far it's been frustrating to use.

Neural interfaces are really the end goal. AFAIK, current commercially available solutions can recognize only a dozen actions after training, which isn't enough for text input.


M Eifler is working on this. They have amnesia, so a prosthetic memory has more potential impact for them than for most people. https://www.youtube.com/playlist?list=PLN5yV7QdHaK5sqJqKCrfn... Note that the playlist is in reverse-chronological order, and the most interesting video is probably "How Amnesia Works".

There's also a Patreon with a lot more information, including an "Amnesia Diaries" series where M makes videos for future-M to remember. https://www.patreon.com/BlinkPopShift



