
Reading the thread, I definitely overlooked language learning solutions. Thanks for sharing!


This specific problem gave us a lot of headaches while building https://rember.com. We don't have a good solution yet. My hope is that something like content-aware memory models solves the problem at a lower level, so we don't have to worry about it at the product level.


It would be awesome to work on that data. I'm afraid of the privacy implications though.


What sort of privacy implications? I'd imagine that Anki data would be relatively free of privacy concerns, as it contains no PII, and for the AnKing decks, all of the content is standardized and so wouldn't contain personal notes. Though I've never worked with this data, so please let me know if I'm wrong!

Also, having used those decks in the past, downloaded the add-on, and looked at the monetization structure of developers like the AnKing, I would be very surprised if aggregate data on review statistics wasn't already collected in some way. That is, if the AnKing is already collecting this data to design better decks and understand which cards are the hardest (probably to target individual support), then I imagine that collecting some de-identified version of that data wouldn't be too much of a stretch.

Plus, considering that the developers of AnKing-style decks are all doctors, they probably have a pretty good grasp of handling PII and could (hopefully) make sound decisions on whether to give you access :)


You're right, it might work by restricting to just AnKing data. My concern was around other, possibly personal, cards making their way into the dataset.


Amazing work! In https://rember.com the main unit is a note representing a concept or idea, plus some flashcards associated with it; hsrs would fit perfectly! I'll look more deeply into it.


yeah! hsrs elements are the notes, and their learnable properties would be the flashcards.

however, individual grammar outputs aren't their own cards: you get a fresh example every time you see a card. this requires a very different scheduling approach, since you have to estimate how all the cards in the 'call tree' contribute to the overall result and reschedule them as well: https://github.com/satchelspencer/hsrs/blob/main/docs/overvi...
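
roughly, the shape of the idea (a toy sketch only, not the actual hsrs algorithm; every name here is made up):

    from datetime import date, timedelta

    def propagate_review(components, grade, weights):
        # toy: spread a composite review's outcome over its component cards.
        # grade is in [0, 1]; weights[i] estimates component i's contribution
        # to the overall result (they sum to 1).
        for card, w in zip(components, weights):
            # components that contributed more move more with the grade
            card["stability"] *= 1.0 + w * (2.0 * grade - 1.0)
            card["due"] = date.today() + timedelta(days=round(card["stability"]))
        return components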


Pretty much all spaced rep systems except for Anki structure their data this way: an editable data atom with flashcards auto-derived from it, by template or otherwise.
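
For example, the atom-plus-derived-cards shape is roughly (illustrative sketch, not any particular system's schema):

    from dataclasses import dataclass, field

    @dataclass
    class Note:
        """The editable atom: fields, plus templates that derive cards."""
        fields: dict
        templates: list = field(default_factory=list)  # (front_key, back_key)

        def cards(self):
            return [(self.fields[f], self.fields[b]) for f, b in self.templates]

    n = Note({"word": "perro", "meaning": "dog"},
             [("word", "meaning"), ("meaning", "word")])
    assert len(n.cards()) == 2  # forward and reverse card from one atom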


Phrasing looks amazing!

There's a lot of UX work to do for SRS. Do you have a sense of how well the ideas behind Humane SRS translate outside of language learning? I imagine the main challenge would be identifying a steady influx of new cards.

I agree that gains in scheduling accuracy are fairly imperceptible for most students. That's why, over the past few years building https://rember.com, we've focused on UX rather than memory models. People who review hundreds of cards a day definitely feel the difference; doing 50 fewer reviews per day is liberating. And now that LLMs can generate decent-quality flashcards, people will build larger and larger collections, so scheduler improvements might suddenly become much more important.

Ultimately, though, the biggest advantage is freeing the SRS designer. I'm sure you've grappled with questions like "is the right unit the card, the note, the deck, or something else entirely?" or "what happens to the review history if the student edits a card?". You have to consider how review UX, creation/editing flows, and card organization interact. Decoupling the scheduler from these concerns would help a ton.


I would say probably 50% of the learnings from Humane SRS would be applicable in other fields/schedulers. The other half is language-specific, though: at the end of the day, if you try to learn a language the same way you cram for a med school exam, you're probably not going to succeed. The inverse is also true: please, nobody use Phrasing to cram for their med school exam XD

I agree most people's collections get unwieldy and something needs to be done, so props to Rember! I take the opposite approach: instead of helping people manage large collections, I try to help people get the most out of small collections. This sort of thing is probably not possible in most fields outside of languages (though I can't say I've given it any real thought).

For example, the standard tier in Phrasing is 40 new Expressions per month, i.e. 480 a year. Since each Expression introduces several words, this should result in 2,000-3,500 words in a year, which would be a pretty breakneck pace for most learners and is considered sufficient for fluency. Of course, users can learn Expressions other users have created for free, or subscribe to higher tiers, or buy credits outright, but it's often not needed.

Indeed, Phrasing does not really use the idea of "cards"; we reconstruct pseudo-cards based on the morphemes, lemmas, and inflections found within the Expression. So "cards" are not the boundary I use.


Being easy to integrate is an underappreciated feature of FSRS.

Using decks to draw semantic boundaries is likely overly constraining. I think we want to account for finer differences between cards. Decks are coarse, and people differ in how they use them; some people recommend having just one global deck. Notes are too fine. We explored something in between: a note capturing an idea or concept, plus an associated set of cards. It turns out it's hard to draw idea boundaries. That's why I think it's easier to relate cards by semantic embeddings or by more rigid but clearer structures, like the DAG of dependencies suggested elsewhere in this thread.
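
For what it's worth, the embedding half is easy to prototype. A minimal sketch, assuming some sentence-embedding function embed() (e.g. a sentence-transformers model; the 0.8 threshold is a made-up number):

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def related_pairs(cards, embed, threshold=0.8):
        # pairs of cards whose question embeddings are close enough to
        # matter to the scheduler (e.g. bury one right after the other)
        vecs = [embed(c["question"]) for c in cards]
        pairs = []
        for i in range(len(cards)):
            for j in range(i + 1, len(cards)):
                if cosine(vecs[i], vecs[j]) >= threshold:
                    pairs.append((cards[i]["id"], cards[j]["id"]))
        return pairs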


Ah, I totally missed this, thanks for sharing it.

Since in Anki the "note" is the editing unit, that works for some cloze deletions but not for QA cards (only for double-sided QA cards). A content-aware memory model would let you apply "disperse siblings" to any set of cards, regardless of whether they were created together in the same editing interface.
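
As a toy example of what that could look like (the similarity function and the gap are assumptions, not anything Anki exposes today):

    from datetime import timedelta

    def disperse_siblings(cards, similarity, min_gap_days=2, threshold=0.8):
        # push due dates of semantically similar cards apart, whether or
        # not they share a note; `similarity` is any content-aware function
        cards = sorted(cards, key=lambda c: c["due"])
        for i, a in enumerate(cards):
            for b in cards[i + 1:]:
                too_close = (b["due"] - a["due"]).days < min_gap_days
                if too_close and similarity(a, b) >= threshold:
                    b["due"] = a["due"] + timedelta(days=min_gap_days)
        return cards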


Yes, that reminds me of knowledge tracing and methods like 1PL-IRT.
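
For anyone unfamiliar: 1PL-IRT is the Rasch model, where the probability of a correct answer is a sigmoid of student ability minus item difficulty, and both parameters can be fit from pooled review logs. A minimal sketch:

    import math

    def p_correct(ability, difficulty):
        # Rasch / 1PL-IRT: probability the student gets the card right
        return 1.0 / (1.0 + math.exp(difficulty - ability))

    def sgd_step(ability, difficulty, correct, lr=0.1):
        # one gradient step on the log-likelihood of an observed review
        err = (1.0 if correct else 0.0) - p_correct(ability, difficulty)
        return ability + lr * err, difficulty - lr * err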

I think you can do both and get even better results. The main limitation is that the same flashcards must be studied by multiple students, which doesn't generally apply.

I also love the idea of the market, you could even extend it to evaluate/write high-quality flashcards.


> The main limitation is that the same flashcards must be studied by multiple students, which doesn't generally apply.

I think you'd only need a kernel of the same flashcards, because in my mind new cards would quickly find their position after being reviewed a few times, and might displace already well-known cards. I see the process as throwing random cards at students, seeing what's left after shaking the tree, and using that info to teach new students.

The goal, however, would definitely be a single standard but evolving set of cards that described some group of related ideas. I know that's against Supermemo/Anki gospel, but I've gotten an enormous amount of value out of engineered decks such as https://www.asiteaboutnothing.net/w_ultimate_spanish_conjuga....

> I also love the idea of the market, you could even extend it to evaluate/write high-quality flashcards.

It's been my idea to drive conversational spaced repetition with something like this.


It would be valuable for shared decks, like the one you mentioned. As far as I can tell, the majority of Anki users are medical school students or language learners. Both groups benefit from shared decks. So I think it's a good idea to pursue.

My personal interest is more in conceptual knowledge, like math, CS, history, or random blog posts and ideas. It's often the case that, on the same article, different people focus on different things, so it would be hard to collect even a small number of reviews on a flashcard you want to study.


I explored memory models for spaced repetition in my master's thesis and later built an SRS product. This post shares my thoughts on content-aware memory models.

I believe this technical shift in how SRS models the student's memory won't just improve scheduling accuracy but, more critically, will unlock better product UX and new types of SRS.


I've been playing with something similar, but far less thought out than what you have.

I have a script for it, but am basically waiting until I can run a powerful enough LLM locally to chug through it with good results.

Basically like the knowledge tree you mention towards the end, but I attempt to create a knowledge DAG by asking an LLM "does card (A) imply knowledge of card (B) or vice versa". Then I take that DAG and use it to schedule the cards in a breadth-first ordering. So, when reviewing a new deck with a lot of new cards, I'll be sure to get questions like "what was the primary cause of the Civil War" before I get questions like "who was the Confederate general who fought at Bull Run".
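
The scheduling half is straightforward once you have the DAG; a sketch of the breadth-first ordering (Kahn's algorithm, with the LLM-querying step left out):

    from collections import deque

    def bfs_order(cards, edges):
        # order cards so prerequisites come first; edge (a, b) means
        # "a implies/precedes b", i.e. learn a before b
        indegree = {c: 0 for c in cards}
        children = {c: [] for c in cards}
        for a, b in edges:
            children[a].append(b)
            indegree[b] += 1
        queue = deque(c for c in cards if indegree[c] == 0)
        order = []
        while queue:
            c = queue.popleft()
            order.append(c)
            for ch in children[c]:
                indegree[ch] -= 1
                if indegree[ch] == 0:
                    queue.append(ch)
        return order  # breadth-first topological order

    print(bfs_order(["cause of civil war", "general at Bull Run"],
                    [("cause of civil war", "general at Bull Run")]))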


I'd love to see it.

What I like about your approach is that it circumvents the data problem. You don't need a dataset with review histories and flashcard content in order to train a model.


Andy also tested this idea. You can read his notes here:

GPT-4 can probably estimate whether two flashcards are functionally equivalent

https://notes.andymatuschak.org/zJ7PMGzjcgBUoPjLUHBF9jn

GPT-4 can probably estimate whether one prompt will spoil retrieval of another

https://notes.andymatuschak.org/zK9Y15pCnRMLoxUahLCzdyc


Thanks for the write-up!

I've got a system for learning languages that does some of the things you mention. The goal is to be able to recommend content for a user to read which combines 1) an appropriate level of difficulty and 2) usefulness for learning. The idea is to have the SRS built into the system, so you just sit and read what it gives you, and reviewing old words and learning new words (according to frequency) happens automatically.

Separating the recall model from the teaching model as you say opens up loads of possibilities.

Brief introduction:

1. Identify "language building blocks" for a language; these include not just pure vocabulary but also grammar concepts and inflected forms of words, and can even include graphemes and what-not.

2. For each building block, assign a value -- normally this is the frequency of the building block within the corpus.

3. Get a corpus of selections to study. Tag them with the language building blocks. This is similar to Math Academy's approach, but while they have hundreds of math concepts, I have tens of thousands of building blocks.

4. Use a model to estimate the current difficulty of each word. (I'm using "difficulty" here as the inverse of "retrievability", for reasons that will be clear later.)

5. Estimate the delta of difficulty of each building block after being viewed. Multiply this delta by the word value to get the study value of that word.

6. For each selection, calculate the total difficulty, average difficulty, and total study value. (This is why I use "difficulty" rather than "retrievability", so that I can calculate total cognitive load of a selection.)

Now the teaching algorithm has a lot of things it can do. It can calculate a selection score that balances study value, difficulty, and repetitiveness. It can take the word with the highest study value and then look for words that contain it. It can take a specific selection that you want to read or listen to, find the most important word in that selection, and then look for things to study which reinforce that word.
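
As a rough sketch of the scoring step (heavily simplified; in the real system the difficulty and delta estimates come from the memory model, here they're just fields):

    def score_selection(blocks_in_selection, stats, max_avg_difficulty=0.5):
        # stats: block -> {"difficulty": 0..1,
        #                  "delta": expected difficulty drop after one view,
        #                  "value": corpus frequency of the block}
        total = sum(stats[b]["difficulty"] for b in blocks_in_selection)
        avg_difficulty = total / len(blocks_in_selection)
        if avg_difficulty > max_avg_difficulty:  # too much cognitive load
            return 0.0
        return sum(stats[b]["delta"] * stats[b]["value"]
                   for b in blocks_in_selection)

    def next_selection(selections, stats):
        # pick what to show next: best value for the effort
        return max(selections, key=lambda s: score_selection(s, stats))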

You mentioned computational complexity -- calculating all this from scratch certainly takes a lot, but the key thing is that each time you study something, only a handful of things change. This makes it possible to update things very efficiently using incremental computation [1].

But that does make the code quite complicated.

[1] https://en.wikipedia.org/wiki/Incremental_computing
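
Roughly, the incremental trick is an inverted index from building block to the selections containing it, so a study event only touches those selections (simplified sketch, not my actual code):

    from collections import defaultdict

    class IncrementalStudyValues:
        def __init__(self, selections):
            # selections: selection id -> set of building blocks it contains
            self.index = defaultdict(set)  # block -> selection ids
            for sid, blocks in selections.items():
                for b in blocks:
                    self.index[b].add(sid)
            self.study_value = {sid: 0.0 for sid in selections}

        def on_study(self, block, old_contribution, new_contribution):
            # a study event only re-scores selections containing this block
            delta = new_contribution - old_contribution
            for sid in self.index[block]:
                self.study_value[sid] += delta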


Interesting, I've been surprised to see how many language learning apps already include some of the ideas I've discussed in the blog post!

How far along are you in developing the system?


It started out as a side project just for myself to study Mandarin in 2019.

There's an open beta of the system ported to Biblical Greek here:

https://www.laleolanguage.com

I've got several active users without really having done any advertising; I'm working on revamping the UI and redesigning the website before I do a big push and start advertising. Most of the people using the site have learned Biblical Greek entirely through the system.

There are experimental ports to Korean and Japanese as well, but those (along with the Mandarin port) aren't public yet. The primary missing pieces are:

1. Content -- the system relies on having large amounts of high-quality content. Finding it, tagging it, and dealing with copyright will take some time.

2. On-ramp -- it works best at helping people at the intermediate level advance. But if you start at an intermediate level, it doesn't know what you already know.

Another thread I'm pursuing is exposing the algorithm via API to other language learning apps:

https://api-dev.laleolanguage.com/v1/docs

All of that needs a better funnel. I'll probably post some stuff here once I've got everything in a better state.

(If anyone reading this is interested in the API, please contact me at contact@laleolanguage.com .)


Just watched the video, great work!


Is there a way to be notified when you launch Mandarin?


Yes, I do this sometimes for math proofs.

The point of writing atomic flashcards is to prevent the loss of resolution that comes from reviewing questions about wholes (sentences in your case). Mind that atomic does not mean it has to be about details: I usually create flashcards for each abstraction, that is, besides asking for details, I also write a question for the full sentence. This is one way to prevent the loss of resolution, but sometimes it is time-consuming to write all those flashcards (think of a long proof). Another way is to write down the answer pen-and-paper; that way you are forced to focus on the details, not just the big picture: no loss of resolution.

