At this context length and temperature I imagine it diverges quickly, but it could still be cool to see a giant tree of the diverging paths, or an n-gram similarity score between them.
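For what I mean by the n-gram comparison, something like this minimal sketch (whitespace tokenization, n=3, and the two example continuations are all placeholders, not real model output):

```python
# Compare two sampled continuations by the Jaccard overlap of their word n-grams.
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_similarity(a: str, b: str, n: int = 3) -> float:
    """1.0 = identical n-gram sets, 0.0 = fully diverged."""
    na, nb = ngrams(a, n), ngrams(b, n)
    if not na and not nb:
        return 1.0
    return len(na & nb) / len(na | nb)

# Two hypothetical continuations sampled from the same prompt:
path_a = "the cat sat on the mat and watched the rain"
path_b = "the cat sat on the windowsill and watched the birds"
print(ngram_similarity(path_a, path_b))
```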
So basically it’s a transcription and then sequential classification problem. I’m glad this is working for them. I’d rather nurses attend to patients than to paperwork.
Maybe they’re just using 4o transcribe, but I do wonder if they’ve fine-tuned it for medical terminology.
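Something like this rough pipeline is what I'm picturing, using the OpenAI SDK; the model names, chart labels, prompt, and file name are guesses for illustration, not anything they've described:

```python
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the nurse's dictation (hypothetical audio file).
with open("shift_note.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # or a 4o-based transcription model
        file=audio,
    ).text

# 2. Sequentially classify each sentence into a chart section (made-up labels).
labels = ["vitals", "medication", "observation", "plan", "other"]
for sentence in transcript.split(". "):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Classify the sentence into one of: {', '.join(labels)}. "
                        "Reply with the label only."},
            {"role": "user", "content": sentence},
        ],
    )
    print(resp.choices[0].message.content, "->", sentence)
```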
> So my dystopian prediction for 2031 is that if that form of AGI has come to pass it will be accompanied by extraordinarily bad economic outcomes and mass civil unrest.
I guess six years is the longest time horizon in the question, but that "if" around AGI and its impact does a lot of work. Maybe this assumes it'll happen sooner? Or is it like the AI art prediction, where the six-year horizon is just long enough that we'll find out whether we're headed toward AGI?
1. _Actually_, not practicing a talk is insane to me unless you're a regular speaker. From a cold start, it takes me 10x as long to prep a talk as it does to actually give it.
2. It's a mix of ego and motivation. When the work is going out to the public, for me it's the fear that people will end up viewing my work the way they'd view a Neil Breen film.
To be fair to word2vec (or rather, word embeddings in general), I think both require a fair amount of sentence context.
On a semi-related note, one of the reasons I haven't tackled smells yet is that so much writing about smell is perfume/cologne marketing speak. Asking gpt-4o for lists of smells gets you things like "the smell of jasmine and tuberose [...] evokes the mystery and elegance of a moonlit garden". I'd hope modern models understand that this is nonsense, but I can imagine a word2vec model ending up with bizarre associations.
Ah, well good to know. I need to read more. Thanks for taking a look.
When you refer to averaging embeddings together, do you mean averaging a bunch of sentences/words for "male" to get a general concept vector, or do you mean averaging two different words, like "royal" and "adult male", to get the combined concept, say "king"?
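To make the question concrete, here's roughly what I mean by the two readings, using gensim word vectors. The vector file, word choices, and the idea that the second average lands anywhere near "king" are all assumptions on my part:

```python
import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load("word_vectors.kv")  # hypothetical pretrained vectors

# Reading 1: average several surface forms of one concept to get a
# general "male" concept vector.
male_concept = np.mean([wv["male"], wv["man"], wv["masculine"]], axis=0)

# Reading 2: average two different concepts to approximate their combination,
# e.g. "royal" + "man" ~ "king".
combined = np.mean([wv["royal"], wv["man"]], axis=0)
print(wv.similar_by_vector(combined, topn=5))
```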
I’d love to use this as a base for a math model. Let’s see how far it can get through the last 100 years of solved problems.