pavelboyko's comments

"Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. [...] The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time."


This is the best thing I discovered in this thread! Please do not give up on this. The idea closely reminded me of Ello (https://www.ycombinator.com/companies/ello), which was started with custom speech-to-text models trained to understand kids. You're doing a similar thing but for an even more underserved niche. This thing could be life-changing if you manage to navigate marketing in the niche.


Thanks Pavel! For now I don't have the energy to push through the marketing side. I'm open to collaborating with anyone who can help with that.


I mentored junior SWE and CS students for years, and now using Claude as a coding assistant feels very similar. Yesterday, it suggested implementing a JSON parser from scratch in C to avoid a dependency -- and, unsurprisingly, the code didn’t work. Two main differences stand out: 1) the LLM doesn’t learn from corrections (at least not directly), and 2) the feedback loop is seconds instead of days. This speed is so convenient that it makes hiring junior SWEs seem almost pointless, though I sometimes wonder where we’ll find mid-level and senior developers tomorrow if we stop hiring juniors today.


Does speed matter when it's not getting better and learning from corrections? I think I'd rather give someone a problem and have them come back with something that works in a couple days (answering a question here or there), rather than spend my time doing it myself because I'm getting fast, but wrong, results that aren't improving from the AI.

> though I sometimes wonder where we’ll find mid-level and senior developers tomorrow if we stop hiring juniors today.

This is also a key point, even though there is a lot of short-term thinking these days, since people don't stick with companies like they used to. As someone who has been with my company for close to 20 years, I can say that making sure things can still run once you leave is important from a business perspective.

Training isn't about today, it's about tomorrow. I've trained a lot of people, and doing it myself would always be faster in the moment. But it's about making the team better and making sure more people have more skill, to reduce single points of failure and ensure business continuity over the long-term. Not all of it pays off, but when it does, it pays off big.


Years of experience don't correlate with being a good developer either. I've seen senior devs using AI to try to solve impossible problems, for example asking it how to store an API key client-side without leaking it...


Please consider adding Simplified English as an output language option, preferably with a level, e.g., A2, B1, etc. This way, I can adjust the language complexity to my kids' level and then gradually remove the crutches as they improve in English.


Yes! I love this!

So you'd be translating English to Simplified English? Or are you talking from another source language?

I've already been playing with this concept w.r.t. books:

I take a non-fiction book. I'll have an LLM translate it with a specific audience in mind (say, a 7 year old girl with a certain background), explaining concepts and words that are likely unknown to that audience. And then converting the whole thing into an audiobook. Optional parental controls built in ("exclude violence", etc.). Nowhere near showtime, though.

Another thing I'd love to work on is filtering existing content. There are millions of videos on YouTube. Right now, finding quality stuff that's fun to watch with my kid depends a lot on dumb luck. But what if I could filter by topic (semantic whitelist/blacklist, i.e. not keyword dependent), personality traits (OCEAN, MBTI), values (e.g. "curiosity") and language (reading level, vocabulary, words per minute, etc.)? I'd love that.
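A semantic whitelist/blacklist like that could, in principle, be scored as similarity against topic descriptions rather than keyword matches. Here's a minimal Python toy of the idea; the `embed` function is a bag-of-words stand-in for a real sentence-embedding model, and the function names and threshold are made up for illustration:

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real sentence-embedding model:
    a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def passes_filter(title, whitelist, blacklist, threshold=0.3):
    """Keep a video if it is similar to some whitelisted topic
    and not similar to any blacklisted one."""
    t = embed(title)
    ok = any(cosine(t, embed(w)) >= threshold for w in whitelist)
    bad = any(cosine(t, embed(b)) >= threshold for b in blacklist)
    return ok and not bad
```

With a real embedding model, "not keyword dependent" falls out for free, since paraphrases of a blacklisted topic land near it in embedding space even with zero word overlap.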


I'd love what they suggested as well, for other languages. I'm working on improving my French (and occasionally German). I'm at a stage where I can follow some French shows reasonably well if they're not speaking too fast, not using a particularly difficult accent, and not using too much slang. (One of the first French phrases my French teacher in school taught us was "plus lentement, s'il vous plaît" - "slower, please" - for a reason.) But that's limiting, so I'm often forced to keep English subtitles on, and they're sometimes too much of a crutch. It doesn't help that my hearing isn't what it was.

Being able to "step down" the difficulty so that I can either turn off subtitles entirely or rely on French subtitles, or even mix "difficult speech" with "simple subtitles" or vice versa, seems like it'd be very useful in getting over that hump faster.


I have the same experience. I'm sure they are constantly finetuning the model on real user chats, and it is starting to understand low-effort "on the go" prompts better and better.


Spaced repetition, especially when using tools like Anki, is effective for memorizing facts. However, memorization represents the most basic level of learning objectives, see e.g. [1] and [2]. Are there any recommended tools for practicing more advanced levels of knowledge, such as relational analysis, synthesis, and critical evaluation?

[1] https://en.m.wikipedia.org/wiki/Bloom%27s_taxonomy

[2] https://en.m.wikipedia.org/wiki/Structure_of_observed_learni...


I think you are minimizing something that is very important.

Understanding deeply and completely any topic is much easier when you understand deeply and completely the fundamental components and concepts--which starts with memorization.


This idea was the foundation for the Quantum Country book [0], which was essentially an experiment in combining reading with spaced repetition learning to let you do just that: understand a topic deeply.

Anyone interested in the theoretical underpinnings of the idea can read about it in the blog post [1] the co-authors wrote. For anyone who objects to the idea that rote memorisation can aid learning rather than simply letting you mechanically repeat facts, you should read the paragraph "How important is memory, anyway?" [2].

[0]: https://quantum.country/ [1]: https://numinous.productions/ttft/ [2]: https://numinous.productions/ttft/#how-important-is-memory


Indeed. Having good memorization of the "building blocks" of whatever higher-minded thing you're looking into allows a lot more natural synthesis of ideas.

It also keeps you from having to interrupt your flow to stop and go look something up.


Yeah, I thought that was studied, but perhaps not. I.e., the knowledge we retain helps us formalize larger, more complex thoughts. Perhaps facts are too small a unit? Though I don't see why you couldn't also use spaced repetition to memorize larger relationships.


The best tool for that kind of thing is developing good note-taking skills.

When I'm doing language study, for example, the backbone of my memory reinforcement strategy is a plain old pen-and-paper notebook. Anki is only used for specific, targeted reinforcement.

Most of my time with Anki these days is spent learning Chinese characters. That's arguably an ideal use case for brute-forcing with SRS, and that is indeed a very popular way to learn them. But I prefer to start with good note-taking there, too. I keep a notebook where I write down new characters and make some notes about their composition, etymology, and the nature of any relationship they might have with other characters that have a similar appearance or show up as components in this new one. IME even the simple act of physically jotting down a handwritten note to not confuse 买 with 卖 or 找 with 我 is worth some large number of flashcard repetitions all by itself.

I also create cards for hanzi in Anki, but typically only after I've already encountered it a few times in my reading and it's starting to feel familiar. Leech cards are a huge waste of time when tackling a large subject, so I don't really like to add a card to my deck until I'm reasonably confident that I won't be hitting the "again" button on it more than once or twice, if ever.


Is there something that you would personally recommend in order to develop better note taking skills?


Read about different note-taking methods, try them, and decide which one you like best.


You can use incremental reading, which is built on top of spaced repetition. Lots of people have invented it independently [1] and it works amazingly! Once you get the hang of it, it changes the way you think about learning stuff.

[1] https://supermemo.guru/wiki/Michael_Nielsen_re-discovers_inc...


You can use Anki to learn subjects more deeply - it's just harder.

Check out this experimental work to learn Quantum Physics using SRS: https://quantum.country/


Immersion is the only "tool" I know that tries to make the next leap formalized. For language learning at least, saturating yourself with input lets the brain build the deep relational maps it needs to use language, and no amount of rote learning of arbitrary grammar and vocab can replace that process.


Supermemo (an earlier-established piece of Anki-adjacent software) and its fans are big pushers of something called "Incremental Reading":

https://supermemo.guru/wiki/Incremental_reading

https://www.help.supermemo.org/wiki/Incremental_reading


Initial deck: simple facts. More advanced decks, after simple facts are memorized: concept-based.

Example:

Front: Rotator cuff tear - diagnosis? Why would this make sense?

Back: MRI is generally better for soft tissue; you wouldn't use a CT scan, which is much better for bone. Remember that X-rays / CTs generally work by shooting high-frequency electromagnetic radiation ("X-rays") at tissue - if the tissue contains dense elements like the calcium in bone, the X-rays are absorbed by those dense elements rather than passing through to the detector. This is why X-rays and CTs are good for detecting dense things like blood (iron in heme is dense) and bone (calcium is dense), and why we use contrast (barium and iodine are dense elements).


Do doctors keep this much in their heads? I would have this in a notes system that I could recover by searching for "rotator cuff" during the appointment. I know I'd never remember it all, but maybe with Anki I would.


If this is your field, it's really not any different from:

Front: Computer noise: top differential diagnosis? How does this make sense?

Back: Check the fans for dust. Plastic...static electricity...etc

The point is that it's not necessary to review individual facts anymore because the concepts require their utilization, so it's just "common sense" knowledge to physicians. Of course, the knowledge from one physician to the next can vary greatly.


I work in a university hospital as a neuroradiologist. I've used Anki extensively during my studies and my specialist training. Now I use it to memorize thousands of image diagnosis patterns and differential diagnosis. 'You can't diagnose (or see) what you don't know', which means using an extensive note system is insufficient. I recommend Anki to all the radiologists (and doctors in general) training in our institute.


I think the closest thing would be concept mapping software or knowledgebase software like Obsidian. But that's more about storing concepts and not so much about learning concepts.

If that software existed, it would be incredible.


Obsidian has a spaced repetition community plugin that I use when I write documentation. I add a flashcard for everything that I want to learn.

So you just put #flashcards inline (or whatever tag you tell it to look for) at the bottom of your file, and then:

  This is a question::this is the answer.
  This is another question::this is another answer.

I run through my cards about 30 minutes before I start work. It works very well.


Going to school.


techniques >> tools

Work on encoding information, understand things deeply, relate concepts, and then create your cards.

For language learning, for instance, this means that if you have 10 new words you want to learn, you create, say, 15 sentences so that each sentence contains 2 of those words and each word appears in 3 sentences. Then you run them by, e.g., a tutor to validate them, and add them to your Anki deck.

Now you will only review each sentence 5-8 times, and you can do it very fast.

It doesn't need to be exactly like that, but you get the idea.
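That 10-words/15-sentences arrangement is a small combinatorial design: each sentence covers 2 words and each word shows up in 3 sentences (15 × 2 = 10 × 3). A rough Python sketch of one way to generate the pairings; the ring-plus-diameters layout is my own choice for illustration, not anything the technique prescribes:

```python
from collections import Counter

def pair_words(words):
    """Pair an even number of words into two-word sentence slots so
    that each word lands in exactly 3 slots: a ring (each word paired
    with its neighbor on both sides, degree 2) plus diameters (each
    word paired with the word opposite it, degree 1)."""
    n = len(words)
    assert n % 2 == 0, "needs an even number of words"
    pairs = [(words[i], words[(i + 1) % n]) for i in range(n)]       # ring
    pairs += [(words[i], words[i + n // 2]) for i in range(n // 2)]  # diameters
    return pairs

words = [f"word{i}" for i in range(10)]
slots = pair_words(words)                          # 15 two-word slots
counts = Counter(w for pair in slots for w in pair)
# every word appears in exactly 3 of the 15 sentence slots
```

Each pair then becomes the skeleton of one sentence to write and validate.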


Looks great! On a related note, I developed a similar free tool [1] designed for K12 teachers, primarily focused on curation and discovery of educational videos for classroom use.

1. https://hulahoop.ai


Well, if I want my universe to be manageable (don't ask me why), introducing the hierarchy of scales from the beginning is a natural solution.


This reminds me of a story that an old professor of theoretical physics told me. In the early nineties, he left the former Soviet Union to teach physics at one of the top universities in the United States. There he found that American students were fantastically good at solving all his standard integration problems. It quickly became clear that the students were using the then-new program for symbolic computation, Mathematica. So our professor also mastered Mathematica and spent half his nights hunting for integrals it still could not calculate, to use in assignments.

I use Copilot every day and I can assure you it makes a lot of mistakes. I think that, at least in the short term, CS teachers will still find assignments where it makes mistakes.


The fundamental equations for QED (the electromagnetic force) and QCD (the strong force) are similar, but the solutions are radically different because gluons interact with themselves in a peculiar way. And we only know how to solve the QCD equations at short distances, where the force is weak. Solving QCD at large distances is at the heart of one of the unsolved "Millennium Problems" [1].

[1] https://www.claymath.org/millennium-problems/yang%E2%80%93mi...
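The self-interaction shows up directly in the field-strength tensors. In standard textbook notation (schematic, not specific to the prize problem's statement):

```latex
% QED: the photon field strength is linear in the gauge field A_\mu,
% so photons are neutral and do not couple to each other:
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu

% QCD: the gluon field strength picks up a term quadratic in the
% gauge field (f^{abc} are the SU(3) structure constants), which
% produces three- and four-gluon vertices, i.e. gluon self-interaction:
G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu
             + g\, f^{abc} A^b_\mu A^c_\nu
```

That extra quadratic term is what makes the QCD equations nonlinear even before any matter fields enter.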

