Even with git-prime reducing the address space by a few orders of magnitude, there's still (effectively) zero chance for collision. The difference between 10^-29 and 10^-27 isn't that great in practice.
Actually there are π(N) ~ N/ln(N) primes less than N per the Prime Number Theorem, so π(2^160) ~ 2^153.2 - this only drops about 7 bits. So that does increase the odds of collision, but much less than I expected!
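The bit loss is easy to check numerically; a quick sketch of the π(N) ~ N/ln(N) estimate (the variable names are mine):

```typescript
// Estimate how many bits of address space are lost by restricting
// 160-bit values to primes, via the Prime Number Theorem:
// pi(N) ~ N / ln(N), so log2(pi(2^160)) ~ 160 - log2(160 * ln 2).
const bits = 160;
const lnN = bits * Math.LN2;              // ln(2^160) ~ 110.9
const primeBits = bits - Math.log2(lnN);  // ~ 153.2
const bitsLost = bits - primeBits;        // ~ 6.8
console.log(primeBits.toFixed(1), bitsLost.toFixed(1));
```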
I view current LLMs as new kinds of search engines. Ones where you have to re-verify their responses, but on the other hand can answer long and vague queries.
I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.
I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.
Exactly. Using them to actually “generate content” is a surefire way to turn your brain into garbage, along with whatever you “produce” - but they do seem to have fulfilled Google’s dream of making the Star Trek computer a reality.
I only reached the 100s back in the day. What amazed me was that it seemed like every problem had a paper solution, when it would take any computer algorithm thousands or millions of computations to solve the same problem.
I played around with some of the easier problems, my favorite was a couple times when starting with the obvious brute force solution in code and then refactoring and simplifying it iteratively ended up getting me the paper solution.
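As an illustration (not necessarily one of my actual solutions), Project Euler's first problem - sum of multiples of 3 or 5 below 1000 - shows the shape of that refactoring, from a brute-force loop to the paper formula:

```typescript
// Brute force: test every number below the limit.
function bruteForce(limit: number): number {
  let sum = 0;
  for (let n = 1; n < limit; n++) {
    if (n % 3 === 0 || n % 5 === 0) sum += n;
  }
  return sum;
}

// "Paper" version: inclusion-exclusion over arithmetic series.
// Sum of multiples of k below limit = k * m * (m + 1) / 2,
// where m = floor((limit - 1) / k).
function seriesSum(k: number, limit: number): number {
  const m = Math.floor((limit - 1) / k);
  return (k * m * (m + 1)) / 2;
}

function closedForm(limit: number): number {
  // Subtract multiples of 15, which were counted twice.
  return seriesSum(3, limit) + seriesSum(5, limit) - seriesSum(15, limit);
}
```

The closed form answers in constant time what the loop needed a thousand iterations for, which is the "every problem has a paper solution" effect in miniature.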
Keywords should definitely be highlighted. It's part of the structure of the code. Being highlighted makes it very quick to distinguish between keywords and variables and helps readability by making them easier to skim over and jump to. Maybe they could be the same color as punctuation, if number of colors is a problem.
I also like minimal themes (and light mode!) but keywords are precisely the thing I want highlighted. The "Visual Studio (Light)" theme in VSCode gets pretty close to what I want, but it still has some inconsistencies that bug me, and I haven't bothered making my own theme to fix them yet. It primarily just highlights keywords, comments, and strings.
But then you can have something like public async Task<byte[]> SomeMethod(DateTime date, int someNumber) and int is highlighted but DateTime isn't...
Sounds lovely, I'd love to hear what it's like when the number of living cells on screen controls the length of the note so it's not just a constant rhythm, even though it is hypnotizing.
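That mapping would be simple to wire up; a rough sketch of one Life step plus a population-to-note-length rule (the cell encoding and tempo constants here are entirely made up for illustration):

```typescript
// One Game of Life step over a sparse set of live cells, keyed as "x,y".
type Cell = string;

function step(live: Set<Cell>): Set<Cell> {
  // Count how many live neighbors each cell has.
  const counts = new Map<Cell, number>();
  live.forEach((cell) => {
    const [x, y] = cell.split(",").map(Number);
    for (let dx = -1; dx <= 1; dx++)
      for (let dy = -1; dy <= 1; dy++) {
        if (dx === 0 && dy === 0) continue;
        const k = `${x + dx},${y + dy}`;
        counts.set(k, (counts.get(k) ?? 0) + 1);
      }
  });
  // A cell is alive next tick with exactly 3 neighbors,
  // or 2 neighbors if it was already alive.
  const next = new Set<Cell>();
  counts.forEach((n, cell) => {
    if (n === 3 || (n === 2 && live.has(cell))) next.add(cell);
  });
  return next;
}

// More live cells on screen -> longer notes (arbitrary scaling).
function noteLengthMs(population: number): number {
  return 100 + 10 * population;
}
```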
>The discipline required to use AI tools responsibly is surprisingly difficult to maintain
I don't find that this requires discipline. AI code simply requires code review the same as anything else. I don't feel the need to let AI code in unchecked in the same way I don't feel the need to go to my pull request page one day and gleefully hit approve and merge on all of them without checking anything.
Prime numbers are a pattern; take the natural numbers - starting after 2, exclude every multiple of 2; starting after 3, exclude every multiple of 3; and so on.
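That exclusion process is exactly the Sieve of Eratosthenes; a minimal sketch:

```typescript
// Sieve of Eratosthenes: starting from each number still marked prime,
// exclude all of its multiples; whatever survives is prime.
function primesUpTo(n: number): number[] {
  const isPrime: boolean[] = new Array(n + 1).fill(true);
  isPrime[0] = isPrime[1] = false;
  for (let p = 2; p * p <= n; p++) {
    if (!isPrime[p]) continue;
    // Multiples below p * p were already excluded by smaller primes.
    for (let m = p * p; m <= n; m += p) isPrime[m] = false;
  }
  return isPrime.flatMap((prime, i) => (prime ? [i] : []));
}
```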
It repeats like this predictably. Even though it changes, the way in which it changes is also predictable. Their repetition and predictability make prime numbers a pattern.
Out of the fundamental pattern of prime numbers, higher-level patterns also appear, and studying these patterns is a whole branch of math. You can find all kinds of visualizations of these patterns, including ones linked in this thread.
It's not that you're seeing a pattern that's not there, it's that you're seeing a pattern that gradually becomes infinitely complex.
I've often thought that this (and every problem where a required manual process is tough to automatically enforce) is where an AI code reviewer could be very useful.
It's the type of thing you might add to a long checklist of things to make sure you do (or don't do) in an MR template that quickly becomes difficult, if not impossible, for MR authors and especially reviewers to reliably follow.
Tests are another example - you can check that coverage doesn't slip over time, but not that every change is tested. A human can maybe remember to check that tests exist, or even that they're good tests - and, with coverage tools well integrated into your system, that every change has a test - but not that every change is tested well, and not reliably.
AIs are great at sorting through lots of data to check for errors that a human would miss. Letting it add MR review comments - rather than letting it make whatever changes it wants - would allow a human to provide checks and balances.
So I like the idea, I'm not sure how I feel about limiting it to docs or letting it write changes itself.
I don't have a problem with needing to memoize props passed to child components for their memoization to work.
If your parent component doesn't need the optimization, you don't use it. If it does need it, your intention for using useMemo and useCallback is obvious. It doesn't inherently make your code more confusing.
The article paints it as this odd way of optimizing the component tree that creates an invisible link between the parent and child - but it's the way to prevent unnecessary renders, and for that reason I think it's pretty self-documenting. If I'm using useMemo and useCallback, it's because I am optimizing renders.
At worst it's unnecessary - which is the point of the article - but I suppose I don't care as much about having unnecessary calls to useMemo and useCallback and that's the crux of it. Even if it's not impacting my renders now, it could in the future, and I don't think it comes at much cost.
I don't think it's an egregious level of indirection either. You're moving your callbacks to the top of the same function where all of your state and props are already.
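The "invisible link" comes down to referential equality; a plain-TypeScript sketch, where shallowEqual is my stand-in for the shallow prop comparison React.memo performs (no actual React involved):

```typescript
// A memoized child skips re-rendering only when a shallow comparison of
// its props passes. A fresh inline callback defeats that comparison;
// a useCallback-cached one doesn't.
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>
): boolean {
  const ka = Object.keys(a);
  return (
    ka.length === Object.keys(b).length &&
    ka.every((k) => Object.is(a[k], b[k]))
  );
}

// Without useCallback: the parent creates a new function on every render,
// so the child's props never compare equal.
const render1 = { onClick: () => {} };
const render2 = { onClick: () => {} };

// With useCallback: the same reference survives across renders.
const cached = () => {};
const memoRender1 = { onClick: cached };
const memoRender2 = { onClick: cached };
```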
Thank you, that was the example I needed to hear to see why this could be an issue.
I will still say though, this hasn't actually happened to me in all my years of using hooks. Generally when I'm fetching when X prop changes, it's not in response to functions or objects - and if it has ever happened, it was fixed and just never caused problems.
Not to say it isn't an issue - it is - but the number and degree of issues I saw with lifecycle functions was much worse. That was with a less experienced team, so it could just be bias.