It comes from the world of systems operations: something long-lived and trusted, hence high emotional attachment (a pet), vs. something short-lived that doesn't need to be trusted, hence comparatively low emotional attachment (cattle).
For example, Bob's one-of-a-kind trusty server, from which Bob is nigh inseparable, vs. a Docker container with a version-controlled config whose instances you routinely tear down and bring up, maybe even in an automated fashion.
Here this would map to trusty aged codebases you don't touch out of fear and caution, vs. codebases you can confidently touch because the spec, the code, the tests, the tooling, and the processes are solid.
A different mapping: to Microsoft, the users' computers are cattle, but to each individual user, the computer is a pet. Which is why users keep getting mad when their pet feature gets euthanized.
Pets are projects that you toy with and keep adding new features, even when the main objective has been met.
Cattle are projects that do what they are supposed to and are left alone.
I'd much rather have Notepad fall into the cattle category.
The SMPY page is essentially an annotated bibliography: fulltext + abstract + commentary. Since the papers are of highly varied quality, it doesn't make much sense to try to put any particular confidence on it; I am convinced of some things, but definitely not others - particularly not the early papers, when they had little data and were still experimenting a lot. If I had an essay making a specific claim about SMPY then sure, but I don't really.
The fact that 'certainty' ratings don't make sense for pages like that is part of why, these days, I wouldn't have a page like that at all. An annotated bibliography is not an 'essay' and shouldn't be shoehorned into my framework meant for that kind of opinionated writing. I realized that if I was going to 'annotate' a paper, I would either have to go without, or copy-paste it all around indefinitely, which would violate DRY and be a nightmare. Long story short, https://gwern.net/doc/iq/high/smpy/index is closer to what that page should be, but it's a lot of work to sit down and convert the legacy page over to pure annotations, so, it is what it is. Maybe a LLM can do it for me soon - it seems within the ability of Claude Code.
Cultivating that passion is an art. A modern tool I've found great for letting my kids grow their math ability is the game Prodigy Math. Worth checking out - it's fun (do math to gain spellcasting ability in the game) and gently pushes the envelope of what they can do. It emails parents with details on what math problems the child didn't get right and with sample exercises to address those areas. I have no connection to them other than being a customer.
This brings back great memories of a game I played as a kid called 24. Not so much modern, just cards with four numbers that you would add, subtract, multiply and divide to get the center number. Then you would slap the card and explain. It did something to my brain as even the thought of those cards makes me smile.
I've been thinking about this for a minute, and I think if an American were to say "why", and take only the most open vowel sound from that word and put it between "k" and "m", you get a pretty decent Australian pronunciation. I am an Australian so I could be entirely wrong about how one pronounces "why".
```
C++, Linux: write an audio processing loop for ALSA
reading audio input, processing it, and then outputting
audio on ALSA devices. Include code to open and close
the ALSA devices. Wrap the code up in a class. Use
CamelCase naming for C++ methods.
Skip the explanations.
```
Run it through grok:
https://grok.com/
When I ACTUALLY wrote that code the first time, it took me about two weeks to get it right (a horrifying documentation set, with inadequate sample code).
Typically, I'll edit code like this from top to bottom to get it to conform to my preferred coding idioms. And I will, of course, submit the code to the same sort of review that I would give my own first-cut code. And the way initialization parameters are passed in needs work (a follow-on prompt would probably fix that). This is not a fire-and-forget sort of activity. Hard to say whether that code is right or not; but even if it's not, it would have saved me at least 12 days of effort.
Why did I choose that prompt? Because I have learned through use that AIs do well with these sorts of coding tasks. I'm still learning, and making new discoveries every day. Today's discovery: it is SO easy to use a SQLite database in C++ with an AI when you go at it the right way!
That relies heavily on your mental model of ALSA to write a prompt like that. For example, I believe the macOS audio stack is node-based, like PipeWire. For someone who is knowledgeable about the domain, it's easy enough to get some base output to review and iterate upon, especially if there was enough training data or you constrain the output with the context. So there's no actual time saving, because you have to take into account the time you spent learning about the domain.
That is why some people don't find AI that essential: if you have the knowledge, you already know how to find the specific part of the documentation to refresh your memory, and the time saved is minuscule.
```
Write an audio processing loop for pipewire. Wrap the code up in a
C++ class. Read audio data, process it and output through an output
port. Skip the explanations. Use CamelCase names for methods.
Bundle all the configuration options up into a single
structure.
```
Run it through Grok. I'd actually use VS Code Copilot with Claude Sonnet 4; Grok is being used so that people who do not have access to a coding AI can see what they would get if they did.
I'd use that code as a starting point despite having zero knowledge of pipewire. And probably fill in other bits using AI as the need arises. "Read the audio data, process it, output it" is hardly deep domain knowledge.
Can you set up Entra authentication with pgAdmin? I'm more of an MS SQL person so I don't know, but the security gain from this would be huge.
I would be curious what context window size would be expected when generating a ballpark 20 tokens per second using DeepSeek-R1 Q4 on this hardware.