The customer pays me to make it work, not to build something that doesn't work and is over budget - but pretty.
I optimise for "make it work"; that's what the deal says.
If there's extra time, I might go to step two, which is "make it pretty". Meaning I go through the code and make sure it's all good and proper in case we need to add features later on.
100% not what Camp 1 is or does. Their #1 goal is to make it work. It is your #1 priority. So quite the opposite: Camp 2 will spin and build 100 "useful" (not) abstractions with the slickest imaginable code, doing things that make you go "OMFG, how on Earth did you come up with this, insane" - while during that development, Camp 1 shipped 37 new features for its customers.
Interesting that your first counterexample is Charlie Parker. I've been listening to a lot of Phil Schaap's Bird Flight recently (https://www.philschaapjazz.com/sections/bird-flight). It's funny to see how many of the episodes are Phil describing a recording session more or less like this:
"The Bird showed up two hours late to a three and a half hour recording session. They recorded one take each of six tracks, but the recording engineer was surprised when they started so he missed the first half of the first track. And that's how we got the five tracks on <INSERT CRITICALLY-ACCLAIMED ALBUM HERE>."
FWIW, it's not at all clear to me how this requirement would be implemented in practice: "This syntax would explicitly be limited to orphan implementations."
Maybe I'm missing something, but the compiler can tell whether an implementation is an orphan. That's how you get an error message today if you try to write one. So I don't know what difficulty you have in mind.
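To make that concrete, here's a minimal sketch (the `Meters` type is a made-up example) of what the compiler already distinguishes. Implementing a foreign trait for a local type is fine; a foreign trait for a foreign type is an orphan and is rejected today:

```rust
use std::fmt;

// A local type: implementing a foreign trait (fmt::Display) for it is
// allowed, because the type is defined in this crate.
struct Meters(f64);

impl fmt::Display for Meters {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} m", self.0)
    }
}

// An orphan impl would be a foreign trait for a foreign type, e.g.:
//
//     impl fmt::Display for Vec<f64> { ... }
//
// which the compiler rejects with E0117 ("only traits defined in the
// current crate can be implemented for arbitrary types") - the same
// check that would identify the impls the proposed syntax applies to.

fn main() {
    println!("{}", Meters(3.5)); // prints "3.5 m"
}
```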
The classic AVR instruction set does not include multiplication; you have to be targeting a device that supports AVRe+, such as an ATmega rather than an ATtiny. Try adding -mmcu=avr5 and it will show up pretty quickly. Example: https://godbolt.org/z/x951M8fn8
The new ATtiny 0/1/2-series parts are full AVRxt cores with a few instructions removed due to lack of need for large memory access. The classic terminology differentiating product lines isn't particularly useful anymore. ATtiny can multiply now.
What an incredibly on-the-nose anecdote, I love it!
The term of art for this strategy is "size to the horizon". Imagine you're looking across an open plain. The trees and rocks closer to you are bigger and you can make out more detail. The ones further away remain abstract.
You have to know exactly what to do with the things right in front of you, but you have to keep only a general awareness of that which is distant.
On the one hand, we like to encourage learning here. On the other, we prefer not to copy-paste a bunch of possibly-irrelevant content. Well, forgive me for pasting in a StackOverflow answer that may be relevant here: https://stackoverflow.com/questions/11276259/are-data-races-...
> Are "data races" and "race condition" actually the same thing in context of concurrent programming
> No, they are not the same thing. They are not a subset of one another. They are also neither the necessary, nor the sufficient condition for one another.
The really curious thing here is that the Nomicon page also describes this distinction in great detail.
I apologize if my comment came off as snark. Your comment was nothing but pasted text that omitted relevant detail, so it was not clear what the intent was. In context, to me, it did not seem illuminating. It actually seemed to be introducing confusion where there previously was none.
Data races are not possible in safe Rust. Race conditions are. The distinction is made clear in the Nomicon article, but commenters here are really muddying the waters...
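A minimal sketch of that distinction, for anyone following along (the counter example is hypothetical, not from the Nomicon): every access below is synchronized through a Mutex, so no data race is possible, yet there's a classic check-then-act race condition because each increment uses two separate lock acquisitions.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Safe Rust: no data race - every access to the counter goes through
// the Mutex. But there IS a race condition: each thread reads the value
// in one lock acquisition and writes it back in a second one, so other
// threads' increments landing in the window between them are lost.
fn racy_increment(threads: usize, iters: usize) -> u32 {
    let count = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();

    for _ in 0..threads {
        let count = Arc::clone(&count);
        handles.push(thread::spawn(move || {
            for _ in 0..iters {
                let current = *count.lock().unwrap(); // lock released here
                // another thread may increment in this window
                *count.lock().unwrap() = current + 1; // may write a stale value
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    let n = *count.lock().unwrap();
    n
}

fn main() {
    // Data-race free, yet the result can be less than 4000
    // depending on thread interleaving.
    println!("count = {}", racy_increment(4, 1000));
}
```

This compiles and runs in entirely safe Rust; it's the outcome that's nondeterministic, which is exactly the property the Nomicon's definitions are carving apart.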
Clearly there is still confusion, since we don't agree (and neither does the aforementioned poster).
I could have also belittled your comment as a "bunch of possibly-irrelevant content", since most of the content was, and still is, unnecessary snark.
But then it would have said more about my own etiquette and capability to debate objectively than about the topic at hand.
Our definition of data race seems to differ, and because you don't seem to be able to separate objective discussion from personal attacks, I'll stop here.
> I could have also belittled your comment as "bunch of possibly-irrelevant content"
That doesn't really make sense because there are other witnesses, so everybody who knows about this topic can see immediately that you're wrong and the other person is right.
Test code is code. It's as much of a burden as every other piece of code you are troubled with, so you must make it count. If you're finding it repetitive and formulaic, take that opportunity to identify the next refactoring.
Just churning out more near copies is not a good answer.
Absolutely this! I was very guilty of over-complicating test code with abstractions to reduce boilerplate, but it certainly resulted in code where you could not always tell what was being tested. And you'd end up with nonsensical tests when the next developer added tests without looking deeply at what the abstractions were doing.
I now find it best to be very explicit in each individual test about the conditions of that specific test.
> If you're finding it repetitive and formulaic, take that opportunity to identify the next refactoring.
It doesn't really matter how many helper functions you extract from your test code, in the end you have to string them together and then make assertions, and that part will always be repetitive and formulaic. If you've extracted a lot of shared code, then it might look something like "do this high-level business thing and then check that this other high-level business thing is true". But that is still going to need to be written a dozen times to cover all the test cases, and you're still going to want test names that match the test content.
There's a certain amount of repetition and formulaic structure that will never go away, and that is what Copilot is very good at.
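As an illustration of that irreducible shape (a made-up shopping-cart example, not from any real codebase): even with setup fully factored into a helper, each test still spells out its own scenario and assertion by hand.

```rust
// Hypothetical example: the setup is shared, but each test remains
// an explicit "arrange, act, assert" written out in full.

#[derive(Default)]
struct Cart {
    items: Vec<u32>, // prices in cents
}

impl Cart {
    fn add(&mut self, price: u32) {
        self.items.push(price);
    }
    fn total(&self) -> u32 {
        self.items.iter().sum()
    }
}

// Shared helper: about as factored-out as test setup gets.
fn cart_with(prices: &[u32]) -> Cart {
    let mut c = Cart::default();
    for &p in prices {
        c.add(p);
    }
    c
}

#[cfg(test)]
mod tests {
    use super::*;

    // Each test still names its scenario and states its assertion.
    #[test]
    fn empty_cart_totals_zero() {
        assert_eq!(cart_with(&[]).total(), 0);
    }

    #[test]
    fn single_item_total_is_its_price() {
        assert_eq!(cart_with(&[250]).total(), 250);
    }

    #[test]
    fn multiple_items_sum() {
        assert_eq!(cart_with(&[250, 100, 50]).total(), 400);
    }
}

fn main() {
    // Tests run via `cargo test`; main is only here to make this
    // snippet self-contained.
    println!("{}", cart_with(&[250, 100, 50]).total());
}
```

The helper removes the noise, but the repetitive "name the case, build the input, assert the result" pattern survives, and that pattern is exactly what autocomplete-style tools churn out reliably.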
LLMs are pretty good at anything that follows a pattern, even a really complex pattern. So unit tests often take a form similar to the n-shot testing we do with LLMs, a series of statements and their answers (or in the case of unit tests, a series of test names and their tests). It makes sense to me that LLMs would excel here and my own experience is that they are great at taking care of the low-hanging fruit when it comes to testing.
I agree. A very high-impact change I made for an application my team is working on was allowing easy creation of test cases from production data. We deal with almost unknowable upstream data, and being able to cheaply test something that was not working has reduced the time to find bugs tremendously.