The linked GitHub page mentions “Language Learning Models”, so GP is saying that it seems like the authors of this project wrote the whole thing while not knowing what LLM actually stands for.
It's not exactly what you asked for, being a single article that collects various pieces of advice, but the advice is concretely illustrated in an arguably useful fashion, and I can at least attest that I've found it useful:
Ol' Gen-Xer here - oh, we do remember Flash-only sites... I even had to build one or two of them myself, of course after trying, ultimately in vain, to explain to the client just how bad an idea it was. Though this was still before the security issues, before the Adobe years.
It kinda looks, to my eyes at least, like it has a matte finish in the photos, so my thinking is that it might have an almost shark-skin roughness? Though you'd imagine surely not, given the bit of extra drag that would add when factored over the whole airframe... maybe?
But then again, at those speeds I wonder whether there would even be a difference, or whether boundary layer effects take over, or something of that nature.
Disclaimer: am obviously not an aerospace engineer :)
Actually, one of the problems currently faced in that space, especially for applications involving factual answers, is "hallucinations" (i.e., essentially, as I understand it, the level of "creativity" in responses). Such "creativity" may well be quite suited to finding those various unusual edge cases.
You know this common trope in fiction and jokes, that advanced computers & AIs can be easily defeated by feeding them a logical inconsistency?
Guess what: we now have a chatbot AI that not only doesn't mind nonsense input, it will happily produce logically inconsistent statements on its own. And it can do so so sneakily and convincingly that it's the human operator who may end up believing those inconsistent statements without even realizing it, and possibly land in serious trouble some time later.