Yes and no. I've found that the order in which you give instructions matters for some models as well. With LLMs, you really need to treat them as black boxes; you can't assume one prompt will work for all of them. Honestly, in my experience, it's a lot of trial and error.
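
To make "trial and error" concrete, here's roughly the kind of throwaway harness I mean: run the same questions through two instruction orderings and compare the outputs side by side. The model name, prompt templates, and test questions below are just placeholders, not a recommendation.

```python
# Minimal sketch: A/B test two instruction orderings on the same inputs.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Same instructions, different order.
ORDER_A = "Answer in one sentence. Cite a source. Question: {q}"
ORDER_B = "Cite a source. Answer in one sentence. Question: {q}"

test_questions = [
    "What year was the transistor invented?",
    "Who proposed the Turing test?",
]

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whatever model you're testing
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Print each ordering's answer so you can eyeball which one the model follows.
for q in test_questions:
    for label, template in (("A", ORDER_A), ("B", ORDER_B)):
        print(f"[{label}] {q}\n{ask(template.format(q=q))}\n")
```

Nothing fancy, but running something like this per model is usually faster than reasoning about what the model "should" do.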