Honestly, I haven't worked on cost reduction yet. I have a backlog of ideas, but I haven't gotten to any of them.
One idea is to use other models to shorten the text by stripping out filler words. I estimate this would cut the length of the text (and thus the GPT cost) by about 30%.
My feeling is that the model handles this much better than people do. If you can still understand a sentence with some of the conjunctions removed, the model will understand it 100%. And I think any losses get smoothed out over a transcript of 150,000 characters (that's the average size of a podcast).
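As a rough illustration of the compression step (not the actual pipeline, which would likely use another model to decide what to drop), here is a minimal Python sketch that strips a hypothetical list of filler phrases from a transcript before it is sent to GPT:

```python
import re

# Hypothetical list of filler phrases; a real pipeline might use a cheaper
# model to decide which words are safe to drop.
FILLER_PATTERNS = [
    r"\byou know\b", r"\bi mean\b", r"\bsort of\b", r"\bkind of\b",
    r"\bbasically\b", r"\bactually\b", r"\bwell,\s*", r"\blike,\s*",
]

def compress_transcript(text: str) -> str:
    """Strip filler phrases so the text sent to GPT is shorter (and cheaper)."""
    for pattern in FILLER_PATTERNS:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    # Collapse the whitespace left behind and trim stray punctuation.
    return re.sub(r"\s{2,}", " ", text).strip(" ,")

if __name__ == "__main__":
    sample = ("Well, you know, I mean the transcript is basically "
              "kind of long, but the model understands it anyway.")
    shorter = compress_transcript(sample)
    print(shorter)
    print(f"Saved {1 - len(shorter) / len(sample):.0%} of the characters")
```

The 30% figure above is my estimate, not something this toy example guarantees; the actual savings would depend on how aggressive the filtering is and how noisy the source transcript is.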