I think I'm misremembering. I read through some of the introductory material in the second edition of his book and found it less critical than I recalled.
But in some places, it definitely comes across as hostile (e.g. footnote 107).
Also, the sentence "Bayesian probability is a very general approach to probability, and it includes as a special case another important approach, the frequentist approach" is pretty funny. I know the exact technical result he's referring to, but it's clearly wrong to gloss it like that.
He does mention consistency once, on page 221, but (unconvincingly) hand-waves away concerns about it. (There are large-N regimes that aren't N = infinity...)
Honestly, I think it is a little hostile. Not towards frequentism directly, but towards the misuse of frequentist methods in science. He works in ecology, and I think he comes across a bunch of crap all the time. He talks at length about the statistical crisis in science, and I can't really blame him.
But I could see how someone might take this as an attack on the methods themselves.
I agree. The golem is presented as an analogue for any statistical inference procedure: powerful but ultimately dumb, in the sense that it won't think for you. That's the major theme of the book, in my opinion: you have to do the thinking yourself, and not rely on algorithms, tools, machines, or golems to do it for you.
Second, in the lectures he said that he uses frequentist techniques all the time and that it's often worth looking at a problem from each perspective.
I interpreted it as his problem being not with the methods themselves, but with how they are commonly used in science. To me, this made a lot of sense.