It means that the correctness properties that were formally proved are not necessarily what you would intuitively understand as "correct". For example, you can formally prove that a buggy factorial(n) always outputs (n - 1)! by incorrectly defining what you want to prove. The function behaves according to what you proved; it's just not the expected behaviour.
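A toy sketch of this in Lean 4 (fac, buggyFac and the theorem name are just illustrative, not from any real codebase):

    -- Intended behaviour: fac n = n!
    def fac : Nat → Nat
      | 0     => 1
      | n + 1 => (n + 1) * fac n

    -- Buggy "factorial": for n ≥ 1 it returns (n - 1)! instead of n!.
    def buggyFac : Nat → Nat
      | 0     => 1
      | n + 1 => fac n   -- bug: the final multiplication by (n + 1) is missing

    -- The proof goes through, because the statement was (mis)written to match
    -- the bug rather than the intended behaviour of factorial.
    theorem buggyFac_spec (n : Nat) : buggyFac (n + 1) = fac n := rfl

The prover is perfectly happy; the mistake lives in the specification, not in the proof.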
More pointedly: "correct" is not a formal property of a program, so you can't actually prove it in the first place. Rather, you have proved some actual formal properties of the program (e.g. always terminates, does not branch on cryptographic secret data, etc.[0]), which may or may not have any relevance to its actual intended behaviour.
0: Strictly speaking, even those are only English-language descriptions of formal properties, and you still need to worry about whether the formalization actually means what you think it means.
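As a toy Lean 4 illustration of even that caveat (sumTo is just an illustrative name): "always terminates" shows up in a proof assistant as an obligation discharged at definition time, and that formalization is not automatically the same thing as the English sentence you had in mind.

    -- "Always terminates" as a formal property: Lean accepts this definition
    -- only because it can see the recursion is structural (the argument
    -- strictly decreases). Whether that matches the informal claim you cared
    -- about (say, "finishes on real hardware within my memory budget") is a
    -- separate question.
    def sumTo : Nat → Nat
      | 0     => 0
      | n + 1 => (n + 1) + sumTo n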
It's a joke. You'd expect a proof of correctness to be the Holy Grail: nothing could be wrong, since it's proved correct. And yet, after running it, you could still find a bug in it, because you may have made a mistake in your proof.
To the other comments here I have to add: proofs (in math, science, computer science, any subject area...) never reflect absolute reality, but rather your view of reality, or the parts of reality you have chosen to model.
So the reality of your "proven" program will never 100% match the reality of it running on actual hardware, or in a software context other than the one you anticipated. Let's call this the Hacker's Postulate...
As for math specifically, I think this is a very deep and interesting philosophical tangent to go down.