The funny thing is that it's almost true by default. The mere fact of measuring something is often enough to drive improvements.
A couple of years ago, I installed the RunKeeper app on my phone and started tracking how long it took me to walk places. Over the course of a few months, my walking speed increased from a pokey 5 km/h (3 mph) to 8 km/h (5 mph). I didn't install the app with the goal of walking faster; it just happened as a side effect of regular feedback on how fast I was walking.
My understanding of the Hawthorne effect is that it is a temporary increase in performance due to the subject's awareness of being studied. I stopped using RunKeeper over a year ago but my default walking speed has remained much higher than it was when I started measuring it.
Except that the Hawthorne effect is poorly studied, probably doesn't exist as a distinct effect, and is useful only as a metaphor for this very "what we measure, we can fix" effect.
You do, however, have to be very careful that the "it" being improved is really the same as the "it" being measured. I've lost count of the times I've seen a well-intentioned metric end up driving exactly the wrong sort of behaviour.
(For example: I recently sat through an excellent lunchtime rant from somebody whose boss is prioritising shorter, simple-to-implement stories over longer ones with higher business value, because their team's efficiency is being judged by cycle time — the length of time between story-in and story-out.)