No, you can't. There is barely any correlation between the number of new lines of code I wrote in a day and how productive I was that day.
As a senior technologist who bills his time as a consultant, I can tell you that I (correctly) get paid the most for days where I write 0 lines of code.
But also, my most valuable code was code where I spent a week thinking and writing maybe 5 lines of code a day, and my least valuable code was when my output was limited by my typing speed.
> As a senior technologist who bills his time as a consultant, I can tell you that I (correctly) get paid the most for days where I write 0 lines of code.
Maybe convince management not to rewrite their core system.
Or listen to the requirements from a new system and propose an existing tool that solves 90% of them. Or explain why the library they were going to use solves a different problem and how they'll regret choosing it.
Or talk to folks and figure out why my proposed new design for a system doesn't cover all the needs of the old one (and eventually that some iteration of it _does_).
Or sketch possible algorithms for a difficult problem on a whiteboard / notepad until I figure out a good-enough solution.
Or talk to stakeholders about their needs and concerns regarding a new system I'm helping design.
Or talk to customers. Or help prepare a pitch or customer demo.
It's very rare that a customer feels it's worth paying my rate for writing code. Though it does happen, and when it does, it's usually very interesting code with a very interesting story behind it.
Alternative proposal: Count the number of commits devs are making. Since some devs make more frequent commits than others, for each dev review some fraction of their commits at random, and rate how substantial the randomly sampled commits are. Then multiply average commit importance by number of commits to get an overall productivity score.
Thoughts? I think the biggest counterargument is that it incentivizes churn like making a commit and then reverting it later.
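To make the proposal concrete, here's a minimal sketch of how such a score could be computed. Everything here is hypothetical: `productivity_score`, the rating scale, and the toy data are my own illustration, and the "importance" ratings are assumed to come from a human reviewer looking at the sampled commits.

```python
import random

def productivity_score(commits, rate_commit, sample_fraction=0.2):
    """Average rated importance of a random sample, scaled by total commit count."""
    sample_size = max(1, int(len(commits) * sample_fraction))
    sampled = random.sample(commits, sample_size)
    avg_importance = sum(rate_commit(c) for c in sampled) / len(sampled)
    return avg_importance * len(commits)

# Toy data: each commit's "importance" is supplied by a human rater (0-5 scale).
commits_by_dev = {
    "alice": ["refactor auth", "fix typo", "add billing module"],
    "bob": ["bump version", "bump version", "bump version", "fix typo"],
}
ratings = {"refactor auth": 4, "fix typo": 1, "add billing module": 5, "bump version": 0}

for dev, commits in commits_by_dev.items():
    print(dev, round(productivity_score(commits, ratings.get), 1))
```

Note that the randomness of the sample means the score for the same dev will jump around between runs, which is another practical objection.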
Programming is the reification of decision-making processes.
Therefore, a language that allows a programmer to turn thoughts into correct code faster is a win.
The size of the code as written is only slightly interesting: almost everyone can type faster than they can think clearly. More compact notation is preferable for reducing the delay between typing and testing, but not to the point of impairing readability.
Yeah, and that has another problem: developers who code fast but don't think things through would have plenty of issues opened against their code. They would probably be fixing them as well, so their close count would be much higher than that of a programmer who made the right choices to begin with.
You could then go with feature-only tickets, but those vary in size and complexity. Now, if we used complexity as a measure (one the team of developers agrees on) and then counted the number of issues opened against that implementation, we might get a better measure of that person's contribution.
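Just to make that idea concrete, here's one way complexity and issue counts could be combined. The penalty weight and the sample numbers are purely my own assumptions for illustration; the thread doesn't specify how the two should be weighed against each other.

```python
def contribution_score(features, issue_penalty=0.5):
    """Sum of team-agreed complexity points, discounted by follow-up issues."""
    return sum(
        complexity - issue_penalty * issues
        for complexity, issues in features
    )

# Each tuple: (team-agreed complexity points, issues later opened against it)
careful_dev = [(8, 0), (5, 1)]          # fewer, better-thought-out features
fast_dev = [(3, 4), (3, 3), (3, 5)]     # more features, lots of follow-up bugs

print(contribution_score(careful_dev))  # 12.5
print(contribution_score(fast_dev))     # 3.0
```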
That said, once that was in place, even it could be gamed.
I think there is no silver bullet here, as much as we would like it to exist.