
Depending on what you mean by "the average developer," 1000x is plausible. If you think in terms of reliability, it's the difference between a four-nines engineer and a one-nine engineer: the guy who pushes untested code that brings the site down for a day versus the woman who has to roll back her code once in five years because she made a fencepost error. That is one way such variances in developer "productivity" get measured by an organization, especially one that does not consider itself a technology company.
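
To put rough numbers on the nines (a back-of-the-envelope sketch in Python; the availability figures are the standard definitions, not anything from this thread):

    # Downtime allowed per year at each availability level.
    HOURS_PER_YEAR = 24 * 365  # 8,760

    for label, availability in [("one nine", 0.9), ("four nines", 0.9999)]:
        downtime = (1 - availability) * HOURS_PER_YEAR
        print(f"{label}: {downtime:,.2f} hours of downtime/year")

    # one nine:    876.00 hours/year (~36.5 days)
    # four nines:    0.88 hours/year (~53 minutes)
    # Ratio of unavailability: 0.1 / 0.0001 = 1000x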

Think of floor personnel in stores. Some start off several steps behind because they take a lot of sick days, come in right at the start of their shift (so they end up starting "late"), and take dozens of smoke breaks throughout the day without clocking out. They may chisel an extra minute or two from every break even when the store is busy. On top of this, they may drag their way through the workday. You may need three to five of these people to do the job of one, and then you need someone to manage them.

They're going to be significantly less productive than the scarily chipper go-getter who wants to be store director, gets in early to start work on time, skips breaks or takes them only when things are slow, is very upbeat and engages customers, and offers to take extra shifts to help out. You can leave this person alone in the store and they'll keep things moving along.

Measuring productivity is a difficult science. In some environments, it's screwing up less than other people. In some, it's a simple measure of something silly, such as function points. Scientific Management has seeded a lot of bad ideas in managers' minds about measuring the value of individual contributors.



I'm skeptical that any engineer can independently be responsible for four nines. High availability (HA) is a big-picture deal. One engineer can break it, but one engineer can't make it.

Put differently, a good HA organization does not need 1000x engineers; a good system ensures even garden-variety engineers will deliver.

There are lenses through which you can look and say "Engineer A was 1000x as productive as Engineer B", but those lenses are things like leveraged work, which is not what people are thinking of when they talk about "rockstar programmers".

When you say someone is 1000x as productive, that means they sit down and do roughly four years of work (1,000 eight-hour days) in 8 hours. (Unless the average developer contributes negative net productivity.)


"When you say someone is 1000x as productive, that means that they sit down and do 3 years of work in 8 hours. (Unless the average developer contributes negative net productivity)"

Productivity is value to the organization. If it's an e-commerce business, Mr. One Nine costs the company 1000x as much in lost sales as Ms. Four Nines, because his site is down 1000x as often.

Your skepticism is noted. As I've said, sometimes productivity is measured as not screwing up.

Would you claim a developer who replaced 1,000 lines of messy, poorly written code with a 10-line implementation of a more efficient algorithm is less productive than the person who wrote the 1,000-line mess?
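
As a purely hypothetical illustration of that kind of rewrite (the scenario and function are invented, not from this thread): imagine a sprawling nested-loop duplicate check collapsed into a few lines with a set.

    # Hypothetical: an O(n^2) duplicate check, sprawled across many lines
    # of special cases, replaced by a short O(n) version using a set.
    def has_duplicates(items):
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

    assert has_duplicates([1, 2, 3, 2])
    assert not has_duplicates(["a", "b", "c"])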


sometimes productivity is measured as not screwing up

At which point I guess we get down to what an average developer is. Does the average developer make such big screw-ups that simply not screwing up is a 100x increase in performance?

I guess when I think "average", I'm not thinking about what is actually the measured average, but more like "acceptable competency". Never destroys shit, but also never advances the project a month with something clever. The sort of standard grade you would hope your rank-and-file would be made of.

What do lines of code really mean? If the messy code has a lower bug rate and is easier to debug, which is entirely possible if the 10-line solution is painfully elegant, it is the more productive of the two. Remember Kernighan's old quote about how it takes twice the cleverness to debug code as it does to write it...


The average developer, in my mind, is a person right in the middle of the group of people who can maintain a position as a "programmer" or "developer" or any equivalent terms.

You've made a number of assumptions that I was trying to squash.

Most developers do not work on "projects." They're not doing startups or even working for software companies. They're anyone from the guy maintaining a FoxPro database for a dry-cleaning chain to a jedi ninja who poops out better, tighter, more inventive code than the rest of us can dream of.

How do you measure someone's productivity in an environment where they write or patch code based on their manager coming around and asking them to make tiny changes or write reports? As an analogy, how would you measure the productivity of the "hero" of Office Space? He turns two-character dates into four-character dates. He's a programmer, but he's not actually "producing" anything. He's keeping the world from ending in 2000! His manager would probably count his productivity as how many lines/files/whatever he updates in a given period of time.
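
For the curious, that Office Space-style fix amounts to something like this (a minimal sketch; the pivot year and the function name are my own assumptions, not anything from the movie or the thread):

    # Widen a two-digit year to four digits using a pivot window.
    PIVOT = 70  # assumption: 70-99 -> 19xx, 00-69 -> 20xx

    def widen_year(yy: int) -> int:
        return 1900 + yy if yy >= PIVOT else 2000 + yy

    assert widen_year(99) == 1999
    assert widen_year(69) == 2069
    assert widen_year(5) == 2005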

The lines-of-code metric was an attempt to illustrate that productivity is difficult to measure without a goal-oriented context. You see that measuring productivity is not just "lines of code," but you also missed that the developer spent time refactoring 1,000 lines of code. Is that productive? Measured on that day, the work done was zero, or negative; productivity can only be measured in the medium to long term. Yet I hope we would both agree that it was a productive effort.


Regarding 1,000 LOC vs. 10 LOC, I still maintain that I'd have to see it. Most of the time you'll be right, but such a refactoring is not necessarily productive as a rule.


"a good system ensures even garden variety engineers will deliver."

I assume this points to the idea that test procedures can replace individual expertise? My experience is that testing systems get bypassed by "garden-variety" engineers (mostly with PM/manager support), usually in exactly the cases where you would least want that.

Even when that is not the case, tests, no matter the coverage, don't cover everything. The demands for 100% test coverage that are in vogue these days mean tests are usually written specifically to minimize interactions between pieces of code, when interactions are exactly what a good engineer would test for.

Of course, when tests do exercise interactions, some pieces of code (the ones that worry me) get tested 1000x over; in one case I literally iterated over all possible calls and verified constraints afterwards, almost fuzzing. Other pieces just don't get attention. A typical example is an out-of-range check at the beginning of a function: I would test for the same range the code checks, with potentially the same mistake in both the production number and the tested number. And if the consequence of a function screwing up is a slight UI aberration, well, I barely test UI code at all (MVC for the app, MC for the tests).

There are also pieces of code I know are potentially not thread-safe. Just to make sure big screwups get caught, I fire off the same test 1000x in parallel; if a function works entirely off the stack and return values, no such test is done.
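
The "fire the same test 1000x in parallel" trick looks roughly like this (a minimal sketch, assuming a hypothetical increment() function that mutates shared state; none of these names come from the thread):

    # Stress a non-thread-safe function by running it 1000x concurrently.
    from concurrent.futures import ThreadPoolExecutor

    counter = 0

    def increment():
        global counter
        counter += 1  # read-modify-write: not atomic, so updates can be lost

    with ThreadPoolExecutor(max_workers=32) as pool:
        for _ in range(1000):
            pool.submit(increment)

    # If increment() races, counter can end up below 1000.
    print(counter)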


Good points.

Most requirements documents do not cover error or exception handling, so when developers write their test cases (in the very few cases when they do so), they usually do not write negative test cases.
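
A negative test case is just a test that asserts bad input is rejected, rather than exercising the happy path. A minimal sketch using pytest (the function and values are invented for illustration):

    import pytest

    def parse_age(value: str) -> int:
        age = int(value)
        if age < 0:
            raise ValueError("age must be non-negative")
        return age

    def test_parse_age_rejects_negative():
        # Negative test: assert the error path, not the happy path.
        with pytest.raises(ValueError):
            parse_age("-1")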

Obviously, non-functional requirements (NFRs) need to cover those, but in many organizations, they're seen as being "in the way of getting work done," just like comprehensive automated tests.



