I used to love these kinds of articles three decades ago. Then you get a programming job with budgets and deadlines and even stupid decisions based on politics, and you hate all that at first, until through experience you realize that even poorly engineered cars can get products from point A to point B, and do so all over the world.

Free markets eventually only have time for perfect solutions. And a perfect solution according to markets is the solution that does the job for the lowest cost over time. For a website that will support a two week marketing campaign, you don't need anything talked about in this article. In fact the only responsible decision is to ignore this type of approach and just build the damn thing with someone who has a track record. And then throw it out and move on.

Based on empirical evidence, there isn't really a problem at all with the way people program. Markets have already mostly figured out the rare cases when such robustness is really needed. And it's rare. The only "right" way to program is to take as many stakeholder requirements into consideration as possible. And those requirements are rarely about program correctness.

So this article is good (although I think you're really looking at functional programming by this time in history) but first make sure a top priority is program correctness before getting into the mode suggested by the article. One final stakeholder requirement that's always a priority: you have to be able to find qualified developers, and what developers learn is based on popularity and fashion. It's a real-world constraint even if it's distasteful to the idealists and well intentioned types who write these articles.
And then, five years after you've left the company and some system inevitably collapses with nobody having a clue as to what went wrong you'd finally realize the wisdom of all that.
But it's no longer your problem.
So, please don't take it personally, but 'the well intentioned types who write these articles' tend to be the people that then get called in to clean up the mess.
And then - belatedly - the job gets done properly to keep the company in business, assuming there is still time enough to do so.
Just this week I had a nice inside look at the kind of mess that gets left behind when the original duct-tape-and-spit guy leaves the company and lets his former co-workers clean up his mess. It isn't pretty, and a chapter 11 isn't an unlikely scenario, so forgive me if I take a harsher than usual look at the attitude that causes this sort of thing.
Note that the 'free market' doesn't have a horizon much longer than the next quarterly shareholder report, and that your typical software product lives a multiple of that interval. So software made with short term goals in mind will create long term headaches.
Your two week marketing campaign gets a pass. But your decade long backend project does not, nor does your real time medical device controller, ECU, database system or operating system.
> And then, five years after you've left the company and some system inevitably collapses with nobody having a clue as to what went wrong you'd finally realize the wisdom of all that.
> But it's no longer your problem.
If that were a problem in reality, the markets would be punishing companies where that happens. It's not a real problem when it happens. Management pretends to be upset, but in reality it's not a huge deal. Entropy is normal in apps and everything else. To continue the analogy from my original comment, do companies really go into crisis mode when one of the many cars in their fleet inevitably "collapses"? No. They build or buy a new one and life goes on.
> nor does your real time medical device controller, ECU, database system or operating system.
Yep. Those are the rare cases I talked about. It's a tiny fraction of total programmers building databases and stuff like that.
> If that were a problem in reality, the markets would be punishing companies where that happens.
Oh they do, I can show you plenty of examples. But it is never the problem of the people that created the issue in the first place.
Think of these things as time-bombs of technical debt. They'll blow up sooner or later, usually later, and that makes it that much harder to deal with the fall-out.
Also: for all the lessons about economics made here: I would happily argue that doing things right is actually cheaper in the long run, and possibly also cheaper in the short term; by applying the techniques described in proper measure you can save yourself a ton of headache.
But of course that would first require a basic understanding of what the article is trying to put across, which if your time horizon is short and your deadlines are looming likely isn't going to be on your agenda.
> It's a tiny fraction of total programmers building databases and stuff like that.
Software you build tends to live longer than you think and tends to be incorporated into places that you can not foresee when you make it.
The 'tiny fraction of total programmers building databases' should include the huge fraction of programmers building embedded systems, APIs, operating systems, libraries and so on. All of those will have life-spans in the decades if they're done halfway right.
You seem to have a poor understanding of both entropy and markets. Even the perfectly built program will soon become useless. Car companies are quite profitable building cars that "collapse" far short of their actual potential, which might be a car that lasts 50 years but can no longer pass emissions tests... Zuck is about to surpass Buffett as the third richest person in the world. On an app built with PHP! I don't think much more needs to be said to support my original point.
> You seem to have a poor understanding of both entropy and markets.
And you're a bit assuming and rude. Your argument also isn't as bulletproof as you want to make it sound. What is the argument here, anyway? There's no need to improve technique for an average programmer because an outlier system (Facebook) is written in a language commonly associated with poor programming practices, with some handwaving about markets and entropy sprinkled on top?
Sorry if I came off that way. I was in a rush on the way to an event and I thought I was just being honest about the weakness in his argument.
What's the argument here? That stakeholders have requirements that don't have to do with robustness like budget and deadlines and that your software has a shelf life and sometimes it's ok if it eventually breaks, just like cars and even the laptop I'm typing this on will. Is that an unreasonable perspective?
And Facebook is an outlier? Really? Even when we add Wordpress, Wikipedia, Flickr, MailChimp and a long list of the most successful websites in the world to that list?
> And Facebook is an outlier? Really? Even when we add Wordpress, Wikipedia, Flickr, MailChimp and a long list of the most successful websites in the world to that list?
Yes, FB is an outlier -- one out of millions of companies. Only 5-10 companies out of those millions made this current model work. So their existence and "success" proves absolutely nothing.
You have a strange understanding of the word "successful".
Facebook is certainly not "successful" because it neglects good tech. If anything, they rewrote PHP itself so as not to have to rewrite their customer-facing software. How is that for your "tech excellence is not important" argument? They rewrote the damned runtime and even added a compiler.
So please define what "successful" means to you. "A lot of people using FB" is a temporary metric, even if it lasts for decades. It's not sustainable per se. It relies on hype and network effect. These fade away.
@jacquesm's points are better argued than yours. Throwing words like "free market" and "entropy" does not immediately prove a point.
I will give you the historical fact that there are many throwaway projects, but he's also right that the fallout from the tech debt they incurred is almost never faced by the original author. Throw into the mix the fact that many businessmen are oblivious to what exactly the techies do during their work hours, and one can easily be misled into thinking that technical perfection is not important. It seems that you were.
Final point: I am not arguing for 100% technical excellence. That would be foolish. We would still be refining HTTP and the WWW in general even today and internet at large would not exist. But the bean counters have been allowed to negotiate down tech efforts to the bare minimum for far too long, and it shows everywhere you look.
(The smartphone-like devices that the waiters at my favourite local restaurant use for taking and writing down orders are faulty to this day because some idiot bought a cheap consumer-grade router AND made the software non-fault-tolerant. An everyday example.)
> Only 5-10 companies out of those millions made this current model work
Stats? Evidence? I mean hundreds of thousands of companies use PHP and other forms of less than perfect tech.
Websites all over the world seem to get the job done even when JavaScript with all its warts is used. I like JS for the record, but it does have warts.
> even if it lasts for decades.
You're saying the same thing I said. That stuff breaks. That companies come and go in and out of fashion. I also think it's interesting that you're calling FB an example of tech excellence but saying it's going to fade away. Choose one?
> How is that for your "tech excellence is not important" argument?
I never made any such argument. Not even close. I only said quality is not the only requirement and might sometimes not be a requirement at all.
Most of the code I write is high quality. I put a lot of effort into code reviews too. I mentor more junior devs around quality. My original post is actually much more nuanced than you are claiming.
> Final point: I am not arguing for 100% technical excellence. That would be foolish. We would still be refining HTTP and the WWW in general even today and internet at large would not exist.
Exactly. That's in the spirit of my original post. Maybe re-read it to see that we mostly agree instead of making my position into something it really isn't?
> Stats? Evidence? I mean hundreds of thousands of companies use PHP and other forms of less than perfect tech.
Oh, I meant companies at the scale of Facebook. There aren't too many of them, would you not agree?
> I also think it's interesting that you're calling FB an example of tech excellence but saying it's going to fade away. Choose one?
FB does a lot of open-source projects. Their devs are excellent. That doesn't mean that their main value proposition isn't made up of code of the kind you speak about. No need to choose one; both can coexist in a company as huge as FB.
> I never made any such argument. Not even close. I only said quality is not the only requirement and might sometimes not be a requirement at all.
Well alright then. I am not here to pick a fight, but you should be aware that you came off a bit more extremist to me and a bunch of others than you claim. But these things happen; I can't judge your intent from a few comments, that's true.
My point, and that of several others, is that quality plays a bigger part than you seem to claim. I also knew many devs who decided they wouldn't ask for permission to take the [slightly / much] longer road, and that decision paid off many times over in the following months and years.
Sometimes businessmen simply must not be listened to. Sure, I can ship it next week, but only by skipping a few vital details, namely by giving in to some stupid micromanagement attempt to teach me how to do my job ("nobody cares about this arcane thing you call 'error logger' or 'fault tolerance', just get on with it already!"). Such toxic workplaces should be left to rot; that is a separate problem however.
> You seem to have a poor understanding of both entropy and markets.
You're hilarious. On an annual basis I end up being the deciding factor in the allocation of a fairly large sum of VC money and tech quality is a big deciding factor in that.
Fortunately there are plenty of successful companies that do a much better job than what you describe yourself doing.
What I described is taking stakeholder requirements into consideration like budget, deadlines, and the expected useful life of the software. That's not best practice? If it's not, could you describe what I should be doing differently? I thought it was a step up when I finally realized software quality is not the only requirement competing for attention, but since I've spent my entire career paying close attention to what's best practice, I'm also willing to learn from you and improve.
Every piece of software should be high quality even if it's a throw-away website used for 2 weeks? You'd expect the programmer to give a mathematical proof for the website code?
I also described that stuff does eventually break like the laptop I'm writing this on and it's not end of the world. We expect stuff to break.
Can you explain why my wife and almost every accountant in the world amortize intangible assets like in-house developed software and give it a useful life span?
> Can you explain why my wife and almost every accountant in the world amortize intangible assets like in-house developed software and give it a useful life span?
That's learned behaviour post factum. Had we (the software industry at large) done better then they wouldn't have the countless examples to learn from and turn them into a habit. Don't conflate things.
> I also described that stuff does eventually break like the laptop I'm writing this on and it's not end of the world. We expect stuff to break.
You are arguing extremes. The fact that a physical object will eventually suffer wear and tear no matter what has zero bearing on the fact that most software can be much more robust and long-lived but extreme time and money constraints prevent it from being so.
Our points of view can meet, but not until you admit that a learned behaviour is something that can be changed if enough people with money stop turning cutting corners into an Olympic sport.
> Had we (the software industry at large) done better then they wouldn't have the countless examples to learn from and turn them into a habit.
Nonsense. Stuff breaks. Everything. Even stuff made to a very high standard.
> most software can be much more robust and long-lived
It can't. Not because it can't be made much more robust, but because most software is simply obsolete after a few years, or maybe ten years if you're lucky. Software doesn't live in a static world where nothing changes. Laws change, accounting practices are modernized, entire industries come and go, and everything is in a constant state of change. Maybe you haven't been around long enough to see it. I have. I've seen perfectly built in-house custom inventory software replaced a few years later by something like SAP because upper management decides the pros of having an integrated logistics system far outweigh the features of one in-house app. Sorry, but I've been in the business world far too long to fall for the idea that software is ever long-lived. There are some rare cases where it can be. The 40 year old COBOL programs some banks run to process massive amounts of transactions overnight. And guess what? As long-lived as they are, they are being rewritten little by little because it's damn hard to find anyone under 65 who is actually interested in maintaining them. Software does not live in a vacuum.
Oh well, guess I better go into research and find academic investments because all this "make this web app yesterday" crap is getting very old and annoying...
tl;dr I have thousands of horse carriage wheels built to the highest standard that will last another 500 years. Any interest? Quite a few AM radios too...
They didn't know that. Things that last also retain their usefulness. My entire point really. As an aside, let's hope you didn't use up extra resources that are not renewable in making something so durable but eventually useless. Would I favor regulations that prevent the opposite? Stuff that breaks before the end of its useful life just to create another purchase? I would consider it. Both ends of the spectrum create problems.
You give too much credit to the businessmen. They don't care about the environment; they care about more sales, and there are no lengths they would not go to in order to achieve them.
I'd say producing one durable piece of tech vs. 5-10 non-durable pieces of tech is still more sustainable for the environment, would you not agree?
I love strong regulations that try to reduce or eliminate negative externalities (like pollution and toxic waste), especially when the cost of those externalities is collected at the point of purchase. They aren't very popular though. When we try things like a sugar tax to reduce the $$$ billions a year that diabetes costs us, people start screaming about the nanny state and their freedom.
I'm not sure how we would require products be durable though? Who would be Czar of how durable things must be? Seems like that person would have a huge amount of power over which industries are profitable and by how much. I'm open to ideas though.
As an Eastern European I would definitely entertain the idea that totalitarianism is not entirely wrong. IMO the world needs politicians with stronger will who are less obsessed with the next elections and who religiously prosecute criminals and opportunists... however the politicians are themselves part of those criminals, so yeah, welcome to the 21st century.
In any case, regulations have proven time and again to mean absolutely nothing unless enforced very strictly and with a very heavy hand (flat percentages of the offender's gross income, and I mean from 20% and up, not some petty 1-2%). But that won't happen -- lobbies, rings of companies, "anonymous" donations, things like that... the status quo is too deeply entrenched. But we can dream, right?
My programming career has gone from "I don't know the best way to write something", to "I know the best way to write something and I'll do it this way every time", to "often it doesn't fucking matter, just get it done and move along to the next problem".
It depends on the part of the industry one works in. I'm basically a backend developer of web and mobile applications. I don't only code them, I also design or help designing most of the ones I work on.
Most of what we do in this market is reading some form data, JSON, XML, parse it, read/write to a database or some other API, gather results, send them back to the browser as either HTML or JSON.
I checked just now when the last time was that I had to devise some clever algorithm. It was at the end of 2015, in a Rails project. I remembered all the stuff about invariants from my CTO back in 1995 and it really did help. However, in this market that's a once-in-10-years occurrence, unless one keeps grinding through coding interviews on artificial problems (because those companies are affected by the streetlight effect).
So, from my point of view "for most applications it does not matter, but for a few of them it does." It probably matters when writing some of the algorithms in the web browser I'm using right now.
It's clear a lot of people are taking my initial posts and comments wrong. I work really hard to not create technical debt. All I said was that there are competing requirements you should be weighing and not all of those requirements are technical. And that even the highest quality stuff eventually does break. Of course you should always strive to build high quality and even beautiful code. I take pride in my work. But part of that pride comes from being able to juggle multiple competing requirements and make the best decision for the company.
Sometimes creating technical debt is the right decision. Sometimes it's "get over this budget hump using two devs instead of five or we go out of business". And then you do get over it and all of a sudden the company is hugely successful and it's a good problem but you're working really long hours just trying to keep up with helping all that cash flow in... the real world rarely makes conditions so perfect you can write perfect code.
I strongly dislike it when I make the decision to create tech debt, but I will at least leave comments or documentation for the next guy on the parts I think could use more love.
And it's rare I do create it. I actually spend a large part of my time refactoring code and making it better and reducing technical debt. It's one of my favorite things to do. And that points right back to my original post. You know what's interesting about the code I refactor? It's working code. It solves the problem. I wouldn't want to build something bigger on it without refactoring. But I also wouldn't curse out the guy who wrote it. He solved the problem at the time within the budget and time constraints and other competing requirements he was juggling. Good for him.
> Sometimes it's "get over this budget hump using two devs instead of five or we go out of business". And then you do get over it and all of a sudden the company is hugely successful...
Seems you are judging from the SV / startup / USA bubble. Most of the world works in VERY different conditions than that.
I mean yeah, bosses whipping their devs for maximum throughput happens everywhere, but the combination of factors you describe seems to be specific to the USA.
I've worked in Europe, Asia, the USA, for quite a few startups, huge corps, and have started a couple of successful companies here and there myself. I have a long and varied career that I am very grateful for.
A lot of startups are actually obsessed with quality to the point of failing. I've seen that. I've also seen the opposite. There is a lot of variation in the myths founders create that they follow like a religion because they believe it's the one trick they need for success ;-)
You do eventually run out of money. It's so much more important to ship something to customers to get some feedback than it is to get it perfect. It's a very tricky balance to get right though. It has to look and work well enough to not scare potential customers away.
> It's so much more important to ship something to customers to get some feedback than it is to get it perfect.
I don't disagree, that's unequivocally true.
As a programmer myself however I know that "I'll get back to it later and fix it" is usually a lie...
Haven't founded a business yet and I think I'll do that eventually, but it also seems that "do your market research first and foremost" is a universal rule.
EDIT: As a European I should add that most of us never start a business unless we already have several customers willing to pay lined up in an orderly queue. I feel that too many Americans (probably not only them) start a business based on pure enthusiasm and hand-waving that motivated them during a few business events where people vaguely expressed an interest in their idea.
Sometimes you just don't get a choice to do it the right way, or the most elegant way.
You've got a project manager breathing down your neck, pressure to deliver functionality, you've explained the technical debt issues etc - and you're still told to do it the wrong way.
Then years later, the only thing on the code is your name and some very dodgy looking decisions. That project manager is still probably kicking around blaming people and earning a fortune hehe.
> You've got a project manager breathing down your neck, pressure to deliver functionality, you've explained the technical debt issues etc - and you're still told to do it the wrong way.
Do the right thing and quit that toxic workplace. You will help natural selection: extreme corner-cutters should go out of business. I knew a guy (a programmer) who destroyed a company simply by leaving.
You can only tell a shoe craftsman to create shoes out of cow dung and grass for so long.
"If that were a problem in reality, the markets would be punishing companies where that happens."
Quite the opposite: markets have been rewarding it for some time. The richest companies mostly had buggy software. What got them revenue was everything but flawless quality. Then, once their customers were locked in via other tactics, the customers kept paying them so long as the software continued to work, with switching costing too much. They also often patented anything that could block competitors.
Even quality-focused customers often want specific features even if it leads to occasional downtime. Also, releases improving on features fast. I think Design-by-Contract with automated testing can help plenty there with the pace necessary for competitiveness in a lot of product areas. The markets don't care about perfection, though. The company's priorities better reflect that.
Why should it be a priority? Who should pay for it if customers are ok with the status quo? Where is the competition offering to fill the market gap with products that are security minded? I'm not in love with free markets as the be-all and end-all that solves every problem worth solving, but I think these questions are worth answering. It's either customers willing to pay for something or taxes. Security will probably end up being much like national defense: no one willing to voluntarily pay for it, but it being in the best interest of all to be "forced" to pay for it.
Because otherwise one day you might find yourself facing bankruptcy.
I'm a strong advocate for liability for software producers because it seems we as an industry are categorically incapable of doing the right thing. Until it directly affects the bottom line this likely won't change.
Customers are not 'ok with the status quo', they're clueless, and the only thing that changes is corporate profits.
In the end the difference between doing it right and doing it wrong is more related to long term vs short term thinking than that it would affect the bottom line in a more dramatic fashion (such as would be the case with liability).
> Because otherwise one day you might find yourself facing bankruptcy.
> I'm a strong advocate for liability for software producers because it seems we as an industry are categorically incapable of doing the right thing. Until it directly affects the bottom line
These two statements seem to contradict each other. If it's not directly affecting the bottom line today, how would one go bankrupt?
I do agree with you there should be some force pushing to eliminate this negative externality. We could compare poor security practice with toxic waste. In general the force I'm talking about is government that creates smart regulations. You'd like to do it by allowing consumers to sue after the damage has already been done. I'm not going to get into that debate, but both of us have proposed solutions and I agree either would be an improvement over what we have today.
That's exactly my point. The markets pay for what they care about and ignore/punish what they don't. They rarely pay for security. They rarely punish insecurity. Even in security, it's usually just enough to not look incompetent when a breach or lawsuit happens. Both consumers and businesses care very little about software quality or security if assessing by what they buy, use, and stick with. You can easily prove this by giving them choices between feature- and security-focused products. Even when the latter are free/cheap and highly usable, the market still decides against them massively. The voters also don't push for regulation or liability of this stuff. Many straight-up vote against it.
So, the management at these companies operates in a market that barely cares about security or mostly cares about appearances/perception. The incentive structure rewards working against quality or security. The costs are externalized with little happening to counter that. So, the rational actors ignore quality/security as much as they can. Programmers should act no different in a system if maximizing selfish gain or minimizing work.
Personally, I'm a utilitarian that considers security a public need. I strongly favor regulations and liabilities to increase the baseline of our security. Just cover the basics like memory safety, input validation, secure admin interfaces, error handling, backups, and recovery if nothing else. The stuff we can already do today with free tools that the suppliers just don't care about. That's not what the market is, though. So, I can't blame people in it for giving it what it wants if they risk losing money or perishing focusing on idealistic goals. I do encourage those doing business with utilitarian style, though. It ranges from easy to hard work they don't even have to do. Also especially glad when I'm one of their customers. :)
Mothers used to die from doctors not washing their hands. The lack of a price signal didn't mean it wasn't a problem, it meant none of the doctors understood how to solve the problem (and neither did the patients).
Just to add, when Lister introduced antiseptic methods he was met with strong resistance from those same doctors who were equal parts annoyed with the messenger, and the message. It’s a hard thing to realize that you’d be killing thousands of people in your ignorance after all. It took quite a long time for his methods to be widely accepted and put into practice. Even when understanding emerges, you have to watch out for the entrenched interests defending themselves against change.
The truth is somewhere in the middle. What I’ve noticed over the years is that if you allow yourself to get in the habit of writing quick and dirty code you learn the wrong habits and gradually lose the ability to write correct code for complex problems. So I do favor the correctness approach.
But ... code has to be maintainable, meaning it should be simple to the person that maintains it next. Typically that means no cleverness and no obscure languages or frameworks. Choosing Eiffel only makes sense if you know the next maintainer will be proficient at Eiffel.
There is no need to resort to tools such as Eiffel to take some very good lessons about what the article is trying to say. Time has moved on since then, Eiffel has had its day, but just like Smalltalk and other obscure languages there is some underlying truth that is well worth studying.
> Choosing Eiffel only makes sense if you know the next maintainer will be proficient at Eiffel.
Choosing Eiffel makes the next maintainer proficient in Eiffel by definition, because that will be the job requirement for the maintainer position. Unless the people responsible for hiring cheap out, that is.
What you're advocating here is optimizing solutions strongly towards being maintainable by cheap, interchangeable workforce. It's a valid goal - presumably one the management would like - but sometimes (often) it's not worth the extra cost in complexity, both early and later on.
(Tangentially: programming is a profession. It should be entirely expected of people to be able to learn new things on the job.)
There's another issue here: many companies, when finding themselves in need of a proficient Eiffel programmer will just take one of their good programmers in other languages and ask: Would you like to learn Eiffel? And programmers usually enjoy learning new things, so someone will say yes.
I personally did that a few times in my career: accepting a job with a technology I barely knew at the time, because I thought it would be fun to learn it.
You can imagine how that usually works out: software written by someone who was learning the programming language on the job.
> What I’ve noticed over the years is that if you allow yourself to get in the habit of writing quick and dirty code you learn the wrong habits and gradually lose the ability to write correct code for complex problems
As the original commenter who started this thread, I'd like to make it clear that I agree with you and I don't write quick and dirty code. Or at least very rarely. Even for stuff that has a very short shelf life, I write code that usually has very few bugs and that I'm usually proud of because of exactly what you said: I've done it so often it's a habit now.
I've always strived to write the best quality code possible within the constraints. Sometimes those constraints were even my own lack of knowledge. But after three decades of doing this I'm starting to think I'm actually getting to be pretty good at it. ;-)
So I wasn't suggesting to just write bad code in my original comment. Just to have a broader view of where quality goals sit within the many competing stakeholder requirements. A programmer who doesn't let perfect be the enemy of good is a better programmer.
The idea that it doesn't matter seems like survivorship bias. People see all the companies getting their databases dumped without a hit to their stock price/valuation and declare caution a waste of time, but what of the companies you never hear about that died because someone messed up in the rush to ship?
Five years as an example is probably the outer limit and so it supports your case well. But I've seen badly designed software cost twice as much to put right (compared to putting the effort in at the start) within a few months. It's a false economy over and over but that gets buried in all the drama that follows.
>>For a website that will support a two week marketing campaign, you don't need anything talked about in this article
>[...]
>And then, five years after you've left the company and some system inevitably collapses with nobody having a clue as to what went wrong you'd finally realize the wisdom of all that.
So guy puts up a web site for $500 in consulting fees that is a 2-week project. It makes the company $7 million over the next 72 months because it becomes literally the biggest inbound channel.
Are you saying he shouldn't have built it for $500? What should he have done?
I suppose - and this applies generally, not just to programming - people (myself included) don't exactly like the market's idea of "perfect". Because free-market perfect isn't just nice-sounding "solution that does the job for the lowest cost over time" - it's the borderline cheapest, ugliest, worst solution that's only barely fit for its purpose, and if it was any worse, it would be unsellable. It's aiming for the absolute lower bound - as any possible way to make it cheaper is taken.
The problems with market!perfection are many - mostly revolving around short-term optimization, externalities and a lack of alignment between market values and human values. But we have brains that can be used to get a better outcome than what the market incentivizes by default.
> The problems with market!perfection are many - mostly revolving around short-term optimization, externalities
Almost all bugs, security or otherwise, are just negative externalities. I think the public has been conditioned to accept that software not only comes with bugs, but many of them, and there's nothing that can be done about it. Companies/developers/publishers are not penalized much at all for buggy software, so much so that it's a common business tactic to deliver crappy software that doesn't even accomplish what it states it does, much less bug free and without security problems, with the understanding that it can be fixed up after delivery with little negative consequences.
Totally agree. After decades of programming my "aesthetics" around what makes a high quality program are far higher than what the market will (usually) pay for.
This applies to many professions we consider a craft. I'm sure some guys slamming up 2x4s for carbon copy houses in the suburbs would rather be building timber frame homes with inlaid custom woodwork.
The problem starts when your construction guys start to cheap out to the point where they violate some expectations you have about your house without even realizing it. Like that walls should actually support the load with some margin, so that the whole thing doesn't collapse after the first hole you drill. Or that electrical cabling shouldn't be aluminum wrapped in paper. It may sound like a strawman, but we have tons of regulation (paid for in blood) in this space precisely because markets incentivize people to cheat if they can get away with it.
I see markets like combustion. A powerful force if you can contain and channel it, but an absolute disaster if you let it roam free.
I don't think that's correct. You're describing the marginal solution that the market produces, not the average solution. And because demands are always changing, the market never converges on that optimum you are presenting as a bogeyman. It's not realistic any more than all profits being competed away to zero.
Yes, I've described the solution in the limit, in the same sense a physicist could write: lim t -> ∞ ... A limit doesn't itself happen in real life, but shows you where a system is going.
As for a proof that this is happening, there are plenty of examples if you look at the highly competitive spaces and consider goods you usually buy, and how they evolved over past decades. Food and tools are two obvious cases that come to mind.
> One final stakeholder requirement that's always a priority: you have to be able to find qualified developers, and what developers learn is based on popularity and fashion.
But popularity and fashion are not disembodied forces controlled by the whims of the gods. They are the sum total of decisions made by people. We could make different choices, but the belief that this is futile is a self-fulfilling prophecy.
How could we make different choices? These choices are rarely made by developers unless it's their own startup. For most IT jobs these kinds of decisions are made by upper management who have to answer to a board of directors, and ultimately to shareholders. No one ever got fired for using Java. (I personally quit C++ and Java many years ago because I thought both were a bit of a mess. Maybe things have improved with Java - I haven't followed it much).
Upper IT management can't just say "we're switching to Haskell because fewer bugs". They have to sell the idea to the folks who will pay for those massive changes which include retraining existing employees. And management does take into consideration how big the hiring pool is. That affects costs. A surge in demand for Haskell programmers would of course create a higher cost for hiring and keeping them.
So they continue to hire Java programmers and students graduating from uni make sure they know Java.
Where is the opportunity for us to make different choices?
Should students refuse jobs in Java? Or without picking on a language, jobs at companies that have the highest software quality standards? That's a lot of idealism and responsibility to ask of someone who is just hoping they can get a foot in the door and start their career.
Or is there someone else you have in mind that could be making different choices? It does seem like my own personal choice to give up on Java has had zero effect on its popularity.
There is far more at play than the "the belief that this is futile is a self-fulfilling prophecy" although I agree that too is a factor.
I do have hope though. One thing I've seen happen is more and more programmers who were introduced to functional programming at uni and who manage to sneak ideas from that in wherever they can. I think we are slowly moving away from the worst parts of the object oriented paradigm and adopting the easier and better parts of functional programming.
I'm looking forward to seeing what we end up with. So far, I think the evolution has headed in the right direction and we are getting better and better at this thing we call coding.
By exercising our free will. By having the courage to stand up and say, "Yeah, I get that we have a huge investment in Perl code, but Perl really sucks so how about we do this new project in Python instead? Or Scheme? Or Common Lisp?" Or, "Yeah, I get that we can get it done faster by doing it in Java, but that will incur a huge amount of technical debt, and also make it so that the really cool kids, the ones who get why Java sucks, won't want to work for us. So how about we make an investment in our future and try Clojure or Scala or Rust instead?" If enough people do that, eventually one of these overtures will get a green light. Whatever organization does that first will eventually accrue a competitive advantage, and that will in the fullness of time change the dynamic.
> By exercising our free will. By having the courage to stand up and say
Did you miss the part of my comment where I said I gave up Java and C++ for reasons of code quality? That was well over 15 years ago. I am doing my part actually.
I've had the "courage" to talk to management countless times in my career about what I think the best choices would be to improve quality. I don't even think it takes courage. It's just a conversation and attempt to sell an idea. Happens all the time in business.
Do you think my description of the challenges around getting change to happen were more accurate than the parent's "the belief that this is futile is a self-fulfilling prophecy."?
I hardly think my comment claims it's futile when I concluded with being hopeful that change has and is happening.
> Did you miss the part of my comment where I said I gave up Java and C++ for reasons of code quality?
No, but the bulk of your comment sounded pretty defeatist to me:
"These choices are rarely made by developers"
"No one ever got fired for using Java"
"Where is the opportunity for us to make different choices?"
"It does seem like my own personal choice to give up on Java has had zero effect on its popularity."
It's true: a single person's decision to stand up against the system is unlikely to have an effect. But if everyone makes that choice, it will have an effect. And that is more likely to happen if more people stand up and say that you should make that choice rather than whine about how they tried and failed.
Being realistic about what you are up against is not defeatist. Is there anything I mentioned that's not a realistic assessment? Is it me that's negative, or is it the actual situation? And I also mentioned where I think the promising opportunities are coming from.
Realism and defeatism are not mutually exclusive. That's the thing about self-fulfilling prophecies: if you believe in them, then they are actually true.
You haven't yet answered any of my questions. Since this doesn't seem like a conversation but more like you trying to take my comments in the most ungenerous way possible, I'm going to stop here. But if you want to answer some of my questions and discuss the massive challenges around changing the global dev culture I'll be happy to continue.
"Is there anything I mentioned that's not a realistic assessment? Is it me that's negative, or is it the actual situation?"
Those are the wrong questions. The answers to those questions don't get you any closer to a solution to the problem. They only get you to the conclusion that the situation is hopeless and that you should give up.
OK, you want an answer? It is both you and the situation that is negative. But the reason that the situation is negative is because of people like you who have decided that the situation is negative and that the only reasonable thing to do is to give up. And you know what? You're right. That is the only reasonable thing to do. Which means that the only way to solve the problem is to be a little unreasonable and carry on despite the fact that it makes no sense.
I haven't given up. You've still refused to acknowledge that my very first post talked about where I think the promise is coming from. Universities and young programmers who have learned and embraced functional programming.
If you think you have the right answers to change a globe full of programmers and to make businesses focus more on software quality, then by all means you should go on to prove to the world you are right.
I'll continue to assess the situation realistically and do my thing. I've never once failed to point out to management when I think things are being done poorly. I'm often the new guy on the team who starts measuring and reporting tech debt to management. You're super quick to judge a stranger based on a few comments in a forum.
When I put the concerns of the company first (instead of being idealist and obsessive about quality as the only goal that ever matters) management takes what I say seriously. So when I do speak up about quality they listen. That's my path. Fine that yours is different.
Maybe if you asked more questions and attacked less you'd find that those who you think are your enemies are actually allies and just have a different idea about the best way to improve things.
> You've still refused to acknowledge that my very first post talked about where I think the promise is coming from.
So I went back and re-read your first post:
> How could we make different choices? These choices are rarely made by developers unless it's their own startup.
That sounds very negative to me.
> For most IT jobs these kinds of decisions are made by upper management who have to answer to a board of directors, and ultimately to shareholders.
Ditto.
> No one ever got fired for using Java.
Ditto.
> Upper IT management can't just say "we're switching to Haskell because fewer bugs". They have to sell the idea to the folks who will pay for those massive changes which include retraining existing employees. And management does take into consideration how big the hiring pool is. That affects costs. A surge in demand for Haskell programmers would of course create a higher cost for hiring and keeping them.
Ditto ditto ditto.
> So they continue to hire Java programmers and students graduating from uni make sure they know Java.
Ditto.
> Where is the opportunity for us to make different choices? Should students refuse jobs in Java? Or without picking on a language, jobs at companies that have the highest software quality standards? That's a lot of idealism and responsibility to ask of someone who is just hoping they can get a foot in the door and start their career.
Ditto.
> Or is there someone else you have in mind that could be making different choices? It does seem like my own personal choice to give up on Java has had zero effect on its popularity.
Ditto.
> There is far more at play than the "the belief that this is futile is a self-fulfilling prophecy" although I agree that too is a factor.
Mostly ditto.
And then, after wading through that sea of negativity, we finally get to this:
> I do have hope though.
Very well, I acknowledge that in your first post you talk about where you think the promise is coming from. But do you see how someone might come away with the impression that you were not entirely optimistic about it?
Yeah, you're being very ungenerous. Just for example, it's not negative to say that startups have the power to make their own language choices.
I've never thought an accurate description of the current environment and the challenges around changing it are a pessimistic view of things. Nor optimistic. Just realistic. I prefer realism. Is that bad?
If you want to see it as negativity, I think that says something more about you than me.
One of the reason I described all of those things AND asked questions is because I was hoping someone would actually come back with some stories about how they have overcome those challenges.
Saying that Java is still the most popular programming language only because people haven't tried enough seems to me to be both inaccurate and dishonest, and does nothing to help change that. Clearly what we've done in the past has changed nothing. Time to look at what we missed and to try something other than "just do it".
If we really do want to use better programming languages and techniques (I'm pretty into TDD myself) then it's very important to understand what management thinks about those ideas and why, and how we might influence the real decision makers. I've tried the wrong way enough times in my career to understand the right way. Selling an idea to upper management usually has to come with low risk and a guaranteed return on investment.
That's neither optimistic nor pessimistic, neither positive nor negative. It's just the simple truth. Don't kill the messenger.
> And a perfect solution according to markets is the solution that does the job for the lowest cost over time.
Yep. If we actually valued correctness, the market would place some cost on incorrectness. Security is a common example of this. If we wanted, we could punish software developers or publishers for shipping security bugs (for example, making them liable), and then we would see an immediate shift in how software was written.
That's not necessarily an endorsement of that position (it lacks enough nuance to even be remotely considered a good idea), it's just a fairly illustrative example.
The market can't solve the quality problem, that's a fantasy.
Sw quality is so poor across the board that there is almost no one producing high quality, reliable software. My bloody phone can't properly select text in this box for example.
Sw companies have optimised their production for decades to develop software that's just good enough to ensure profit and they've trained customers to expect and accept poor quality.
Is anyone surprised when Windows reboots for updates in the middle of an important presentation? No, that's how Windows works.
Companies will even reduce quality to improve profits and still get away with it if the customers don't notice, don't understand or are too apathetic to take action.
Often there is no action to take that will sufficiently punish the offending company.
Security is a great example: it's been bad for so long that core pieces of technology are fundamentally insecure and can't be significantly improved without total rewrites. Punishing developers and companies would result in a collapse of commercial software development, so humanity collectively accepts that software is by nature insecure.
> but first make sure a top priority is program correctness
That's the thing: this doesn't happen in the real world, and it is questionable whether the concept of correctness is useful at all, since it is based on unvalidated and incomplete assumptions about the world that are never fully correct themselves. A sort of correctness of incorrectness.
Correctness can just mean "does what it's intended to do". This is fairly unambiguous in cases such as when you intend to implement A* Search. If you hit an NPE and your program explodes, or if you aren't calling your heuristic (central to A*), then you have skimped on correctness. Furthermore, basically any time you hit something like a runtime error you likely did not intend to hit it. Therefore, regardless of problem domain, taking approaches that minimize these types of errors is an uncontroversial way that correctness can be prioritized.
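To make that concrete, here's a rough sketch of what "minimizing these types of errors" can look like in practice. The names (a_star_pick, Node) are hypothetical and the checks are plain asserts, but they turn the skipped-heuristic and empty-input cases into immediate failures instead of silently wrong answers:

    from collections import namedtuple

    # Hypothetical node type, for illustration only.
    Node = namedtuple("Node", ["name", "cost"])

    def a_star_pick(frontier, heuristic, goal):
        """Pick the next node to expand, with the correctness checks made explicit."""
        # Precondition: silently skipping the heuristic degrades A* to plain Dijkstra,
        # which is exactly the "skimped on correctness" case described above.
        assert heuristic is not None, "A* requires a heuristic"
        assert frontier, "cannot pick from an empty frontier"

        node = min(frontier, key=lambda n: n.cost + heuristic(n, goal))

        # Postcondition: the result really came from the frontier, so callers
        # don't get handed a surprise None further down the line.
        assert node in frontier
        return node

    # Toy usage: two nodes and a trivial (zero) heuristic.
    frontier = [Node("a", 3), Node("b", 1)]
    print(a_star_pick(frontier, lambda n, g: 0, goal="b").name)  # -> "b"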
Case in point: from reading discussions about the C standard, my impression is that a compliant compiler is "correct" if it reads your program that dereferences a null pointer and blows up the world.
I agree with your analysis, but please notice that this new trend has completely redefined the idea of a programming job, to the extent that, had I known about it in advance (say, 30 years ago), I wouldn't have chosen this profession in the first place. So the honest advice to "well intentioned types" (as you put it) is to switch to something else, otherwise you will feel miserable for the rest of your life. (I just bring your post to its logical conclusion, which I happen to agree with entirely.)
It makes me a little sad to read this because I don't think there are zero opportunities for jobs where quality is the absolute priority. People write code for medical devices for example. In which case you have a legal obligation and an ethical obligation to write the highest quality code possible.
I wasn't making the case that we should always abandon quality. Just that we understand its relative importance to other business requirements. That we don't build a $2,000 lock for a $10,000 safe to protect $1.
I'd like to think the majority of code I write is high quality. I am proud of most of the code I write. And I spend a lot of time helping junior devs refactor their code to be more robust, deal with edge cases, and be more maintainable, etc.
I've also been in the industry for a few decades now, and I'm more excited than ever about the opportunities for quality. The code we were writing in COBOL back in the day is nowhere near the level of quality of code that can be written today. Hell, we weren't even measuring bugs and quality back then.
I encourage you to have a look around for opportunities where your employer's values and goals are better matched to your own values and goals. If you are super interested in software correctness, you shouldn't be building throw-away websites for short-lived marketing campaigns. But that doesn't mean those jobs aren't important too. Just that you shouldn't be the one doing them.
The world needs idealists and well intentioned types like these, because the improvements in software quality won't come from the burnt out, disillusioned, just ship it already types like you.
What software wisdom would you share with us instead? That projects have deadlines and budgets?
That the perfect solution does the job at the lowest cost over time?
That the right way to program is to take the customer's requirements into consideration?!
Honestly it looks like you gave up a long time ago and you're just trying to convince the rest of us that mediocrity is the way to go. No thanks.
And it's not like Meyer is advocating for some hardcore formal verification... he's merely pointing out that design by contract can improve software quality. The same DbC which has first-class support in some languages and could be implemented in many others. Even the C++ Boost library recently added support for DbC; it's kind of ugly and probably bloats the object files, but it's there.
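For anyone who hasn't seen it outside Eiffel, here's a minimal sketch of how the basic DbC discipline could be retrofitted in a language without first-class support (Python here; the contract decorator and average_abs function are made-up names for illustration, not from any library):

    import functools

    def contract(pre=None, post=None):
        """Minimal DbC-style decorator: a sketch, not a full contracts library."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                if pre is not None:
                    # Precondition: checked on the way in.
                    assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
                result = fn(*args, **kwargs)
                if post is not None:
                    # Postcondition: checked on the way out.
                    assert post(result), f"postcondition failed for {fn.__name__}"
                return result
            return inner
        return wrap

    @contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
    def average_abs(xs):
        return sum(abs(x) for x in xs) / len(xs)

    print(average_abs([-2, 4]))  # 3.0
    # average_abs([])            # fails the precondition instead of raising ZeroDivisionError

Obviously this is nowhere near what Eiffel gives you (no contract inheritance, no access to old values), but it shows how cheap the basic habit is to adopt.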
> burnt out, disillusioned, just ship it already types like you
I love programming. And I've been at it for three decades. Zero burn out. I don't believe in just ship it, and I take a lot of pride in my work. And yes, part of that pride comes from having a better understanding of customer requirements than someone idealistically insisting we cannot ship working software because the aesthetics of the code aren't pleasing enough yet. That's why I get paid a lot more than idealistic junior devs. Whose idealism I do appreciate, and who I enjoy mentoring. Most of that mentoring being around how to improve the quality of their code... I never said to ignore quality.
Does it sound like I'm saying the world doesn't need idealists when I said this in my original comment?
"So this article is good (although I think you're really looking at functional programming by this time in history) but first make sure a top priority is program correctness before getting into the mode suggested by the article"
Maybe it's your own cynicism that gave you such an ungenerous interpretation of what I thought would be helpful advice to other devs. Whatever it is, that's on you, not me.
Maybe you are a champion of quality in real life and a mentor of the less experienced, but that's not how you come off in your comments and I'm not the only one noticing that.
Yeah we know that not all projects can be built to the most stringent quality standards. The reality is that a lot of them have piss-poor quality though: they crash, lose data, get easily hacked, etc.
And when that's the reality the software engineering profession doesn't need somebody preaching about requirements and keeping a balance between quality and other concerns, it needs quality fanatics.
> that's not how you come off in your comments and I'm not the only one noticing that.
I agree there are some people willingly ignoring the positive things I said in my original comment, and the other positive things I've said in other comments. My guess is that they would rather my view not be nuanced, because otherwise there's not much left to attack. I'm happy to try to correct any misinterpretation, but I don't think it's all on me, because not all of the comments disagree with me. Quite a few comments show that people are getting my nuanced view. I would encourage you to re-read my very first comment with an open mind and decide if it's a balanced view based on decades of experience, or if I was actually really saying quality isn't important. If I was, it's very strange that I would say the article in question is good. Which I did say.
I disagree that the software profession needs quality fanatics. The software profession needs quality fanatics when it makes business sense. Nobody ever stayed employed by building a $2,000 lock on a $1,000 safe to protect a $1 bill. What you built might be of the highest quality, but when you build with blinders on and ignore every other business requirement you aren't a fanatic for quality. You're irresponsible. I take pride in my ability to do what's best for the company. It's often difficult and requires making tough decisions and tradeoffs and risk management and making sure everyone involved is aware of all of those things. That's what a professional looks like. Not a perfectionist obsessed with quality to the point they become difficult to work with. Don't let perfect be the enemy of good.
> The reality is that a lot of them have piss-poor quality though: they crash, lose data, get easily hacked, etc.
Not the projects I'm on. I refuse to ship code that is that bad. I will get fired first.
> Honestly it looks like you gave up a long time ago and you're just trying to convince the rest of us that mediocrity is the way to go. No thanks.
That's exactly how I summarized most of his comments, yet he still claims it's not true. I don't know; I am interested to see if he will reply to some of my comments so we can better judge what he actually had in mind. Him claiming that everybody misunderstood him is not a helpful discussion starter. :)
I've been thinking on similar lines for a while. It seems like the number one thing it comes down to is that more deep thought is required for a number of these 'obviously better' techniques (I think they generally come down to being more declarative than imperative, saying how the results should look instead of describing how to get them).
So lots of people are inclined to say things like, "then don't be lazy, do the work and think deeply"—but for one thing, 'deep thought' should be considered as a conserved resource and you have to choose what to spend it on (it's not automatically better to always spend it); and for another, I think the actual depth required for perhaps the majority of real-life problem domains is impractical, or perhaps sometimes the domain is even fundamentally not amenable to an elegant mathematical solution. So you have to contort the 'better' language to do inelegant things it's not really suited for anyway.
I think the constraints on the 'triples_from' structure from the article are a bit misleading and make a good example of what I'm talking about. It looks super simple, but if you note how this constraint
across tf.item as tp all tp.item.source = tf.cursor_index end
is actually working, it's not so much that the language gives a particularly elegant/powerful means of specifying these constraints as it is that 'triples_from' was structured in such a way that it would be easy to specify the constraint. My feeling is that trying to do this for real-life things would rarely ever be so neat.
If the domain you're modeling has mathematical elegance to begin with—absolutely, do the deep thought and uncover the invariants. But if it doesn't, then you'd be 'programming wrong' by trying to use such techniques.
The alternative I've been thinking about is just developing far more powerful/pervasive visualization tools so that you can write relatively mindless imperative code and just see where things are going wrong more easily. (consider the author's example of multiple data structures which must be kept in sync, but it's hard to tell when they get out of sync). Not trying to self-promote too much but you can follow my profile to the project if you're curious.
Regarding the constraints, the article also makes a very important point: it's not all or nothing, you can have _some_ invariants and pre/post conditions easily.
E.g. the size of arrays, the "nonblankness" of strings, or balanced invoices.
The simple act of thinking about the constraints will help you design the code.
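A toy sketch of what those small invariants can look like in practice (a hypothetical Invoice, not an example from the article):

```python
# Two cheap invariants -- a nonblank customer name and a balanced invoice --
# re-checked after every mutation.
class Invoice:
    def __init__(self, customer):
        self.customer = customer
        self.debits = []
        self.credits = []
        self._check_invariants()

    def _check_invariants(self):
        assert self.customer.strip(), "customer must not be blank"
        assert sum(self.debits) == sum(self.credits), "invoice must balance"

    def post(self, amount):
        # Every booking touches both sides, so the invariant survives each call.
        self.debits.append(amount)
        self.credits.append(amount)
        self._check_invariants()

inv = Invoice("ACME")
inv.post(100)            # fine
# Invoice("   ") would fail immediately: the blank-name invariant is violated.
```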
> Based on empirical evidence, there isn't really a problem at all with the way people program.
I disagree strongly. I would say that software development is a tire fire. Every day there are reports of bugs in software causing all kinds of real-world problems. These days software bugs can kill people[1]. And there are only going to be more CPUs and more software and more bugs going forward.
> Markets have already mostly figured out the rare cases when such robustness is really needed.
I don't see how this is a supportable statement in a world that includes the "Toyota Unintended Acceleration" bug? (Among so many, many others.) When you say "a perfect solution according to markets is the solution that does the job for the lowest cost over time", aren't you effectively saying we should only expect software to be as good as it has to be to protect the liability of some corporation? Not to take a cheap shot, but uh, I think the families of the people who died in the car accidents-- excuse me, collisions ("'Accident' implies there was no one at fault.") --might have a different attitude to software correctness?
In any event, your entire argument is predicated on the idea that correct software is expensive. But this is only true because we, as an industry, have not made effective use of the available tools to write correct software!
It's not expensive to write correct code. It doesn't take longer either. We just don't do it.
Dr. Margaret Hamilton figured out how to write bug-free software as a side effect of her work on the Apollo 11 program years ago and nobody noticed.[2] (She's the person who coined the term "Software Engineer" BTW.) Byte-for-byte, flawless code is no more expensive than buggy code. And maintenance costs are near zero, so it's actually cheaper. You just have to use the right method (which has existed for about 30~40 years, nearly totally ignored.)
Please PLEASE don't make excuses. We can get this right, even if "the market" doesn't notice or care.
To reuse my lumberjack metaphor from my other comment: Chainsaws exist. They are safer and faster than axes. There is in fact no way that cutting down trees with an ax is objectively better than cutting down trees with a chainsaw.
You are maintaining that, since trees are felled well enough with axes today, there's no sense in investing in chainsaws and chainsaw training, not now, nor ever in the future.
You're saying that "the market" only really needs wood cut by axes and doesn't value wood cut by chainsaws because all the furniture made with it is good enough, and we should just live with splinters and chairs that break and drafty houses, etc...
Frankly, it's kind of a lame argument to find on a pro-tech forum. Fancy sophisticated technology is awesome, unless it threatens to obsolesce your favorite ax, eh?
> you have to be able to find qualified developers, and what developers learn is based on popularity and fashion.
This is the source of the problem: we're not "engineers", we're not even car mechanics! We're barely above the level of kids building go-carts in their backyards out of junkyard parts. We should be ashamed of our fashion-driven-ness. The amazing thing is that we got the business people to go along with this![3]
Maybe someone should just start offering chainsaw-cut wood at cheaper prices than the hand-hewn crap and see if "the market" likes that noise? (In case I'm being too arch, that's exactly what I'm about right now. I'm so freaking passionate about this.)
"Program correctness" shouldn't have to be "top priority" because it should be the default. You should get it "for free" along with the software for the same price because it costs nothing extra.
[2] Nearly nobody noticed. I know it sounds crazy but it's true: There's a simple, easy method to develop bug-free software. James Martin wrote it up in a book "System design from provably correct constructs: the beginnings of true software engineering." in 1985. That's probably the best source if anyone wants to read up on it.
[3] A programmer in language or framework A who cannot or will not learn language or framework B is not a good programmer. (I don't mean putting "Won't do PHP" on your resume, I mean that you can't or won't learn it in the first place.) Q: "What if we can't find Python devs? There are so many Java devs. We should use Java." A: "Why would you hire a Java guy who can't do Python? Even to do Java?" I've had that conversation a bunch of times.
There are a lot of things wrong with your arguments. When you say correctness, you are pushing for a specific method for reducing the number of problems caused by defects, while those problems are what we actually care about when we talk about reliability, not defects, not bugs, not correctness. Yours is just one such method, and one of the expensive ones. This is a very important distinction if you want to understand why "proving correctness" cannot get anywhere. For a lot of software, even if we need reliability, there simply exist other, objectively better ways to achieve it.
Business is another thing; nobody convinced them to go along with anything, it's the other way around. Incentives to produce software the way it is produced actually come from businesses, as they are the ones paying for it. Engineers just go along with it.
I get that the software industry is very dogmatic and it's hard to see things for what they are. But we should at least try.
> When you say correctness, you are pushing for a specific method for reducing amount of problems caused by defects, while these problems is what we actually care about when we talk about reliability, not defects, not bugs, not correctness.
I'm using "correctness" to mean software that is bug-free.
The system that the software is a part of may still have problems, but none of those problems should be caused by bugs in the software components. It's also possible to build bug-free software that solves some other problem than the one you have. Both of those issues are orthogonal to the issue of why the industry doesn't adopt methods that generate bug-free code.
> Yours is just one such method, and one of the expensive ones.
I expect to be able to train normal, non-programmer folks to be able to use it (a HOS-like system that permits elaboration of a top-level spec into bug-free working code) to develop bug-free programs. If that works it may be so cheap that it depresses the market for "real" programmers. I should be so lucky.
But regardless, these methods (HOS, Cleanroom, etc.) just aren't that dreadfully expensive. And bug-free code, once written and paid for, can be reused. The cost analysis has been on the side of "provable correctness" for longer than my lifetime.
> This is very important distinction if you want to understand why "proving correctness" cannot get anywhere.
Well, I don't accept your assumption that '"proving correctness" cannot get anywhere'. My whole point is that it has gotten places and we're all ignoring it because we prefer to use e.g. C and Java and {{POPULAR LANGUAGE}}...
> For a lot of software even if we need reliability there simply exist more objectively better ways to achieve it.
If you are talking about "reliability" of software I don't know what that means other than bug-free.
I know you can build reliable systems out of unreliable parts, but to do that there still has to be some reliable system orchestrating them and compensating for failures.
But again, I'm not saying there's a method for reliable systems, only reliable software. The absence of the former doesn't invalidate the existence or desirability of the latter.
> Business is another thing, nobody convinced them to go along with anything, it's the other way around. Incentives to produce software the way it is produced actually come from businesses as they are the ones paying for it. Engineers just go along with it.
The "suits" aren't to blame for this one. They only know what we tell them (to a first approximation.) If you tell your boss, "I can chop better with this ax than that chainsaw." you're lying or ignorant. How is management supposed to even know the possibility is there if the programmers doesn't tell them? Or they bring it up and the response is "Sure, I'd love to use {{TECH}} but {{EXCUSE}}."?
There's a cost for bugs. If you can get bug-free programs for the same upfront cost as buggy ones (and my whole argument is that we can, but don't) then there's no upside and only downside to accepting buggy software, and methods that permit buggy software to be written.
We could use chainsaws; instead we use axes and claim chainsaws are too expensive and axes cut fine.
It's not because the bosses, who only care about board-feet[1] per worker per hour, are too cheap to buy chainsaws.
[1] "The board-foot is a unit of measure for the volume of lumber in the United States and Canada. It is the volume of a one-foot length of a board one foot wide and one inch thick. " ~https://en.wikipedia.org/wiki/Board_foot
I've never heard of many ax-related deaths, and I'd imagine an out-of-control chainsaw is far more dangerous than an out-of-control ax. The vast majority of lumber industry deaths are due to falling trees. If we took the chainsaws away and replaced them with axes and hand saws, far fewer trees would fall. It would greatly reduce production, but I'm not convinced it would result in more deaths. Quite the opposite.
I do get the point of your analogy, but it's really a very poor analogy.
> If you can get bug-free programs for the same upfront cost as buggy ones (and my whole argument is that we can, but don't)
This sounds like a massive market opportunity. One then wonders why absolutely no one has exploited the opportunity. There's something missing here.
(In this metaphor, you can even use your ax to carve a chainsaw out of a tree in your spare time, and then use that! But we don't. "It's too expensive."
As an example, here's a way to get Logical Paradigm programming in your favorite language http://minikanren.org/ It's simple, easy to understand and implement, and you can use it to do things like type inference and type checking with flexible and powerful constraints to define and ensure invariants and stuff like that. This stuff isn't expensive or even that challenging, we just don't do it.)
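For a tiny taste of that style, here's what it looks like using the third-party kanren package (a Python port of miniKanren). I'm writing the API from memory, so treat the exact names as an assumption and check the project docs:

```python
# pip install kanren
from kanren import run, var, eq, membero

x = var()

# "Which x equals 5?"
print(run(1, x, eq(x, 5)))                   # (5,)

# "Which x is in (1, 2, 3) and also in (2, 3, 4)?"
print(run(0, x, membero(x, (1, 2, 3)),
               membero(x, (2, 3, 4))))       # (2, 3) -- order may vary
```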
I did a little digging and even started reading "System design from provably correct constructs: the beginnings of true software engineering". From what I can see, it's mostly garbage used to sell the products of the HOS company. James Martin was on the board of Higher Order Software, Inc., a company that, despite claiming to be able to provide "bug free software", went out of business.
Here's Edsger Dijkstra debunking the books written about HOS.
An evaluation by the US Navy concluded: "the HOS literature tends to advertise their ideas and products more than making a contribution in substance to the field of Computer Science. The author recommends that USEIT not be used in the TRIDENT program or any program development at NSWC. Even for a high level system specification, USEIT is not seen as a good choice. A mathematical functional notation or a PROLOG-like notation appears better suited for that purpose. The examples in the Appendices of this report, especially Appendices C, D, and E, show that a LISP-style mathematical notation is more compact and normally easier to read than the control map notation of USEIT. On a more positive note, the author considers the functional approach to system development and programing very promising. Systems so conceived and programs so constructed are more amenable to analysis and therefore, in principle, more reliable and better manageable."
So are we just talking about functional programming when we say HOS? If so, the pros and cons of functional programming have been discussed at great length.
Here is an interview with Simon Peyton Jones (one of the creators of Haskell) talking about why Haskell is "useless". It's a short interview worth watching if you want to understand the nuances and challenges and costs around creating high quality and bug-free software.
> I would say that software development is a tire fire. Every day there are reports of bugs in software causing all kinds of real-world problems.
All of human history is a tire fire then. Most stuff around the world is poorly engineered and just gets the job done. Wooden bridges with rope and wire holding them together and no analysis whatsoever done on what load it can bear. For most of history those bridges made up the majority of the bridges in the world. And it worked just fine until modern transport put higher demands on bridges. Yet, you still find clunky wooden bridges all over the undeveloped world, and they continue to work.
Should we try to do better? Of course we should. But someone has to pay for it. It doesn't happen magically.
Regarding the Toyota Unintended Acceleration bug, that's gross negligence if the top priority coming from management wasn't quality and if someone can prove that, they should end up in jail. And I would not excuse the developers either. I would never ship code that I know might kill someone. I would rather quit and work as a cashier. Please re-read my original post because I never said quality isn't important. I only said that quality is not always a top priority, and that in some cases it should be a low priority. A website for a 2 week marketing campaign will never kill anyone. It would be a waste of resources to insist on anything other than just shipping it once it works.
If you're honest with yourself and look around, you do it all the time in your own life too. You draw a diagram on a piece of paper or a white board to explain a concept and then you throw it away or erase it. You don't carve it in stone just because it will last longer and someone a hundred years from now might find it useful. You put up a simple rope barrier to keep people from stepping on newly planted grass. You don't erect the Great Wall of China. Temporary "low quality" solutions that will later be disassembled and thrown out (or save the rope for reuse at least) are often the best fit based on the requirements.
> You just have to use the right method (which has existed for about 30~40 years, nearly totally ignored.)
I'm very skeptical that markets would completely ignore an opportunity to beat the competition if the costs were exactly the same but the results were higher quality. But I'm going to look into this. Thanks for the reference. I'm going to read James Martin's book and see if I learn something new that will help me write better code.
Yes. (We have spent the last 10,000 years recovering from the Younger Dryas and we are only just now getting back on our feet. Heck, most of us still think agriculture is a good idea when really it's about the dumbest way imaginable to relate to the soil. But I digress.)
> Most stuff around the world is poorly engineered and just gets the job done. Wooden bridges with rope and wire holding them together and no analysis whatsoever done on what load it can bear. For most of history those bridges made up the majority of the bridges in the world. And it worked just fine until modern transport put higher demands on bridges. Yet, you still find clunky wooden bridges all over the undeveloped world, and they continue to work.
Ah, but none of those bridges are built out of electrified math.
Software is electrified math and it can be perfect.
And it's self-referential: we can write perfect meta-code that emits only perfect code.
> Should we try to do better? Of course we should. But someone has to pay for it. It doesn't happen magically.
My point is not that we never try. My point is that the world contains many attempts and most of them have been ignored by most working programmers.
> Regarding the Toyota Unintended Acceleration bug, that's gross negligence if the top priority coming from management wasn't quality and if someone can prove that, they should end up in jail. And I would not excuse the developers either. I would never ship code that I know might kill someone. I would rather quit and work as a cashier. Please re-read my original post because I never said quality isn't important. I only said that quality is not always a top priority, and that in some cases it should be a low priority. A website for a 2 week marketing campaign will never kill anyone. It would be a waste of resources to insist on anything other than just shipping it once it works.
Let's assume, for the sake of argument, that I'm wrong and correct software always costs more than incorrect software. In this scenario (which may well be the REAL scenario) you have put your finger on the important bit: we're talking about the location of the inflection point.
Allow me to reference Randall Munroe, "Is It Worth the Time?" https://xkcd.com/1205/ It's a handy chart that shows, "How long can you work on making a routine task more efficient before you're spending more time than you save? (Across five years)"
It's not precisely what we're talking about, but it's got the same flavor: how much do you expect to use the buggy software vs. the cost of correctness...
Now my point would be: The industry should have had a house-on-fire urgency around reducing the cost of correctness to shift the inflection point downward, so that all but the most trivial software can be made correct economically.
We should have been doing that since forever (or at least sometime after the Apollo 11 mission.) Instead we generally ignore these sorts of things.
> If you're honest with yourself and look around, you do it all the time in your own life too. You draw a diagram on a piece of paper or a white board to explain a concept and then you throw it away or erase it. You don't carve it in stone just because it will last longer and someone a hundred years from now might find it useful. You put up a simple rope barrier to keep people from stepping on newly planted grass. You don't erect the wall of China. Temporary "low quality" solutions that will later be dissembled and thrown out (or save the rope for reuse at least) are often the best fit based on the requirements.
Have you been to Daiso? It's the Japanese dollar store. Pretty much any human problem that can be solved by ten ounces of plastic can be solved in Daiso for $1.50. I'm not generally into consumer culture, but I love Daiso.
You don't need 'temporary "low quality" solutions' if you have Daiso.
It's not that I don't use hacks, or don't respect them, it's that we're so far behind where we should be in terms of off-the-shelf solutions (to programming) and we don't seem to be quick on the uptake...
> I'm very skeptical that markets would completely ignore an opportunity to beat the competition if the costs were exactly the same but the results were higher quality. But I'm going to look into this. Thanks for the reference. I'm going to read James Martin's book and see if I learn something new that will help me write better code.
God bless you! (I collect powerful ideas and I cannot tell you how many times people have said, "If $FOO is so great, why doesn't everybody use it already?"... I don't know! I don't freakin know! It makes me sad. All of human history is a tire fire, indeed.)
- - - - - - - - - - - - -
This is my reply to your later comment on this same thread.
First, wow, I'm impressed. You are actually doing the homework and I tip my hat to you with great respect. Seriously, that's the nicest thing you could have done and I really appreciate it.
Second, yes the language and presentation around these "HOS" ideas has apparently always been really bad, with the issues you describe. It also doesn't help that the necessary background knowledge and jargon wasn't wide-spread at the time.
Third, yes it was panned by Dijkstra and the Navy, I've read both of those reviews, and their objections are not without merit. But, and I say this as someone who has huge respect for Dijkstra, they were both wrong: they both missed the fundamental advantages or "paradigm", if you will, of how HOS et al. works.
(Also, I have seen that Simon Peyton Jones interview. And no, we're not just talking about Functional Programming, that's kinda orthogonal. E.g. Haskell helps you write code with fewer bugs, HOS prevents them in the first place. Another way to differentiate them is that if you're typing text in a text editor to make software, you're not doing HOS regardless of the language.)
So, poor marketing, bad reviews, obscure principles and the general disinterest of industry led to this powerful technology languishing.
Yet, I insist there's something there. Let me try to convey my POV...
In modern terms I can describe the crucial insights of the HOS system concisely. Here goes:
Instead of typing text into a flat file of bytes and hoping it describes a correct program, the HOS method presents a tree of nodes that is essentially an Abstract Syntax Tree (but concrete and there's no driving syntax because there's no source text.) The developer edits the tree using only operations that maintain the correctness of the tree.
This is like "Par Edit"[1] in emacs, or a little bit like some of what J. Edwards is attempting with Subtext[2], or the old "syntax-directed programming environment called Alice"[3] for Pascal. (Again, it's not that no one has ever tried anything like this, my whole point is that powerful techniques for writing software with fewer bugs in have been around for a long time and we, in general, don't use them.)
The main difference from these is that HOS uses a very simple and restricted (but Turing complete) set of operations to modify the tree: Sequence, Branch, Loop, Parallel. (There are some "macros" built out of these operation for convenience but underneath it's just these four.)
Starting with a high-level node that stands for the completed program you gradually elaborate the tree to describe the structure of the program and the editor/IDE enforces correctness at each step. You literally cannot create an incorrect program.
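To be clear, what follows is not HOS itself, just a toy Python sketch of that editing model: a program is a tree of a few node kinds, and the only way to build one is through constructors that refuse ill-formed shapes (the leaf "primitive" node is my addition for the sketch):

```python
# Toy sketch of "edit the tree, not the text": the constructor rejects
# ill-formed trees instead of letting you type them in.
class Node:
    KINDS = {"sequence", "branch", "loop", "parallel", "primitive"}

    def __init__(self, kind, children=(), detail=None):
        if kind not in self.KINDS:
            raise ValueError(f"unknown node kind: {kind}")
        if kind == "branch" and len(children) != 2:
            raise ValueError("a branch needs exactly a then-part and an else-part")
        if kind == "loop" and len(children) != 1:
            raise ValueError("a loop needs exactly one body")
        if kind in ("sequence", "parallel") and not children:
            raise ValueError(f"a {kind} needs at least one child")
        if kind == "primitive" and children:
            raise ValueError("a primitive is a leaf")
        self.kind, self.children, self.detail = kind, tuple(children), detail

# Elaborating "handle an order" step by step; every step is checked as it is made.
program = Node("sequence", [
    Node("primitive", detail="read the order"),
    Node("branch", [
        Node("primitive", detail="ship it"),
        Node("primitive", detail="reject it"),
    ], detail="is the order valid?"),
])
```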
Apparently normal people, accountants and such, could sit down in front of the IDE and, with a little training and coaching, learn to describe their own work processes in it and essentially write programs to automate (parts of) their own work.
I've been working towards bringing this to market, on and off, for years now. In fact, my first programming job was the result of a talk I gave on a prototype IDE at a hacker convention about fifteen years ago. I have just finished implementing type inference and type checking for my latest vehicle: a dialect of the Joy programming language. It has been slow going (I lead a chaotic life) but I'm on the cusp of having something I think will be really great. If it works, it will revolutionize software development.
Quixotic, I know, but somebody's gotta tilt at those windmills...
Anyway, thank you again for taking the time to look into this. I can't tell you what that means to me personally. I know the "Provably Correct" book is terribly written, but I urge you to try to look beyond that. All I can really honestly tell you is that I'm convinced there's something really important and useful there.
[3] "In a syntax directed editor, you edit a program, not a piece of text. The editor works directly on the program as a tree -- matching the syntax trees by which the language is structured. The units you work with are not lines and chracters but terms, expressions, statements and blocks. " https://www.templetons.com/brad/alice.html
That's about all the time I have left to spend on this. If you truly have discovered a way to do what you're claiming and it has just been a victim of bad luck and poorly written books in the past, then there is a huge market opportunity and I wish you the best in bringing it to market.
Interesting discussion. Thanks. I will keep my eye on this space from time to time.
I find that a lot of folks only think of programming in terms of variables and executable instructions that modify them. The notion of an invariant is not part of their toolbox, and the exercise of modeling a domain is not something they really engage in: if the program doesn't do what they want it to do, either the instructions are wrong, or there need to be more of them to handle the cases that aren't currently handled.
I honestly don't think this is true. The word invariant might not be part of their toolbox, but that doesn't mean they don't understand what a condition that holds true over some problem space implies. These things are built into existing infrastructure; they are bound to be easily rediscovered, noticed, understood.
I think the bigger problem is that when people work in large groups for implementing a business logic, things are assumed invariant in one component of a system, not tested for in another component that relies on that condition, and what used to work ceases to work when that condition changes.
If it isn't literally labeled "invariant" in some documentation, people learn to practice 'defensive programming' after enough frustration. Or they feel compelled to understand all the code in their code base, or migrate over to smaller companies and smaller projects where this is less of a headache (but not really, because we all depend on pieces of systems we can't always inspect in totality, and therefore can't always assume we know everything about the systems we depend on).
The real world has edge cases. If the instructions are wrong, it's because either the invariant isn't actually an invariant in the real world, or the domain is modeled incorrectly. The higher up you get in writing programs about programs, the less real-world data seems to work this way, because you simply don't have the common sense or perspective to see that some things change without your being able to specify an underlying order to them (that is, some things change without there being an order that describes their changing).
It’s a really important distinction that the invariant has to be an invariant in the real world (e.g. in the problem domain). Programming “right” is context-dependent.
Some programmers try to program to some theoretically correct ideal, even if it doesn’t apply to their particular problem. Or, they may try to make invariants for things that they think they will need to hold true (possibly because of some future requirement).
All of these can lead people to shy away from having any form of invariants.
Agreeing on what invariants exist in the problem domain at a given time is really important for avoiding confusion. That kind of discussion happens all the time with teams who collaborate well and across disciplines.
“Should it be possible for that menu and that modal dialog to be open at the same time?” (Invariant)
“A post can’t be published if it hasn’t been reviewed.” (Precondition)
“After submitting a payment, an audit log entry must be created.” (Postcondition)
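For illustration, here is roughly what those three statements become once they are executable checks (hypothetical Python names, nobody's real codebase):

```python
class Ui:
    def __init__(self):
        self.menu_open = False
        self.modal_open = False

    def _invariant(self):
        # Invariant: the menu and the modal dialog are never open at the same time.
        assert not (self.menu_open and self.modal_open)

class Post:
    def __init__(self):
        self.reviewed = False
        self.published = False

    def publish(self):
        # Precondition: a post can't be published if it hasn't been reviewed.
        assert self.reviewed, "cannot publish an unreviewed post"
        self.published = True

def submit_payment(payment, audit_log):
    entries_before = len(audit_log)
    audit_log.append(("payment", payment))   # stand-in for the real processing
    # Postcondition: submitting a payment must have created an audit log entry.
    assert len(audit_log) == entries_before + 1
```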
There aren't many invariants in the real world - beyond literal laws of physics, things that seem invariant are usually enforced by the threat of jail. So in the end, the only real invariants we can use are part of our business model - not the real world - and they only hold insofar our model is specified and implemented correctly.
the exercise of modeling a domain is not something they really engage in
This is what I've seen in interviews. The typical recent grad can understand an existing model. Some of them can't design a simple model from scratch and concretely specify functions to do operations on it.
if the program doesn't do what they want it to do, either the instructions are wrong, or there need to be more of them to handle the cases that aren't currently handled.
I've asked interviewees what they could do about cycles in user-entered code, by giving them a 2-node example. Far too often, they reply with an if-clause to detect exactly a 2-node cycle. Then I ask what to do about 3 nodes, and far too many of them try to give me a 2nd if-clause!
The user-entered data is a bunch of expressions which can reference each other, forming a graph. The user needs to be prohibited from entering an expression which creates circular dependencies.
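The answer being fished for is a general check over the whole dependency graph, not per-size special cases; something along these lines (the name -> referenced-names mapping is a hypothetical stand-in for the user's expressions):

```python
def has_cycle(deps):
    """deps maps each expression name to the names it references.
    Returns True if any chain of references loops back on itself."""
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited, on the current path, finished
    state = {name: WHITE for name in deps}

    def visit(name):
        state[name] = GREY
        for ref in deps.get(name, ()):
            if state.get(ref, WHITE) == GREY:
                return True            # back edge: a cycle
            if state.get(ref, WHITE) == WHITE and ref in deps and visit(ref):
                return True
        state[name] = BLACK
        return False

    return any(state[name] == WHITE and visit(name) for name in deps)

print(has_cycle({"a": ["b"], "b": ["a"]}))            # True (the 2-node case)
print(has_cycle({"a": ["b"], "b": ["c"], "c": []}))   # False
```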
Do you have any visual example? Or textual? I'd very much like to see exactly what you mean and what would you consider a failure at that interview (and what would you consider a success).
> I've asked interviewees what they could do about cycles in user-entered code, by giving them a 2-node example. Far too often, they reply with an if-clause to detect exactly a 2-node cycle. Then I ask what to do about 3 nodes, and far too many of them try to give me a 2nd if-clause!
It's understandable when it's a physicist or electrical engineer trying to code something up to get something working. What really upsets me is when it's professional software developers who never evolve out of that mindset.
I feel like at the end of the day the trick is to think algebraically -- Your data types and structures are some domain and you define operations over the elements of that domain in such a way that certain properties hold.
Your description of 'thinking algebraically' is similar to how I think of it. Seems to match something in human brains that partitions thought into subjects/domains which each have their own definitions of which things are fixed and which vary. I tried building a minimalist framework around the concept, adding one other element: 'converters,' which are used to move objects between domains. I wrote on it in more detail here if anyone's curious: https://github.com/westoncb/Domain-slash-Converter (probably not much of anything special in the source, though I think the concept may have something to it, and I've successfully built one pretty large and interesting thing on it.)
The big advantage of CS is the accessibility of the domain. You cannot take that away and hope nothing else changes. That, of course, means any idiot or 8 year old can just pick it up and run with it. They may become less of an idiot after a while, but this may not be desirable or expedient, and it's great that that's not much of a problem.
I mean, this is a bit like saying, why doesn't every physicist do everything by just coming up with a random differential equation for their problem, then use fixed points to deduce the long-form non-recursive version and isolate out the wanted variables into long form? It's, after all, usually by far the simpler process, especially since everyone can come up with a few differentials for any situation. Coming up with the correct long form directly, however, is absurdly hard. So if you simply learn to work with differentials, that's the way to approach essentially any problem. With a tiny caveat ... "learn to work with them" is 6 months of study and intensive practice, and that's assuming you already know a lot of math that isn't exactly high school level either, including a significant list of "tricks" that you just need to know by heart. But what you can do with it is amazing.
But the level of knowledge and understanding required is just too high for anything resembling general application.
Don't look at other videos until you've internalized the first sentence. Think long and hard about what that sentence means : differential equations allow you to find any function that you can make enough "what happens when it moves" observations about. Enough usually means one.
For instance you can find Newton's equations from the statement that "falling things keep going linearly faster" (because they're the simplest function that satisfies that differential equation).
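Spelling that one out (with $g$ for the constant acceleration and starting from rest; this is just the simplest function satisfying the observation):

$$\frac{dv}{dt} = g \;\Rightarrow\; v(t) = g\,t \;\Rightarrow\; x(t) = x_0 + \tfrac{1}{2}\,g\,t^2$$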
On the more complex side, Google's pagerank is also the solution to a differential equation. Very technically it sort-of kind-of qualifies as a first-order one, just not in the real number space.
There's a separate branch of "differential equations" (let's call it "the physics branch") that studies how to work with discrete time intervals rather than continuous ones, which is also interesting and useful.
> The notion of an invariant is not part of their toolbox, and the exercise of modeling a domain is not something they really engage in
Because this is not what writing software is about. Ultimately it is about trying ideas out, where the notion of an invariant doesn't have a place. Why do you believe it should?
Can you expand on what you are saying? It doesn't really make much sense to me.
Are you saying that if the code doesn't solve the problem, the wrong thing to do is add more to it? What about when you have multiple options? You can't necessarily avoid adding more instructions.
Can you give a real example about what you are talking about?
Part of me is thinking that you are saying new grads are not readily able to come up with a clever solution and instead brute-force it. But you are not saying it in so many words, or in any way intelligible to me.
I wish we had tools for proving object invariants widely available. But such tools tend to come from people pushing their latest type theory or functional programming or something. They're too complex and abstract for most programmers. I tried to fix this once, but it was too soon in the early 1980s.[1] (That was before objects. We would have had object invariants if objects had existed back then. We had function invariants and module invariants.)
This stuff is useful when you have lots of modifiable data structures which need to be consistent. Window managers. Operating systems. Game engines. Database internals. Most of the things you need to prove are trivial. But you need tools to check that invariant A is maintained by code far distant from where invariant A is defined. Maintenance programmers will miss that.
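A rough sketch of the situation being described (hypothetical names): two structures that must stay in sync, with the invariant re-checked at every mutating entry point, so that distant code can't silently break it.

```python
class Scene:
    """Keeps two views of the same objects: paint order and a lookup index."""
    def __init__(self):
        self.draw_order = []          # list of object ids, in paint order
        self.by_id = {}               # id -> object data

    def _check(self):
        # Invariant: the two structures describe exactly the same set of objects.
        assert set(self.draw_order) == set(self.by_id), "draw_order and by_id out of sync"

    def add(self, obj_id, data):
        self.draw_order.append(obj_id)
        self.by_id[obj_id] = data
        self._check()

    def remove(self, obj_id):
        self.draw_order.remove(obj_id)
        del self.by_id[obj_id]        # drop this line in some far-away patch and _check() catches it
        self._check()
```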
This is usually one of the first things I try to add to projects that have run aground because of reliability issues, or because there is code that nobody dares to touch for fear of upsetting some fragile balance.
It can be quite a bit of work, but the pay-off cannot be overestimated.
The problem is that the programmers there can't make a ratchet: a monotonically improving codebase where each commit is like the click of a ratchet. If you know your invariants hold, then it is fairly easy to establish whether or not a change is an improvement.
Test cases can supply this function, as can counters set up for that purpose (statsd or something to that effect).
Once you have enough of those you can (slowly) start to make changes to observe the effects, and once you understand the codebase add more invariants that you have now determined should exist.
The last project where I took that approach (about 4 years ago now) went from 'intractable' to 'stable' in a relatively short time but it required a lot of thinking and some really hard work to get it there.
Much better if your language/platform supports that sort of thing out of the box, even better if it can be done across subsystems.
The poorly named but very interesting "hypothesis"[0] is a quickcheck implementation (extension?) for Python that might provide the "easy" access to these tools that you're looking for.
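For example, a property test over a toy function (the given/strategies API is real Hypothesis; the dedupe function and its properties are just made up for illustration):

```python
# pip install hypothesis; run with pytest
from hypothesis import given
import hypothesis.strategies as st

def dedupe(xs):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    return [x for x in xs if not (x in seen or seen.add(x))]

@given(st.lists(st.integers()))
def test_dedupe_invariants(xs):
    result = dedupe(xs)
    assert len(result) == len(set(result))   # no duplicates remain
    assert set(result) == set(xs)            # nothing was lost
```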
So, I also really like it from a "cool statistics reference" point of view. However, I don't think it is informative, unique or google-able, and discoverability is pretty important imho.
Is there a viable solution that isn't type theory or functional programming or something? If it were feasible to do so, wouldn't we see progress in "everyday" common language?
There was for Modula-3. DEC SRC in Palo Alto did a nice verification system for Modula-3. But Modula-3 went down with Digital Equipment Corporation.
C/C++ has too much undefined behavior. Ada died off. The scripting languages don't need it as much. Rust had potential for proof work, but went off in a different direction. There are modern proof systems, but they're rarely integrated with the programming language.
On the verification side, it's never had more users than it does now, as far as I'm aware. SPARK and Frama-C are very active compared to the almost non-existent use of formal methods in industry decades ago. Rust could similarly have a subset integrated with the Why3 platform to make the formal methods easier to use. Further, I've seen extraction done from Coq to C and Rust. There's also one person modeling C in WhyML to write the algorithms in the latter but extract to the former. Or something like that. Could be done for Rust, too.
The C++ type system is essentially a functional programming language, and it seems to have developed independently, indicating to me that there is something fundamental about functional programming. It turns out this has not just occurred to me and there is a deep relation between mathematical proof and functional expressions. Thus, if you are trying to prove anything logically, you will likely eventually end up with some kind of functional-looking notation or something that is very closely related.
For some reason, the moment you bring up mathematical thinking, most programmers shy away and claim that this isn't something they should work with 'everyday'.
I blame the lack of visibility of the accessible material... which I am still looking for. Any suggestions?
I never studied CS and I deeply regret that now. I want to up my game and I don't want to always just patch stuff around to infinity, but I honestly have no idea where to start.
> Is there a viable solution that isn't type theory or functional programming or something?
What's so wrong with functional languages? I am sure you have heard this cliche a lot but here it comes once more -- I became a much better programmer once I learned an FP language.
Still, back in my Java days I achieved this with a ton of defensive coding, always doing deep cloning before passing complex data structures to anywhere, and trying to enforce design by contract (interfaces and their implementation classes)... which are practically the patterns that any average FP language uses: pattern-matching, immutability and always copying data and thus never passing stuff by reference (always by value), and behaviours / protocols / macros (of the manipulating the AST kind, not in the way C does it).
So to be fair, you are bound to land at an FP language eventually. Maybe you just don't know it yet. Which is fine, learning is about the journey and not the destination anyway.
Cleanroom just uses functional decomposition, the simplest of the control flow primitives, and math you learned in middle or high school. Got results on lots of projects.
(Second link has a table and description showing the results they got. Stavely's book distills it into lightweight, less-processy form.)
Design by Contract itself is like assertions on steroids. If your language lacks support, you can build it into your functions at beginning, middle and end. If OOP language, you might use constructors and destructors. If trying to understand it, I have a link that you can give even to project managers.
One benefit of making it a formal spec is you can generate tests directly from your specs. This is an old, old technique being rediscovered recently. Various names include specification-based, model-based, contract-based, and property-based test generation. The last thing you can do is manually or automatically convert your specs into runtime checks for the above and/or fuzz testing. The failure takes you exactly to the property that failed if it was one of them.
Finally, for just the most critical things, you can try formal proof on them. SPARK Ada is a great example used in industry with the book being pretty easy to follow.
The nice thing about that toolset, especially the proprietary version, is that you can use automated provers to avoid having to do mathematical proof by hand. If something doesn't pass, you have several options: do some manual work on hints to the automated provers or actual proofs in a proof assistant; monkey around with the code to see if a different structure or algorithm gets it through; put in runtime checks for just the properties you couldn't prove. If you do the last one and keep programming while the solvers run, then the productivity is similar to Design-by-Contract, with the extra benefit that some properties might hold in all cases. You might also get a performance boost by removing unnecessary safety checks via proofs that were successful.
I guess my default answer is: because I have to solve some business problem to get paid, and I never have enough information or time to do it right - or phrased differently: If I took enough time to do it right, someone else would have already done it wrong and moved on.
Worse is better doesn't sound so bad for just a single (small) project.
Where it starts to go badly off the rails is when there's a company culture of worse is better and one quickfix, bandaid solution is piled on top of another.
They then wind up with one or more Leaning Towers of Pisa made up of bandaids, hacks, and quick fixes, and everyone running around like headless chickens putting out fires and trying to bandaid all the failures as the towers are constantly in the process of collapsing.
This leads to an ever-widening spiral of hacks upon hacks upon hacks as company culture, lack of manpower, and penny-pinching never give them the luxury of doing it right, and cutting the Gordian knot by scrapping everything and doing it right from the start becomes ever more impractical.
The whole web is "worse is better". You can say "worse is worse", but that implies you think the web is worse than something that... what, would have sprung into existence if the current web was suppressed?
My problem with the slogan is that it has become a catch phrase for those who either haven't read the original article, or have read it and either have forgotten what it really talked about or never understood it in the first place. As a catch phrase, it is often used to justify shoddy design, or following the crowd rather than doing what is right, or as short-hand for the real claim that our customers are too stupid to either appreciate or deserve high quality products. Why spend the time doing things right, this line of reasoning goes, when we all know that worse is better. You are far better off giving the customer something that you know is less than what you could produce, because they (those simple customers) will think that it is better.
(Be sure to copy-paste to avoid jwz's HN referer-trap)
This line also appears just before that paragraph:
> Of course, worse is better is a much catchier slogan than better depends on your goodness metric
Ultimately, this seems to be applicable even to this discussion, where one goodness metric is "correctness" and another goodness metric is "expediency".
In the original essay, it's simplicity of implementation rather than expediency, but I'm not sure that's all that huge a conceptual difference here (other than for the purposes of a slippery slope argument).
Yes. The fundamental weakness of the OP's view is that there are different types of "right" depending on the correct tradeoffs to make. My main guide for that tradeoff is estimating how often code will be read vs. written vs. executed. All of those combined, plus the time it takes to write the code and test it, make up the total time to optimize.
That way, I'll write exploratory code differently than a prototype, which again is different from a protoduction test, a small production system I'll maintain myself, a large production system with different teams, and software I'll completely hand over or open-source as a library.
Only an experienced engineer will realize that each of those versions of the same functionality will be "right" in the context of its creation.
Better is generally better than worse is better. The passive nature of worse is better postpones problems till later on. The philosophy of "worse is better" works for those who are competing mostly on price, but are you sure you want to spend your career in that category? That can be a tough life, ground down by low-cost bids. Among the consultants I've known in New York, the better ones definitely avoid the "worse is better" philosophy. I'm talking about those consultants who charge in the range of $200 to $400 an hour. These people are hired to implement a "best is best" approach. I'm working on a project like that now, where we brought in one of the best AWS consultants in this region to build out a complex infrastructure, following best practices. We are pleased to see something take shape that we know is very good.
Consultants of this type cannot afford to be passive, because they are being paid, in part, to push past obstacles, including political obstacles, and implement an architecture as close to ideal as possible.
I've done consulting at that rate. There is selection bias in what you describe: the kind of client who will pay that rate tends to already know they want "best is better" which is why they hired the fancy consultant in the first place. I have also consulted at that rate to pre-revenue startups, a much harder sales pitch because you're literally decimating their burn. The pitch here is advancing their business to the next stage as fast as possible. For this, worse is absolutely better as they don't yet even know what "best" is. They do however know what "done, delivered your MVP in two days" is. BTW if any of you want your MVP delivered in two days, consulting portfolio is in profile.
The problems with that position, and how you phrased it, are that it is wrong, sounds condescending, is an easy way out and, more importantly, completely ignores the real issue.
And the real issue is time vs. result. That's the essence of worse is better. It's the well-documented fact of diminishing returns and the exponential cost of additional quality.
You can deliver N features over a given fixed time. Given the curve of quality vs. time, there is a true sweet spot where you maximize value, for whatever criteria you want for the value. Spending too much time on fewer items or too little time on too many will respectively waste time on unnecessary polish or deliver too many items of no value.
There is, but it definitely is NOT at one of the extreme ends of the curve, where experienced bean-counters will negotiate down the engineering effort (and thus money) to the absolute minimum necessary for the project not to implode 2 days after release.
Truth is, at one point in your career you have to start pushing back against those bean-counters. They will never get tired, and they are everywhere. At certain point you take a stand, put your foot down and say:
"No, this will NOT be done in 2 days. It will be done in 5 and thus it won't have to be patched 10 times in the space of a week. If you don't like it, my resignation is ready."
You would be surprised how often that works. Many of the business types and managers are bullies only because nobody ever fought back.
...Or you could take a more diplomatic, but still firm, stance. Like "the overall cost of this feature will be much lower if I work on this for 5 days instead of 2". But IMO that almost never flies, so I became blunt with time and I simply don't care. I passed on five job offers lately for similar reasons and I couldn't be happier about it.
Worse is better could be described as "easiest sufficient approximation."
But if I have a business problem with a small enough epsilon, you might not get sufficiently close with your worse techniques in a reasonable amount of time. And to stretch the metaphor even more, you may hit an asymptote and never supply a solution.
Doing something "right" isn't necessarily always slower. Sometimes it is the only path to an acceptable solution.
>I guess my default answer is: because I have to solve some business problem to get paid, and I never have enough information or time to do it right
That's a red herring.
People working on a business program don't need to encode the "whole information to do it right" (e.g. for the eventual version of the program when every constraint is known).
Just the ones that have to hold at the time they write it, and they already know those -- from their current requirements.
>or phrased differently: If I took enough time to do it right, someone else would have already done it wrong and moved on.
That's also a red herring, unless you don't write tests either.
You mean "oracles" (from the blog post). Just program correctly amirite? eyeroll
If your example doesn't deal with user input, you're talking about a problem of modeling under known conditions, which is trivial and ivory tower arrogance.
>You mean "oracles" (from the blog post). Just program correctly amirite? eyeroll
I don't see the reason for the eyeroll.
You'd have a point if the "just program correctly" meant "just get it right".
But it's not. It's "just use contracts and invariants explicitly defined in your program, Eiffel style to check its correctness".
Which is even more powerful than writing tests.
>If your example doesn't deal with user input, you're talking about a problem of modeling under known conditions, which is trivial and ivory tower arrogance.
Not sure how tests are anything other than "modeling under known conditions". Are the assertions in your test in any way "unknown"?
If you mean fuzzing, you can do that trivially with Eiffel style programs as well.
And the "tests" you do that way are there in the code, available in the debugger, and so on.
Can you give a practical example on how to encode invariants explicitly in a more popular language?
As for contracts, basically 99% of the languages have some form of the Java interfaces, or Elixir behaviours/protocols, or LISP's several ways of doing it. But for invariants, I would appreciate an example if you are willing to provide one.
Well I don't know if it's a "better" answer, but in my case, I didn't know about Eiffel. I had heard the name, nothing else. Maybe I'll try writing something with it now.
I was an Eiffel fanboy in the early 90s, when the only alternatives were Smalltalk and C++. It was obvious to me that garbage collection was necessary for modular systems, and that Eiffel assertions were the sweet spot in formal specification: lightweight enough to be usable in the real world, but formal enough to let you state and demonstrate useful things about your code.
However I was still always bothered by mutation. Everything was fine as long as you didn't change any data structures, but as soon as you added mutation to the mix everything fell apart. You could state an invariant, but if you shared a reference to an internal structure then you had to trust everything else in the system not to change anything.
Then I discovered functional programming in the shape of Haskell. It's still not perfect, but it's fundamentally better than OO.
I have also learned Eiffel and read Meyer's book. The terrible standard library with the deep inheritance trees was my main complaint about the language. All his criticisms of C++ were very insightful. At that time almost nobody was speaking about immutability (GRAAL was called a language without variables). Java was almost revolutionary with its immutable strings.
> The only way to achieve demonstrable correctness is to rely on mathematical proofs performed mechanically.
This is guaranteed to fail. What are you going to prove by your mathematical proof performed mechanically? That the program performs correctly? How do you define "correctly"? At bottom, it is defined by an informal specification in peoples' heads. You cannot mathematically prove correspondence to that, even in principle.
At best, you can prove correspondence to the most-high-level formal specification. But how do you prove that that specification is "what the program should do"?
The next problem with this approach is that it has costs. Having to write a mathematically rigorous formal specification for all the behaviors of a program takes non-trivial time and effort. As others have said, that effort could have been put into other things, like more features. Would we rather have more perfect software, or more feature-rich software? Above a certain level of quality, we'd rather have more features. (Yes, software often drops below that level of quality...)
You can prove performance properties, memory properties, and formal properties, or any combination of the above.
The question of 'what the program should do' is not a mathematical one; it's a philosophical one in the most general sense, and most likely a business one.
Well, yes. But they do that by replicating a lot of functionality that might as well be pushed down a level. Because every program with output that people or other systems rely on needs that level of reliability, and yet very few provide it to a degree where the company behind it would accept liability if it fails.
Usually 'working' and 'reliable' get redefined to 'working with what we've tested it with' and 'reliable insofar as our statistics indicate'. Without knowing for sure that you've really covered all your edge cases you're a typo away from some disaster. Fortunately most software isn't that important. But for software that is that important these strategies, even if imposed from the outside rather than embedded in the language will pay off.
Oh, I am not saying there are no problems. And I don't deny a certain emotional appeal to having safety features provided by the language.
However, great (quality) is delivered with those kinds of features and without, and crap software is delivered with those kinds of features and without. And more importantly, I have seen little to no evidence that having those sorts of features actually substantially changes the statistical distribution of crap/quality software, no matter what we feel should be the case.
People can use these safety features or not, and they can use them well or not. Just like they can use non-linguistic safety mechanisms, such as really good test suites... or not.
Elsewhere, he writes:
> This is where I stop understanding how the rest of the world can work at all.
And so you probably need to upgrade your understanding.
If the world doesn't conform to your understanding of it, the thing that's lacking is almost certainly your understanding of the world. Because it does work.
> And more importantly, I have seen little to no evidence that having those sorts of features actually substantially changes the statistical distribution of crap/quality software, no matter what we feel should be the case.
I have. Our company has done a fairly large number of studies of the internals of companies producing software, and the better a company is at the technical side, the better it does in the long run.
Note that there is such a thing as 'good enough', and once that bar is cleared I'm fine with cutting a corner here or there to meet a deadline. But I'm not fine with categorically ignoring quality and security in favor of short term wins.
We've all seen blue screens, sad macs, JVM segfaults, GCC internal errors, "aw snap" from Chrome, and "have you tried rebooting?" as a panacea. Where is this reliable software you're talking about? Because literally all of it seems pretty crappy to me, and it's frustrating to see people shrug it off.
> This is where I stop understanding how the rest of the world can work at all. Without some rigorous tools I just do not see how one can get such things right. Well, sure, spend weeks of trying out test cases, printing out the structures, manually check everything (in the testing world this is known as writing lots of “oracles”), try at great pains to find out the reason for wrong results, guess what program change will fix the problem, and start again. Stop when things look OK. When, as Tony Hoare once wrote, there are no obvious errors left.
Most businesses put their data in a DBMS, so you have integrity constraints "to ensure database consistency with the business rules or, in other words, faithful representation of the conceptual model of reality."[4] The relational model is both complete and a lot easier to understand than using the predicate calculus directly.
The other shortcoming of any language's invariants is that they live in your source repo. As coders, we often forget that your production data represents contracts with customers, and you have to take account of the real data when migrating your business rules. That's why the M is in DBMS.
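To make that concrete, here is a small sketch (Python driving SQLite; the table and the rule are invented for illustration) of a business rule the DBMS enforces on every client, not just on one codebase:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE orders (
            id       INTEGER PRIMARY KEY,
            quantity INTEGER NOT NULL CHECK (quantity > 0),
            status   TEXT NOT NULL CHECK (status IN ('open', 'shipped', 'cancelled'))
        )
    """)

    conn.execute("INSERT INTO orders (quantity, status) VALUES (2, 'open')")   # accepted

    try:
        conn.execute("INSERT INTO orders (quantity, status) VALUES (0, 'open')")
    except sqlite3.IntegrityError as err:
        print("rejected by the DBMS, not by application code:", err)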
More directly answering the question: most languages can employ property tests [1][2][3] that, much as Eiffel does, allow you to specify the precise invariant and validate it under randomly generated tests.
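For reference, a property test in Python with Hypothesis might look roughly like this (the function and its invariants are invented for illustration):

    from hypothesis import given, strategies as st

    def dedupe(xs):
        """Remove duplicates while preserving first-seen order."""
        seen, out = set(), []
        for x in xs:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    @given(st.lists(st.integers()))
    def test_dedupe_invariants(xs):
        ys = dedupe(xs)
        assert len(ys) == len(set(ys))   # no duplicates remain
        assert set(ys) == set(xs)        # no element is gained or lost
        assert dedupe(ys) == ys          # deduping twice changes nothing

When an assertion fails, the framework shrinks the random input to a minimal counterexample, which is most of the value over hand-picked test cases.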
But, honestly, a great deal of code, even math heavy code, doesn't have nice general invariants. Real functions dealing with business logic and bugs are piecewise and messy.
And then that only covers the handful of functionality that is even remotely expressible in mathematical form. You still have the user interface to deal with, dependency management, your entire deployment story, and all the problems extant between chair and keyboard.
Technology can't substitute for the grunt work of testing, testing, actually using your system and talking to your customers.
> But, honestly, a great deal of code, even math heavy code, doesn't have nice general invariants. Real functions dealing with business logic and bugs are piecewise and messy.
At the edges, the invariants are going to be messy, but it would seem like this gives you ample excuse to define functions with strong, narrow invariants as the core of your application, with input transformation functions at the edges.
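Roughly this shape, to illustrate (Python; the domain and all names are invented):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Order:
        quantity: int       # invariant: quantity > 0
        unit_price: float   # invariant: unit_price >= 0

    def parse_order(raw: dict) -> Order:
        """Edge: messy and defensive, absorbs whatever the outside world sends."""
        qty = int(raw.get("qty", 0))
        price = float(str(raw.get("price", "0")).replace(",", "."))
        if qty <= 0 or price < 0:
            raise ValueError(f"rejected at the boundary: {raw!r}")
        return Order(quantity=qty, unit_price=price)

    def order_total(order: Order) -> float:
        """Core: narrow invariant, trivial to reason about and to test."""
        assert order.quantity > 0 and order.unit_price >= 0
        return round(order.quantity * order.unit_price, 2)

    print(order_total(parse_order({"qty": "3", "price": "9,99"})))   # 29.97

The core function never sees the mess; everything it can assume is stated once, at the boundary.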
Why not? Because provably correct software is not necessarily correct software. One of the biggest classes of defects in programs is missing details in specifications, and provable correctness against the wrong specification doesn't help with that problem at all.
Provable correctness is one technique in software development, it's not the only technique nor even the most important technique nor even a required technique.
Provable correctness can be useful iff you have a formal specification, and iff that formal specification is more likely to be a correct reflection of the requirements than the code + some tests.
While there are times when that's the case, those are rare.
I have heard of property based testing (e.g. QuickCheck). Is the main innovation of Eiffel that the author is referring to that these invariants are written alongside the actual code AND are configurably checked at runtime?
Because ordinarily, you might sprinkle assertions throughout your code, but these are checked only at the point you add them, rather than between statements as I assume Eiffel does.
In that case, I see some benefits in the form of documenting logical assumptions which are actually validated when code is run (also during manual testing).
I could imagine this being implemented in other languages by hooking into statement execution much like a profiler or debugger and executing the checks.
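As a very crude approximation of that idea (Python decorators rather than statement-level hooks; the decorator and every name in it are my own invention), something like this already gives you contracts next to the code with a single switch to turn the runtime checks off:

    import functools
    import os

    CHECK_CONTRACTS = os.environ.get("CHECK_CONTRACTS", "1") == "1"

    def contract(pre=None, post=None):
        """Attach an optional precondition and postcondition to a function."""
        def decorate(fn):
            if not CHECK_CONTRACTS:
                return fn                  # zero overhead when checks are disabled
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if pre is not None:
                    assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
                result = fn(*args, **kwargs)
                if post is not None:
                    assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
                return result
            return wrapper
        return decorate

    @contract(pre=lambda xs: len(xs) > 0,
              post=lambda result, xs: result in xs)
    def largest(xs):
        return max(xs)

    largest([3, 1, 2])   # fine
    largest([])          # AssertionError: precondition of largest violated

Real tools would presumably do far more (class invariants, inheritance of contracts, statement-level checks).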
Does anyone know of such tools for other languages, e.g. Python or JS?
Does anyone have real-world experience with this development technique on an actual project in production?
"The future is here, it's just not evenly distributed yet." ~W. Gibson
We are like lumberjacks who so love our axes that we refuse to even consider chainsaws. Many of us don't even know they exist, or if we do, think of them as esoteric and nearly magical. And then there's the fellow who says, "I'm too busy chopping down trees to learn to use a chainsaw. I've got to get these trees chopped today!" (So learn to use a chainsaw on the weekends maybe? They really aren't that hard.)
I agree with you, however the chainsaws in your example are definitely hard as hell... consensus algorithms, net splits -- and you already have much more than a single programmer could handle (most of us, anyway).
One thing I'd really like to see is a debate between Meyer and his predecessor Niklaus Wirth. The latter was definitely willing to sacrifice some correctness if that means making the system easier to implement and learn. Just compare Oberon and Eiffel, or, heck, "Algorithms and Data Structures" and "Object-Oriented Software Construction".
I'm not sure Wirth has ever sacrificed correctness. He has been willing to sacrifice performance though, that is, if a simpler solution was fast enough he went with that (e.g. linked list instead of hash table for symbol lookup in a compiler). He's also not been afraid to question established wisdom and practice if it lead to simpler implementations (e.g. the file API in Oberon).
I suspect Wirth would not object to codifying pre/post-conditions and invariants per se. But I think he would object to using them as a band-aid around complicated implementations. It's hard to convince yourself that your implementation is correct when the pre/post-conditions and invariants are themselves complicated and hard to understand.
OT: what is the definition of a "root" of a multigraph? In an ordinary directed graph, the most common definition I've seen is that a node is a root iff you can reach every other node starting from it. In the graph in the article, that is true of both nodes 1 and 2, but he says 1 is the only root.
Yes, I get that he thinks his language is the best, and maybe it really is. Constraint-based programming sounds cool. Then again, so does logic-based programming, but I'd wager most people have never written a single line of Prolog, and for good reason. It's hard and slow to write, and to run for that matter.
Maybe his language would be more popular if he wasn't such a snob about it.