I am always surprised when a developer prefers a solution that requires special maintenance or deep knowledge of a platform to make it work over a simple solution that works in all cases. It’s like leaving landmines for those who follow. There are a ton of examples of this, but the most recent one that prompted me to blog about it is the debate over semicolon inclusion/exclusion in JavaScript. For those not familiar, people have “discovered” that there are cases where it is “ok” to leave the semicolon off the end of a JavaScript statement, because the parser will insert one for you. The debate is whether relying on this is a “good practice” or not. (See this EPIC discussion on the topic for a taste…)
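For concreteness, here is the kind of surprise those insertion rules produce. This is a minimal sketch with made-up names (`getLogger`, `getConfig`), not code from the discussion linked above:

```javascript
// Looks like two statements, but a line that starts with "(" or "["
// continues the previous one when the semicolon is left off.
var logger = getLogger()
(function () {
  logger.info("starting up")
})()
// Parsed as: var logger = getLogger()(function () { ... })()
// which throws a TypeError unless getLogger() happens to return a function.

// The insertion rules also cut the other way: a semicolon is inserted
// after "return" here, so the function returns undefined, not the object.
function getConfig() {
  return
  {
    strict: true
  }
}
```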
Unless I’ve missed something, the only practical benefit of this is that you save one keystroke. The real payoff for the person who does it is vanity: they get to show you how “deeply” they know JS. What it signifies to me is that they don’t care about the person who needs to read or use the code later. It also tells me that they would rather spend their energy memorizing and recalling arcane rules than writing anything of consequence.
The only way to write anything worthwhile is to focus fully on the model/algorithm and spend as little energy as possible on the mechanics of typing the code. To be clear, the majority of the energy you spend on any development task is not “pressing the right keys.” (Note that I am not claiming to be the originator of this idea, and YES, Atwood is saying that you shouldn’t be wasting your brain power on typing…)
As a rule, I find the most direct, “works in the most cases” way to do something, which frees up the brain cells for the more important work. Always writing the semicolon, even when the parser would insert one for me, is one of those habits: there is no need to find out later that I mis-remembered the rules. I also include curly braces for every if/else statement (in languages that use them); eliminating any ambiguity about my intent costs just 2 extra characters, which seems like a cheap price to pay.
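To illustrate the braces point, here is a sketch of the edit that goes wrong without them (the function names are invented for the example):

```javascript
// Without braces, only the first statement is guarded by the `if`;
// the second runs unconditionally, despite the indentation.
if (isReady)
  startEngine();
  openValve();

// With braces, the intent is unambiguous, and adding a line later is safe.
if (isReady) {
  startEngine();
  openValve();
}
```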
Another argument always surfaces when one discusses this topic: “If it’s in the language, I can (and should) use it.” I won’t waste much time explaining why this is a totally wrong-headed view. Every single language created to date, natural or artificial, has flaws. The mere existence of a feature is not enough justification for using it. The reverse, however, is reasonable: when a feature is shown to cause bugs, it should not be used. (I leave “shown to cause bugs” as an exercise for the reader.)
There is simply no excuse for choosing a less robust construct over a more robust one when the choice is available. Characters are not in short supply, but brain cells are. Do yourself a favor and don’t waste them memorizing silly rules just to impress your friends.