Software Development isn’t Math — it’s Communication

Many software developers start their career by obtaining a degree in computer science or engineering. These degrees typically start with a very strong mathematical foundation but focus much less on written communication. This is a mistake!

Now, before I get a bunch of nasty comments telling me math is the most important thing in the world — let me explain. Software development obviously involves a significant amount of abstract analysis and requires the ability to understand mathematical and engineering concepts. Without these skills, we end up with brittle software that performs poorly.

But, my experience has taught me that these skills are only useful if a developer also has the ability to communicate their ideas effectively.

The most important skill for a software developer is the ability to communicate abstract concepts in concise, logical and structured ways

In case you missed it, the last half of the above statement is also a good definition of higher-level programming languages. We could still be writing code in 0’s and 1’s. But, we’re not — because binary is much more difficult to understand than higher-level languages. In fact, we even call them — wait for it — ‘languages’.

If a developer’s only intent was to communicate instructions to the computer / operating system — binary or machine code would be sufficient.

Whether they realize it or not, all developers are writing code for three audiences:

The Computer, the Developer and the Maintainer

I know, it sounds like a C.S. Lewis book. But, just like a good writer knows his audience, all developers should consider their audience as they design and write code.

For any given software program, there is an infinite number of ways to design a solution. How you make use of things like composition, polymorphism and other design patterns indicates to other developers and maintainers how you intend the system to be used, modified and extended.

Take a break, look at your code through someone else’s eyes

For any real, meaningful project, your code will never run in a vacuum. Someone is going to modify it. Someone is going to integrate with it. It’s not enough just to make it possible–you need to make it as easy as possible. Specifically, you need to make it easy to extend the right way — and hard to modify the wrong way.

What do you think? Have you known someone who came up with viable solutions that no-one could understand? Are communication skills vital for software developers?



Using FP to prevent ‘Cannot read property of undefined’ errors (A.K.A. Array.reduce rocks)

‘Cannot read property of undefined’ errors are common when using JavaScript or TypeScript.  While some languages, such as CoffeeScript, have an existential operator that checks for the existence of an object, JavaScript currently does not.

This means the following will throw an exception:

const myObject = {};
const zipcode = myObject.homeOwner.address.zipcode;

Of course it’s easy to see in this example.  However, if you receive myObject from a REST service call and it may not return the full object, it’s easy to miss.

The first approach to resolving this is typically:

let zipcode;
if (myObject && myObject.homeOwner
    && myObject.homeOwner.address) {
  zipcode = myObject.homeOwner.address.zipcode;
}
Of course, this becomes very cumbersome and obfuscates what you’re really trying to do. Functional programming techniques can provide shortcuts for preventing these kinds of errors.  This is where Array.reduce comes to the rescue:

let _ = function (instance, path) {
  return path.split('.').reduce((p, c) => p ? p[c] : undefined, instance);
};

let zipcode = _(myObject,'homeOwner.address.zipcode');

Super cool!  This 1-line function splits the path parameter into an array of accessors and uses reduce to iterate through them and either return undefined or the value of the final property.
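To see the behavior end-to-end, here’s a quick sketch (restating the helper so the snippet runs standalone):

```javascript
// Safe-get helper: walk the dotted path, bailing out with
// undefined as soon as any intermediate property is missing.
const _ = (instance, path) =>
  path.split('.').reduce((p, c) => (p ? p[c] : undefined), instance);

const myObject = { homeOwner: { address: { zipcode: '90210' } } };

console.log(_(myObject, 'homeOwner.address.zipcode')); // '90210'
console.log(_(myObject, 'homeOwner.phone.areaCode'));  // undefined — no exception
console.log(_({}, 'homeOwner.address.zipcode'));       // undefined
```

Any missing link in the chain simply short-circuits to undefined instead of throwing.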

Check out the safe-get github repo or npm package for more info, tests and examples.


Functional programming rocks.  Use safe-get


A project template for WebPack + TypeScript (A.K.A. Framework Fragmentation Frags Developers)


One of the wonderful things about the rise of open source is the corresponding increase in framework and library options.  One of the biggest disadvantages is the huge fragmentation caused by that explosion of options.

If you’ve ever played a first-person shooter, you’re probably familiar with the term “fragging”.  Gamers frequently use this term, although it was coined during the Vietnam War to describe the deliberate killing of an unpopular team member.  The number of JavaScript library/framework combinations is staggering (26,000 listed on GitHub) and easily leads to developer brain implosion — hence the title: “Framework Fragmentation Frags Developers”.


After researching numerous options and prototyping with several different frameworks / libraries, languages and build tools (Babel, TypeScript, Grunt, Gulp, Bower, Webpack, Aurelia, React, ad infinitum), I’ve found what I consider to be the best (read: least likely to be dead or deprecated within six months).

Truthfully, this is a fairly minimalist solution as it allows me to customize each project easily.  It includes the following:

  • A testing framework (Mocha + Chai — run-away community favorites)
  • Build scripts (NPM scripts are powerful, cross-platform and minimalist)
  • A bundler (Webpack — relatively easy, powerful, well-supported)
  • A linter (TypeScript — okay, this is more a language than a linter, but close enough)
  • An initial project structure (it’s amazing how much time teams spend arguing about this)
  • VS Code bindings (an excellent, minimalist, hackable editor)

So, without further ado, here’s how to use the template:

  1. Clone the wet-template repository (WEbpack + TypeScript — ‘wet’, get it?)
  2. From a shell / command prompt, run the following:
    1. ‘npm install’
    2. ‘code .’
  3. Create a new git repo in a separate folder.
  4. Copy everything except the .git folder into your new repo.
  5. Modify the package.json in the new folder to account for your git repo, description, keywords, author, etc.

Dude, where’s my code?

wet-template folder structure
  • src – All of your source code (TypeScript files) should be in the src folder.
  • test – Tests, surprisingly, are placed in the test folder (again, TypeScript here).
  • build – The TypeScript compiler places the JavaScript it compiles (by pressing ctrl+shift+B) in the build folder.
  • dist – Bundled (via webpack) JavaScript ready for distribution (e.g. via npm) is located here.
  • example – examples (html pages, etc.) are placed here.  This folder can also be used for hosting an app if desired

Tips ‘n Tricks

Automatic Provisioning (typings)

In Visual Studio Code, press ctrl+shift+B (or ‘npm run build’ from a command line).  This will build the example code so you can ensure everything works.  The first time the build task runs, it executes the provision.js script which installs TypeScript typings and then removes itself from the build script of the package.json file so typings aren’t reinstalled every time you build.

Webpack loaders

I’ve included an HTML loader (commonly used for requiring templates in various front-end frameworks like vue.js).


wet-template is a minimalist project template for WebPack and Typescript — clone it, npm install it and go.






Two Traps all Software Developers Must Avoid (A.K.A. Ned Ludd is the Devil)


Software development is a challenging task–one where the velocity of change is staggering.  Practices, ideas, technologies and languages that are considered best practice today may be relegated to the trash heap tomorrow.

However, at its core, good software development doesn’t really change that much.  That’s because, at its core, good software development isn’t about software development at all.  It’s about understanding a problem domain, finding ways to facilitate an activity in that problem domain and then breaking down the tasks and ideas involved in that activity into loosely coupled, well defined conceptual chunks.

There are two major traps all software developers should avoid at all costs:

Don’t become a luddite

The Luddites were early 19th-century English textile workers who protested against industrial revolution technologies such as the spinning frame and power loom.  Software developers who refuse to investigate new technologies, languages or tools are similar to Luddites.

While it may come as a surprise, I’ve known many developers (particularly in the enterprise world) who have many years of experience in a particular tech stack and refuse to develop in anything else.  The rationales are varied but the crux of their argument always returns to: I can do it the way I’ve always done it and I know it will work.

This is, of course, true.  But the argument is specious.  You can write a modern web-app in assembly but you won’t be providing a very good return on investment to whoever’s paying for the development.  Ultimately, software developers are employed because they provide good return on investment–they create a better product, faster, than someone without their education and experience.

Over the years, we get better at decomposing things and assembling them in more useful ways: we create general patterns (e.g. the factory pattern, the singleton pattern, etc.) that help us identify common ways of decomposing ideas. We create new technologies that allow us to share components more effectively (e.g. NPM, Nuget, Maven, etc.).  And, we develop new languages that help us assemble things more quickly, allowing us to better leverage new patterns and tools.

Refusing to make use of these improvements is tantamount to dereliction of duty.  Exactly which tools, when and how you leverage them depends upon the project’s demands and constraints.  However, it’s your job to stay up-to-date and take appropriate risks to improve the quality of your product.  If you’re not willing to research, learn and experiment, you shouldn’t be a software developer.

Don’t paint the duck

Okay, I’m mixing metaphors here.  “Painting the bike shed” refers to an inordinate focus on trivialities.  The “duck technique” is when a developer throws in a sacrificial feature they don’t mind getting rid of to distract management from the features they feel strongly about.

“Painting the duck” is when a development team becomes hyper-focused on trivial features.  Yes, the color gradient of the help button is important.  No, you shouldn’t spend three months on it.

It’s surprisingly easy to become focused on the wrong things.  This can be due to a lack of clarity from management, lack of communication with the users, poor process or even a well-intentioned desire to make the best product possible.  A product that ships with a 90% solution is better than a 100% solution that gets canceled because it’s late and over-budget.


Keep learning.  Take risks.  Be pragmatic.


Is JavaScript Better than Java, C# and C++ (A.K.A. Why JavaScript is Like a Cockroach)

Okay, let’s get the controversial part resolved right out of the gate.  Why is JavaScript like a cockroach?

  • Cockroaches are practically invincible — they can withstand ten times the nuclear radiation humans can and they can survive for a week if their head gets cut off.
    • Corollary: several of the world’s largest corporations have attempted to kill off JavaScript and none have been successful.
  • Cockroaches are fast — proportionally, they are three times faster than a cheetah.
    • Corollary: JavaScript enables very rapid development.  Its mixed paradigm (functional with object-oriented characteristics), dynamic (duck) typing and enormous developer community all accelerate the work.
  • Lots of people think cockroaches are ugly and try to kill them.
    • Corollary: Identical.

Obviously my use of cockroaches as an analogy gives the impression I don’t like JavaScript.  This, however, is completely untrue (albeit I do dislike cockroaches).  I thoroughly enjoy modern JavaScript and find it to be an extremely powerful language.  One I choose to work with more frequently than Java, C# or C++.

Is JavaScript better than C#, Java or C++?  Obviously this is a nuanced question that deserves a nuanced answer like: it depends on the problem domain, performance and platform requirements, etc.  But, I’m not going to give a nuanced answer.  I’m just going to say: yes.

Why?  JavaScript has several benefits over the other languages:

  • Paradigm Flexibility: JavaScript supports aspects of functional programming.  It also supports object-oriented capabilities through the use of closures and prototypal inheritance.  While the other languages started out class-based and migrated toward functional, JavaScript always supported functions as first-class members and has migrated toward class-based object-oriented syntax.
  • Extreme Object Extensibility: JavaScript supports duck typing, prototype modification and treats all objects as associative arrays which allows for dynamic insertion of functionality and data.
  • Ubiquity: Based upon data from GitHub and StackOverflow, JavaScript is the most actively used language.  This leads to:
  • Cross-Problem-Domain: Thanks to node.js, electron and every browser ever invented, JavaScript has strong support for developing services, front-end web applications and native applications.
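The first two points are easy to see in a few lines of plain JavaScript:

```javascript
// First-class functions: functions are values that can be passed and returned.
const twice = (f) => (x) => f(f(x));
const addTen = (x) => x + 10;
console.log(twice(addTen)(1)); // 21

// Objects as associative arrays: behavior can be attached dynamically.
const duck = {};
duck.quack = () => 'quack!';

// Duck typing: anything with a quack() method is duck enough.
const speak = (animal) =>
  typeof animal.quack === 'function' ? animal.quack() : 'silence';
console.log(speak(duck)); // 'quack!'

// Prototype modification: extend every instance after the fact.
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.norm = function () { return Math.sqrt(this.x ** 2 + this.y ** 2); };
console.log(new Point(3, 4).norm()); // 5
```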

While the points I’ve made above are true, I wrote this blog somewhat tongue-in-cheek.  In the real world, your choice of language and tech stack will depend upon many things: problem domain, risk tolerance, schedule and cost budgets, development team composition, etc.  But, don’t discount JavaScript as a viable option just because it’s got warts.  To use a carpentry metaphor: if your toolbox can only hold one hammer–JavaScript is the biggest, baddest hammer in the shed.


If you could only use one programming language for the rest of your life–choose JavaScript.


REST is dead! Long live Socket.IO!

I’ve been working with Socket.IO recently and have been very impressed with its capabilities and simple API.  I’ve not been impressed with the documentation — it’s like someone at the fortune-cookie printing office decided to write a tech manual.

I’ve been working on a few projects in my spare time that involve real-time updates pushed from the server to the client.  Over the years I’ve become a big fan of REST and its simplicity and flexibility.  But, polling has always sat poorly with me: it’s cumbersome and network-inefficient.

My initial design approach was to use REST for the bulk of the API and merely push out notifications with Socket.IO.  However, once I started using it, I was impressed with its ease of use and flexible nature.  As I dug deeper into Socket.IO’s capabilities, I realized it didn’t just support traditional pub/sub communication.  Through the use of ‘acknowledgements’ and rooms, it also supports RPC-like communications directly between a client and server.

This was when the proverbial light-bulb went off!  If Socket.IO supports distributed function calls along with pub/sub, why have the complexity of two different APIs?  Native WebSocket support exists in nearly all modern browsers (see this chart) and I don’t see any of the modern front-end frameworks (Aurelia, Angular, React, etc.) throwing up any roadblocks to the use of Socket.IO in lieu of REST calls.

In reality, this isn’t just about Socket.IO either.  There are other bi-directional network communication frameworks and standards, such as Crossbar and WAMP, that can support full-featured APIs.  It’s really about a transition from a uni-directional, procedural style of API to a bi-directional API that supports both procedural and object-oriented-ish approaches (dynamic creation of rooms can allow for object-related state to be maintained within the server).

I’m starting work right now on a small library that extends this idea of bi-directional function calls between distributed systems by facilitating dynamic registration of RPC providers (dependency injection for distributed systems).

Obviously REST still serves a purpose and provides benefits such as caching.  But, if you’re creating a seriously interactive application that requires bi-directional communication, it may be worth simplifying your API and improving your network efficiency by using something like Socket.IO for your API.

What do you think?  Are bi-directional APIs built on technologies like Socket.IO and WebSockets the future?



Who Owns the Process (A.K.A. Please Just Let Me Do My Job)

I’ve worked at several places, and each had a different process — usually very different from the others.  I’ve worked under CMMI level 5 waterfall processes as well as scrum agile processes.  Here is what I’ve learned from my experience: “Nothing can destroy a good product faster than a bad process.”

Now, let me guess.  You’re assuming I’m going to rant about CMMI level 5 waterfall’s rigidity and extol the flexibility of Agile.  You’re right.  On the other hand, if you think I’m going to rail on Agile’s tendency to lead to Cowboy Coding and praise the emphasis placed on thoughtful design by waterfall–you’re also right!  This is because:

The process is not important.

Whoa!  He didn’t just say that, did he?  I should clarify–it’s not that the process doesn’t have any impact on the product.  The right process reduces impediments, manages risk, helps maintain appropriate quality and productivity goals and fosters appropriate collaboration.  The wrong process can literally destroy high-performing teams and obliterate good software.

But, fixating on the process is like hiring a librarian to perform open heart surgery and worrying about the color of thread she’ll use to stitch you up when she’s done.

Here is my rationale:

  1. Every process is defined, modified and enforced by people
  2. Every step in the process is implemented by people
  3. Evaluation of the process and its appropriateness is performed by people
  4. Project and product goals are defined by people
  5. Evaluation of the product and its success is performed by people

Notice a pattern?  At this point it’s tempting to say: “It’s all about the people.  Hire the best people.”  This is true, but it’s a red herring–everyone knows this and it’s not especially helpful.  What I’ve learned is this:

“You must have the right people empowered to change the process as they see fit.”

The right people

This one isn’t as easy as it seems.  It really consists of two things: you need to have people representing the right concerns and you need to have people with the right personality characteristics (a topic for another blog post).  At a minimum your process team should include the following stakeholders:

  1. Representatives from the technical team
  2. Representatives from the customer perspective
  3. Representatives from the business / management perspective

The common exclusion of 1 and 2 led to the creation of agile.  The exclusion of 3 leads teams to return to waterfall.  All three must be represented and empowered for a software development team to be efficient and successful.


There are many factors that affect how a team should tailor its process, including requirements volatility, developer proficiency and risk tolerance.  The following chart illustrates how various project factors impact the choice of a process:


If you’re writing code that controls nuclear missile launches with a team of ninth grade gym class students — waterfall is the clear winner.  In the real world no canned process will be a perfect fit.  You will need a customized process that includes some elements taken from various processes.  And this leads us back to our thesis:  The appropriate people (technical, customer and management) must be involved and empowered to change the process as they see fit.

Conclusion (TL;DR)

No process is perfect and yours is no exception.  Get the right people involved in the process and give them free rein to change it as needed.



Troubleshooting Aurelia (A.K.A. Where are my Custom Elements?)

Aurelia is a great JavaScript framework for creating Single Page Apps (SPAs).  It’s especially easy to learn for engineers who are familiar with WPF and the MVVM pattern — the documentation even uses the familiar terminology of views and view-models.

While it’s still in beta (as of today – Mar. 24, 2016), there is a decent amount of support available on Gitter and Stack Overflow, and some official documentation available online.

Recently, I’ve been using Aurelia in conjunction with Cesium to develop a 3-D visualization app.  I’m using Aurelia for navigation and custom components for various forms of selection, while the main navigational view component contains Cesium:


One simple, but potentially frustrating issue I’ve run into is when my custom elements don’t appear in the app.  This frequently leads me into the following train of thought:

  1. Why aren’t my custom elements showing up in Aurelia?
  2. Are there any warnings / errors in the browser console?
  3. Is there a bug in my View (HTML)?
  4. Why aren’t there any warnings / errors in the browser console?
  5. Is there a bug in my ViewModel (js)?
  6. Where are the darn warnings / errors in the browser console?
  7. Is Gulp watch working?
  8. What the heck is wrong with this thing — why aren’t there any warnings or errors!
  9. Why didn’t I become a Doctor like my mother always suggested!?!

After struggling with this train of thought for a while I finally realized I’d missed the following little snippet in my containing element’s HTML:

<require from="my-custom-component"></require>

As you may have guessed from the above rant, this does not generate any warnings or errors.  Instead, you may just end up with an empty component like this:


So, next time you start asking yourself “Why aren’t my custom elements showing up in Aurelia?” make sure you haven’t forgotten a <require> tag in the containing element.  Either that or choose a less frustrating, time-consuming career and go back to medical school.