When is an MVP not an MVP?

All life is an experiment. The more experiments you make the better. – Ralph Waldo Emerson

Jim Collins is great at boiling down deep truths, arrived at through a great deal of study and research, into memorable ideas.  One of my favorites is the “Fire Bullets, Then Cannonballs” idea.

The gist is – you make small investments (a lot of them), trying to find something that works, and then, once you’ve proved to yourself that you’ve found something, then you invest big.

There’s an underlying assumption in that: that you have any kind of idea up-front of how much your particular brand of ammunition will cost.  In software, as we know, this is false…always.

Collins’ tip of the hat to the unknown for the (ubiquitously lusted after) product-market fit is right on.  But, in software at least, the unknowns go a bit deeper.  Bullets can become cannonballs in the process of firing them. 🙂

The idea holds true, though – one piece of wisdom needs to be layered on here.  And it’s pretty straightforward.

As we are firing our bullets – we need to decide ahead of time how much a bullet is allowed to cost.  And when it crosses that threshold we need to make a conscious choice: either continue firing (knowing that it has become a cannonball) or consider ourselves to have fired a failed bullet and move on.

The options here are pretty obvious – the important bit is that we are making decisions intentionally, and not letting ourselves slide into the path of least resistance (which, as we know, makes for crooked rivers and crooked men…and, I would argue, crooked organizations).

As for practical take-aways – it is of the utmost importance to make this decision as quickly as possible.  If we have month-long delivery cycles – we may find that our bullet became a cannonball early on, leaving us with a large chunk of wasted time that we could have spent on more bullets.  Sprint cycles should be as short as you can possibly get away with – as should production release cycles.  And more importantly, both inside the business and out – we should structure the organization so that there is plenty of review, so we know both the size and value of the bullets that we fire.


The ONE Lie Your Computer Science Professor Wouldn’t Stop Telling You.



What is this lie?  It’s not subtle – and in fact, like any great deception, it has been doubled down on over and over again.  So much so that it’s actually carried around as if it were the DEFINITION of truth – the one true way to design software.  And like many of the most successful lies throughout history – it seems viable, even helpful, in the short term but carries with it long-term, bad consequences.

What if I told you – everything you’ve been told about OOP is a lie.

The lie goes something like this: “The purpose of an object is to encapsulate data behind behavior”.  The idea is that we expose the things we want to do, and then implicitly have some backing information with which we accomplish those things.

As with any good lie – it starts with a kernel of truth – viz., yes, exposing only one thing is important, so you shouldn’t expose data if you are going to expose behavior.  The lie begins with the idea that the object – a single object – embodies both.

At a bare minimum, this will always represent a violation of SRP.  No matter how closely related, specific behavior and any data that are related to that behavior represent two reasons to change, and thus two “Responsibilities” in the SRP sense.

So, to say this again generally: the old wisdom about the purpose of Objects being to hide data behind behavior – especially its implication that the same object does both – is a violation of SRP, and is actually more like an OO anti-pattern.

Why should you care?

For all the reasons you’d care about following the SOLID principles in the first place.  Violating them means that your code will be less adaptable and will communicate its intent less clearly.  I’ve personally seen plenty of code that blindly follows this common wisdom – and it bears out the fact that adhering to this lie does us no favors.

But Morpheus’ approach to sharing deep truths like this is very applicable – as with SOLID generally, or TDD, or the Agile principles – one cannot really be told about the Matrix – one must see it firsthand.  So – I would challenge you – find some code that follows the one-object-hides-data-with-behavior anti-pattern.  Imagine modeling it, or even try modeling it, in a way that separates out the responsibilities, and see if you don’t end up with clearer, cleaner, more flexible code.
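To make the exercise concrete, here is a minimal, hypothetical Java sketch – the class names and the discount rule are invented for illustration – showing the same logic modeled both ways:

```java
// Hypothetical example -- names and the discount rule are invented.

// The "lie" in one class: it owns the order's data AND the discount
// behavior, so a change to either is a change to this one class.
class OrderWithEverything {
    private final double subtotal;
    OrderWithEverything(double subtotal) { this.subtotal = subtotal; }
    double totalWithDiscount() {
        return subtotal > 100 ? subtotal * 0.9 : subtotal;
    }
}

// Separated: the data is a plain value with no behavior...
final class OrderData {
    final double subtotal;
    OrderData(double subtotal) { this.subtotal = subtotal; }
}

// ...and the behavior lives in its own object.  Each piece now has
// exactly one reason to change.
class DiscountPolicy {
    double totalWithDiscount(OrderData order) {
        return order.subtotal > 100 ? order.subtotal * 0.9 : order.subtotal;
    }
}
```

Both versions compute the same totals – the difference is purely in how many reasons to change are packed into a single type.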


Constitutional Agility

Twelve score and one year ago, as our nation was born, a huge shift in thinking moved its way into practice with regards to how we structure ourselves as people and ultimately as nation-states.  This was, of course, the culmination of the thinking and experimentation that had taken place prior to this big occasion.  But nonetheless, it marked an important milestone as we put into practice the theory that had been slowly cooking over the preceding years.

The ultimate goal was the good of mankind – the freedom to think, act, and be who the Creator made us to be – which, as it turns out, is also a highly productive way to be – and has led to much prosperity.

We have zeroed-in on a (reasonably) decent system of organizing ourselves at the macro level – government that balances the passions and energies of a massive society, while maintaining order and freedom.  There’s a fractal nature to this though – and as we’ve worked our way down into the smaller scales in the organizations that we work for  – we find that the same impulses that have led to despotic oppression and systematic mistreatment of people are still very much alive and well.  More importantly they remain unbalanced by thoughtfully laid-out systems that direct and harness our in-built not-so-perfect nature.

I have worked at a number of different organizations, of different sizes, industries, cultures and levels of distribution.  And I’ve found that with regards to delivering software – the problems and challenges are very much the same regardless of any of these variables.  And further, the challenges boil down to the EXACT same thing that the founders of the United States grappled with as they attempted to create a sustainable government.

The challenge and tension is this – we know a few things – we know that people should be given as much freedom to act as possible and as much context as is available to handle their affairs.  Our short-term human passions sometimes override our reason and our understanding about these things that we know.  That is, in the short-term, I can be convinced by how I am feeling emotionally that I should hold back context, and dictate specific actions.

I’ve seen the results of this personally.  I’ve seen something which I am betting, because it is based on the same fundamental realities, is an echo of the past.  I’ve seen organizations oscillate between liberty and despotism – to use the modern lingo, between “agility” and “waterfall”.  And it’s very much because of this singular reason – we know the right things we ought to do (providing liberty and context) – but we don’t do them because the passion of the moment can override our reason.

I propose that we begin to rethink how a modern knowledge-work-based organization should be structured.   I always say that good artists borrow and great artists steal.  I propose that we brazenly steal the methodologies and tactics that the founding fathers applied at the macro level and apply it in the micro.

For example, it should be difficult, and should require consensus-building, to change certain fundamental parts of the working method.  One of the most fundamental parts of achieving liberty/agility in software is to ensure that teams are broken down by functionality or value actually provided to the customer, rather than by technology capability (e.g. API, front-end, etc.).  WHAT IF…and hear me out here, because this might sound crazy….WHAT IF moving away from that (or making any change) meant either gaining a 2/3 majority of all employees OR paying bonuses totaling half the net worth of the company?

This would COMPLETELY remove the idea that “hey, short-circuiting things here is really a means to more money for the company” (a short-term passion overriding the bigger picture good).  And if it was REALLY important, it’d still be possible – but force either real serious consensus building…or a bunch of cash paid out to the employees.  Both of which would tend toward buy-in (or at a minimum anesthetization) for any short term pain.

So – perhaps we write a constitution about these things.  Perhaps we appoint people (that can’t be un-appointed on a whim) to interpret if something really is a violation of that constitution.  Perhaps we have a body of employees that are co-equal with executive managers – that can tweak the specifics of the rules.

I don’t know what all the details might look like – this idea only struck me this morning.  We’ve never thought this way in the past – but reality is shifting – more and more the expertise and intelligence about the operation of the business is in the hands of the leaf-nodes in the organizational graph – and so the old approach might be ripe for change.  This shift, coincidentally, is very similar to the shift of expertise and inherent productivity that was taking place just prior to and during the founding of the United States.

Anyway – thanks for listening – in the comments, please share thoughts, rebuttals, or any snide remarks.


Agile Is…

Agile is a moral imperative.

It is a measuring stick – a tool that lets you ask the question of yourself – how well am I treating those I work with?  How well am I leveraging my gifts and the gifts of the people around me for the benefit of the organization and the population that it serves?

Agile is not the path of least resistance. 

After a while in the software industry I developed what I think is a pretty common assumption – that we are slowly becoming more enlightened, and that the following generations won’t even remember what it was like to do software in a top-down, serialized, homogenized way (e.g. using the evil and ubiquitously feared “waterfall” method).  I’ve discovered over the last several years – that every generation rediscovers and reignites their passion for controlling, attempting to eliminate unpredictability and (intentionally or unintentionally) manipulating others for their own benefit.  Every generation reinvents Waterfall.

This is because it’s intuitively right on the surface – and because it takes deep curiosity, and tremendous amounts of earnest energy to get past that intuitive crust to the meaty, counterintuitive center of the issue.  Breaking past waterfall requires special people – especially invested in each other, longing to build something great.

Agile does NOT scale.

Again – scaling is a totally intuitive thing to reach for, but Scaled Agile simply doesn’t exist.  In our eagerness to produce as much as possible as fast as possible (I’m being generous here – the real blinding factor is more likely a deep, abiding greed – looking to make as much money in the shortest amount of time), we want to scale our business rapidly, and by extension anything that makes our business run.

Scaling is the art of throwing more raw resources at a thing and getting more finished product.  People being the raw resource of Agile – you cannot simply throw more people at a software shop and get more agile.

The real question is – “How do we scale our business while continuing to have Agile characteristics (empowerment, full context, rapidly adapting to change, and generally treating people well)?”  The answer is to structure your organization such that all decisions are pushed to the lowest possible level and that context is not removed for the sake of short-term goals.

Agile is hard work.

The answer to the implied question “then what do we do?” is that we think.  We refuse to avoid something merely because of the level of effort it requires.  We decide that we want to treat people well, add value to the greatest extent possible, and build a great organization.  Then we do the hard work of thinking through the myriad counter-intuitive but high-leverage problems that make the rediscovery of Waterfall such an inevitability in a world with no shortage of people looking for the easy way.


Unit Testing, Scala and Functional Programming

The question: What place does unit testing have in the functional or hybrid world?

This is kind of a tough post for me to write – since the whole point is that I’m still a little fuzzy about how the idea of unit testing connects with the new (to me) world of functional and hybrid-functional programming languages.

I am not entirely sure how generalizable the things are that I am discovering.  I come from a straightforward background of enterprise and web-scale software development.  I’ve done it mostly with Java and its ecosystem, with a quick walk through the C# and .NET world a few years back.

Throughout the course of my career – I’ve come to settle on several things that I deeply believe in.  Two of which are pertinent here – that there are a collection of principles that are common to all software development regardless of the particular technology (currently, most precisely spelled out as the SOLID principles) and that unit testing, because of the impact it has on our thought process, results in better, more modular software.

I started slipping down the functional programming rabbit hole without realizing it several years ago.  I went on an interview – and the folks interviewing me were asking me about my exposure to Scala.  I hadn’t even heard there was a Scala 🙂  The job didn’t end up working out – but after googling Scala, I slowly fell in love with it.  I’m still not any kind of functional programming master – but the ability to think in terms of classes and objects, while beginning to leverage higher-order functions and all the goodness that functional programming brings to the table was and is incredibly inspiring.

Scala has been called a tasteful hybrid.  I like that.  Incidentally, as I’ve continued to stumble down the rabbit hole, I’ve started to become really excited by Haskell.

At any rate, I’ve also come to believe that there is a Great Convergence happening.  A convergence between Object Oriented thinking and Functional thinking.  We tend to think of these things as mutually exclusive – but I would suggest that not only are they NOT mutually exclusive – the thinking behind both is zeroing in on the same thing.  They’re zeroing in on a set of underlying principles about how to write well-structured software that speaks to the human reading it as well as it does to the machine that is executing it.

The SOLID principles that I mention above are perhaps not a perfect representation of the actual underlying principles – but they are definitely the best and closest articulation currently available.

Just to address one problem that I believe obscures our vision here – when we think of FP being at odds with OO, I would suggest that we’re actually thinking of the Imperative style that a lot of OO practitioners leverage as a matter of habit.  But OO programming does not imply Imperative programming, which is the thing that is *actually* at odds with FP, and not necessarily converging on the same set of principles.

As an example – one thing that people think of as a commonly accepted OO practice, but that actually violates SOLID principles, is the idea of having a class that encapsulates BOTH data and behavior.  This is ALWAYS, necessarily a violation of SRP.  But when folks make the argument that OO is going a different direction than FP, they often cite examples of this.  Those that are seeking to understand and apply the underlying principles that lead to code that speaks to the reader and not only the machine, are not writing classes like this – they are writing OO code that is converging more and more into the space that FP is at or is heading to (I have to be careful here – I’m still a bit of a FP neophyte – so I want to be careful about speaking too strongly for any portion of the FP community).

So stepping back…

The question: What place does unit testing have in the functional or hybrid world?

So I’ve tried several approaches and thumbed through a number of GitHub repositories to try to gauge where things are at. And I have yet to come across anything particularly definitive.

Taking a couple more steps back – throughout my career in the mainstream OO world (again, primarily Java with a short stint in C#), I have observed that Unit Testing, for whatever reason, leads developers to write more modular code that more closely follows good underlying software principles (e.g. SOLID).  And because of this, the code is easier to understand and to think about.  I do not believe this needs scientific rigor or detailed data to prove – for me, it’s a matter of honestly assessing even a small degree of practical experience.

I can see no reason why the move from classes as the carrier of meaning to first-class functions as the carrier of meaning would change this effect at all.

So this being the case, as I dug into Scala, I began to try several approaches to bringing Unit Testing to bear…

As we’ve said a couple of times here – Unit Testing applies pressure toward more closely meeting those underlying software development principles.  One of the principles that gets a lot of play, since it has historically required sophisticated frameworks to meet in the Java world, is the Single Responsibility Principle with regards to dependencies.  That is, in order to isolate the choice and creation of dependencies, purpose-specific frameworks like Spring or Google Guice were needed.
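The kind of isolation in question can be sketched without any framework at all.  Here is a hypothetical Java example (the names are invented) of the hand-rolled version of what Spring or Guice automate – the object declares what it needs, and someone else chooses and creates it:

```java
// Hypothetical sketch: isolating the choice and creation of a dependency.
// The service depends only on an interface; the caller decides which
// implementation to construct -- the decision frameworks like Spring
// and Guice automate.
interface Greeter {
    String greet(String name);
}

class FriendlyGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name + "!"; }
}

class GreetingService {
    private final Greeter greeter;   // chosen and created elsewhere
    GreetingService(Greeter greeter) { this.greeter = greeter; }
    String welcome(String name) { return greeter.greet(name); }
}
```

The approaches I tried in Scala – Guice, the cake pattern, implicits – are all different answers to the same question this sketch poses: who gets to construct `greeter`?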

So my first step was to pursue that isolation directly.  I tried Guice, then I tried the cake pattern, then I customized the cake pattern to be as minimal as possible, then I tried implicitly passing around dependencies as needed…getting more and more minimal as I discovered the implications of the FP constructs available in Scala.

And then at some point I even started asking myself why I even needed a class – if I’m really following SRP, a class seems to ALWAYS be too much….though, it provides a good organizing structure, much like a package.

My production code seemed to proceed in the same direction – getting smaller and smaller.  More meaning expressed in fewer keystrokes.

It didn’t click with me at first – but this has a massive impact on the meaning of unit testing.  Ten lines of for-loop code, with variables to track the iteration, logic to perform the iteration, boundary checks and of course actual business logic can be boiled down to a single line of code with functional constructs.  A single line of code that only consists of the new meaning that you’re looking to express.  This is a big deal.
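For comparison’s sake, here is roughly what that collapse looks like in Java-with-streams terms (a hypothetical example – the business logic is invented): the same meaning, first buried in iteration machinery, then expressed on its own:

```java
import java.util.List;

// Hypothetical illustration: the same business logic -- summing the
// squares of the even numbers -- written imperatively and functionally.
class LoopVsStream {
    // Imperative: iteration variables, boundary checks, and the actual
    // business logic all tangled together.
    static int sumOfEvenSquaresLoop(List<Integer> xs) {
        int total = 0;
        for (int i = 0; i < xs.size(); i++) {
            int x = xs.get(i);
            if (x % 2 == 0) {
                total += x * x;
            }
        }
        return total;
    }

    // Functional: only the meaning we want to express survives.
    static int sumOfEvenSquaresStream(List<Integer> xs) {
        return xs.stream().filter(x -> x % 2 == 0).mapToInt(x -> x * x).sum();
    }
}
```

When the production code is the second form, a unit test of it is very nearly a restatement of the line itself – which is exactly the shift in cost/benefit described above.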

My Java unit testing practice was to always have a method doing one atomic thing – and to unit test every method.  In the Scala world – this tips the balance, even for a practiced unit tester, toward too much investment for the return in structural clarity.

The question: What place does unit testing have in the functional or hybrid world?

So my conclusion here is two-fold:

  1. We don’t really know yet – but we’re working on it.
  2. As with everything in FP, it’s a FAR more fluid, artful, subjective and personal thing than it was in the more structured world of the last generation of languages.

Also – I’d like to highlight, if the tone hasn’t come through in this post, that this is something of a cry for help.  If you have any additional insight on this – please email.  I’m very interested to see where this starts to land.  Or, if it already has landed and I’m just not aware of it – that would be great to know as well.




How To Make Software as Fun as Possible

Software engineering should be fun – really, really fun.  It is a totally engaging activity that allows us to express our individuality, work together with others, and create something that people find useful and at the same time emotionally uplifting.  Software is art.  Art is fun.

When software ceases to be fun, it means we are doing something wrong – something we should correct as quickly as possible.  Because, aside from being a good thing in and of itself, fun leads to faster delivery of higher quality software.  Lack of fun, well, leads to the opposite.

One of the big ways we get in our own way in this regard is by creating Cargo Cults.  Cargo Cults are a major fun-reducer, and they take a great deal more work to avoid than one might think.

A cargo cult happens when otherwise well-intentioned people choose to use some working method for no other reason than that someone else has told them it is a good thing.  The way to avoid participating in a cargo cult is to really understand the underlying principles involved, starting with the problem that is being addressed.  In fact, many cargo cults start without even bothering to address a problem – the solution becomes so popular that people forget that one day in the distant past it was proposed as a solution to a particular problem.

Two of the biggest buzzwords in the industry in the last year (or two), DevOps and Microservices are ripe to be used as fun-sapping cargo cults.  Unexpectedly, they’re both solutions to the exact same problem.

So – say you read in your favorite development blog that DevOps and Microservices are Good Things(tm) and that all the cool kids are doing them.  How do we keep that from becoming a cargo cult and thus sapping the fun out of software, ultimately causing quality, morale, and throughput problems throughout your organization?

Step 1:  Make sure you understand clearly, for yourself, what problem or set of problems these solutions are useful to address.

Understanding clearly means being able to articulate it easily in a sentence or two.  Simplicity betrays understanding.  If you can’t express the problems simply – you don’t understand them well enough.

As I said earlier, in the case of DevOps and Microservices – we have two sides of the same solution-coin.  The problem that we are seeking to solve is how we scale while keeping an aligned delivery team focused on end-user value.  Every software delivery team starts out delivering what I like to call “Devopsy-Microservices”.  It’s what’s natural when you have a small group of people creating software – you focus on a small set of features, delivering them as a group out to an end user.  Everyone on the team has their head around the entire stack, and probably everyone can move the entire stack to production with a few keystrokes.

As we scale our delivery teams, there are two big mistakes that are super tempting to make, and that always seem like the intuitively right thing to do.

The first mistake is the perceived economy of scale of grouping together like skills.  The unintended consequence here is that the skill-based teams, while maybe slightly more efficient at their particular tasks, optimize for their skill-specific work at the expense of the value delivered to the end user.

The second mistake (which is a result of the first) is that we layer our software much like we layer our teams – and thus any single feature necessarily requires multiple teams to deliver.  This obviously dilutes focus.

Step 2:  Make sure you understand clearly, for yourself, if you even HAVE the problem(s) that you believe the solutions solve.

If you are a two person development shop, and you write code, and you have access to make changes to production, how you think about DevOps and Microservices will be completely different from someone who has already scaled the wrong way.

You can simply choose to scale the right way as you add people…..if you add people.

Step 3:  Make sure you understand clearly, for yourself, what the underlying principles are that are in play and how the solution is addressing them.

There are a lot of complicated, counterintuitive effects related to how you might scale a software delivery team.  The very relationship between Microservices and DevOps is a prime example of this.  As I said earlier – they’re really solving the same problem – keeping a delivery team aligned and focused on end-user value.

The reason that they are both solutions to this is due to a counterintuitive law of software development called Conway’s Law – which, paraphrased, basically says that software mirrors the organization that creates it and vice versa.  DevOps seeks to align the organization and Microservices seeks to align the software. And again, both of these assume you’ve made some bad decisions already as you’ve scaled, and have to wind them back.

Step 4:  Take the long view.  And be persistent.

Finding the underlying principles and then steadily applying them is not a microwave thing – it’s a crockpot thing.  You won’t notice any instantaneous, dramatic, short-term effects.  What will happen is that as you make good choices along these lines, your software and delivery flow will slowly but markedly improve.

And this is definitely the case in Devopsy-Microservices.  If you’re way down the path in a (so-called) monolithic environment, with functionally siloed teams, it will take a lot of hard, diligent work to get back on to a good path.  If you are a small startup – and you’re just “trying to get product out the door” – I’d urge you to set yourself up for future success with the long, arduous work of scaling well rather than piling hack upon hack.

As with any accomplishment – the real way to win is to use time as your ally, steadily adding value rather than looking for the short-cut-promising Cargo Cult.  The Cargo Cult will tell you not to think about pesky, complicated things like underlying principles, but to just go ahead, click your heels together three times while throwing the magic beans into the ground before riding off into the sunset on your rainbow-colored unicorn.

I say – let’s forget the unicorns and do the hard work of writing some incredible software!



You’re Not Thinking 4th Dimensionally

You’re not thinking fourth dimensionally. – Doc Brown

We can all be accused of this at times.  I surely can be.

Thinking three dimensionally is easy – we can view great works of art, sculpture, buildings, organizations, and even software easily. In the present, they are pinging our senses, giving us immediate impressions, touching our consciousness directly.

Thinking fourth dimensionally is more difficult – we must use our memory and/or our imagination to extrapolate the way things were in the past or how they could be in the future.

Further, when we do extrapolate from the past into the future – we tend to only see two states – how things were in the beginning and how they are in the end.  The great masterpiece of musical achievement was first nothing – then it was a masterpiece.  The great sculpture was first a piece of rock and then it was David.  The great man was a boy and then he was the legend.

To truly be fourth dimensional thinkers, we have to realize that whenever anything great is built, it is built slowly, over time, with a long series of intentional steps – slow, intentional steps that are discovered through the course of the creation.

Great software practitioners are great fourth dimensional thinkers with regards to the software they build, the organizations they exist within, and most importantly with their own character.  That is – as a software engineer we are constantly, with a series of small, intentional, discovered-as-we-go steps, building three things:  our character, our organization and our software.

The antagonist from Star Trek: Generations was a rather surly alien, obsessed with escaping time, who told Captain Picard (who had just lost his brother and nephew to a house fire) – “They say – ‘Time is the fire in which we burn’”.  Nice, right?

I disagree – I say, time is the fire in which great works of beauty and accomplishment are forged!

The catch though is that the two faculties that we use to think fourth dimensionally are both muscles that we must exercise regularly – memory and imagination.  They don’t develop on their own.

So will you commit with me to exercising our memory and imagination in service of making ourselves better practitioners of the art of software, and making our software more amazing?


What is a Unit Test



When I look at great works of art or listen to inspired music, I sense intimate portraits of the specific times in which they were created.

– Billy Joel

We’ve discussed the philosophy before – Unit Test writing is not just about checking that some of your code works.  It *is* that, to be sure.  But it is much more – and it is much more because of the discrete, low level of abstraction that the name “unit testing” implies.

Unit testing is checking a single method, on a single type.  We should completely divorce this from any kind of test of functionality – these tests are specifically checking to see that an object behaves according to its contract.


This is ridiculous, you say.  Why would any sane person forget about functionality and check only that her set of objects behaves according to contract?!  Madness!

Well – for starters, my dramatic friend, we are only forgetting about functionality within the context of unit testing.  Automated acceptance and automated integration tests to a lesser extent are both focused on functionality, and are both very important.

Unit tests, with their focus on the integrity of the object model and the integrity of the thinking that has gone into constructing it, provide two big advantages that tests focused on functionality do not.

#1 – They provide pressure in the direction of good design.

The complexity of a test grows disproportionately faster than the complexity of the production code that it is testing.  That is to say – if we have gnarly production code – our test code will have to be significantly worse in order to deal with it.  Three levels of branching containing five nested loops is not something that anyone is going to be eager to unit test.  And really, what’s more of a straightforward, testable unit than one that is doing a single thing – that is, one that is adhering to the “S” in SOLID, the Single Responsibility Principle.  And being able to cleanly mock any dependencies – obviously an important part of testing a single method of a single object – drives toward Open-Closed, Liskov Substitution and Dependency Inversion.
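A minimal, hypothetical Java sketch of that pressure (the types and rates are invented): because the dependency is cleanly mockable, the check exercises one method on one type and nothing else:

```java
// Hypothetical example -- the types and rates are invented.

// The single responsibility under test: formatting a converted price.
interface ExchangeRates {
    double usdTo(String currency);
}

class PriceFormatter {
    private final ExchangeRates rates;
    PriceFormatter(ExchangeRates rates) { this.rates = rates; }
    String format(double usd, String currency) {
        return (usd * rates.usdTo(currency)) + " " + currency;
    }
}

// A hand-rolled mock: a fixed rate stands in for the real dependency,
// so an assertion against format() checks PriceFormatter's contract
// and nothing else -- no network, no real rate source.
class FixedRates implements ExchangeRates {
    public double usdTo(String currency) { return 2.0; }
}
```

In a real suite the assertion would live in a JUnit test, but the shape is the same: stub the dependency, check the contract.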

#2 – They separate the composition of the system from the programming of its components.

If you have a set of well-thought-out objects with a suite of unit tests verifying their contracts, composing your system into the functionality that you intend to deliver becomes a different kind of activity.  It’s meta-programming – or programming with a new language made up of the objects that you’ve created.  You no longer have to worry if the constructs of this new language have integrity – you just have to use them.  This language is cleaner, more domain specific, and simpler than the general programming language that your units are based on, so the complexity that you are having to deal with is far less.


Many a struggling rock act has faithfully played clubs and small venues to earn enough money to make a demo recording (less expensive, moderate fidelity) of their music to pitch to music companies.  In hopes of really making it big with their music they’ll record these demos, play them for anyone who’ll listen and within months, will have listened to their demo countless times.  Every so often a record company will “sign” one of these bands – and provide the resources to record their music in top facilities with top engineers and producers.

I’ve read that a funny thing happens sometimes – bands will be so attached to their cheaply made recordings that they find it hard to be happy with the objectively top-notch (though undoubtedly different) recordings that were financed by the record company.  This effect is called “demo-itis”.

Software engineers are just as prone to get demo-itis as musicians.  We don’t like to admit it but we fall in love with sub-optimal code – I would argue almost as soon as we write it.  Suggesting that we change our code can feel like a personal insult at times.  …hmm, maybe I’m the only one.

Anyway – writing unit tests after production code has already been written almost invariably results in a need to refactor – that is, to change our precious code.  This is a significant, though altogether avoidable, pressure against writing the tests at all.

The way to avoid this pressure is to just write your test first.  Your production code will then evolve with your test – and there won’t be a jarring need to change your code abruptly – that is – demo-itis won’t have a chance to set in.
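A minimal test-first sketch might look like this (the `slugify` function is a hypothetical example, not something from the text above) – the test is written first, and the production code grows to satisfy it:

```python
# Step 1: the test comes first and defines the contract we want.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces  ") == "spaces"

# Step 2: only then is the production code written to make it pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
print("tests pass")
```

Because the code was shaped by the test from the start, there’s no precious pre-existing implementation to defend.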

So to recap – a unit test isn’t really just about checking “that code works” – it’s about helping us achieve simplicity through good design and appropriate handling of complexity.  And while it’s not impossible to get there otherwise – we do ourselves a disservice with regards to simplicity by writing after-the-fact tests.

And, as we know, simplicity is what incredible software is all about!!



Basic Skills: Delivery Method


On this – all depends. Only a fully trained Jedi Knight, with the Force as his ally, will conquer Vader and his Emperor. If you end your training now – if you choose the quick and easy path as Vader did – you will become an agent of evil.

Many necessary basic skills are not entirely obvious while we are in the midst of our training.  This is true in any discipline – I was just lamenting to a long-time friend about how much I wish I had understood the necessity of practicing my trumpet when I was beginning – as it’s shocking how quickly one improves if one takes the time to practice.  She politely informed me that I *had* been told – *repeatedly* of the value of spending this time.  She was right, of course – but I can attest that it was not entirely obvious at first how valuable the skill of practice is in music.

The skills of course become even less obvious when there is a princess being held hostage by an evil overlord – or when the VP is camped out next to your desk asking, “are we there yet”.  Building incredible software is difficult – taking the easy path will result in us falling short.

A basic skill that software engineers can miss out on is understanding Agile methods – and intentionally growing in their knowledge and use of them.  What is Agile?  I tend to abhor buzzwords – but in our industry we seem to generally understand that there is some kind of evil out there that needs to be dealt with, and the ideas that tend to be associated with “Agile” have appeared to work to a certain extent.

To be more specific though – I would define Agile as the extent to which an engineer or engineering shop successfully balances the Four Tensions of Software.  The evil that we tend to see out there in the world occurs when we give up on one of these tensions and extremely common (though still unfortunate) structures and behaviors begin to take shape – all of which end in disrespect for people, low morale and less incredible software.  This is one reason I like to refer to Agile development more generally as “Humane” development.

The Four Tensions of Software

Here are the Four Tensions of Software, and the results of folding on either side of them:

1) The Tension of “The Whats and The Hows” – when making a decision about what to build, how you build it can inform but should not drive the decision; when making a decision about how to build a thing, what you are building can inform but should not drive the decision.  Folding and letting The Hows drive The Whats means you will miss important business opportunities and/or build things that are totally unnecessary.  Folding and letting The Whats drive The Hows results in lower-quality, less-considered code that will rot quickly and become increasingly difficult to maintain – this will ultimately end in missing business opportunities and spending big chunks of time cleaning up (which will mean yet more missed opportunities).

In both situations where we fail at maintaining a balance – micromanagement tends to occur.  This is because the in-depth skillsets involved in driving The Whats or driving The Hows almost always necessitate that these are different people.  So, for example, if The Whats drive The Hows – the people behind The Whats hand the people behind The Hows task-specific instructions – which, by definition, is micromanagement.  And micromanagement communicates mistrust – and wastes a lot of precious time.

2) The Tension of “Delivery and Engineering” – when making a decision about order of delivery, the engineering can inform but should not drive the decisions.  Folding and letting engineering considerations drive delivery ordering means we are potentially delivering less important software first – this means potentially missed business opportunities.  Folding and letting delivery considerations drive engineering decisions means lower quality software that will rot quickly and become increasingly difficult to maintain – ultimately this means missing business opportunities and spending big chunks of time cleaning up (which will mean more missed opportunities).  And for the same reason as The Tension of The Whats and The Hows – folding on either side of this tension will result in micromanagement.

3) The Tension of “Progress and Adaptation” – a sense of progress – or momentum – is a gut-level, intuitive understanding of the state of the collective accomplishment that an organization is pursuing.  This is incredibly important for everyone – but is especially important for folks tasked with the product’s vision – and for the organizational leadership.  If this sense is missing – even if progress is being made – it leads instinctively to a need to inspect – to aggressively find out what is happening – and to control.  This instinct is particularly strong in the kinds of individuals who drive product vision – or who provide organizational leadership.  Over-inspecting and over-controlling is micromanagement.

Creating software is…well…creative.  When we build software – we never know what we’re going to have to do to make it happen – and further, our users don’t know what they want (fully) until they start to use something.  So – adaptation is *always* necessary – if an organization clamps down on adaptation in order to create a sense of progress for itself – well, all sorts of ridiculous things can happen – not the least of which is writing software that is in no way a match for the need it was meant to meet.  And if an organization considers itself flexible and allows for infinite adaptation with no need to show progress, well – the sense of progress evaporates – and folks will either start to feel hopeless about their ability to deliver, or as we went over earlier, they’ll start to be more aggressive about inspecting and controlling (that is, micromanaging).  Neither of which is a particularly good morale generator.

4) The Tension of “Focus and Engagement” –  Writing software is sort of difficult.  One of the primary reasons for this is the large collection of details that need to be kept in short term memory at any given moment.  For this reason, focus is rightly a precious commodity to developers.  Focus, however, is easily broken by simply changing the set of details that we feel we need to keep at hand – but worse than that is having to switch from creative-and-detailed mode to dealing-with-people mode.

Dealing with people, however, is an important part of creating incredible software.  People define what need we want to meet, people help to define how we’ll solve a problem, people ensure our work is up to snuff.  People, people, people.  I like to joke that software would be so much better if it weren’t for all the people.  This – though – is just ridiculous – because it is for people that we make software.

So let’s restate the problem – we need to focus while engaging with people, though engaging with people tends to destroy focus.  Easy enough to understand.

So – to fail on the side of focus: if we follow our instincts as engineers (at least they are my instincts) – we crawl into a hole with a couple of FogBugz or Jira tickets – and don’t emerge until our work is perfect and complete.  The problems this might create are obvious (and we’ve probably all tripped on them) – we have no idea if what we are creating is still appropriate, because the need might have changed 15 times since we started; we have no idea if it’s accurate (because we didn’t engage anyone to test); etc.

If we fail on the side of engagement and create a lot of formal structure “forcing us to communicate” – we have a bunch of meetings that destroy our focus and we don’t get any work done.

Failing to balance any of these “Four Tensions of Software” leads to bad things – mostly micromanagement and bad software…and if carried to extremes, failing to balance them can lead to death marches, mounds of spaghetti code that stack all the way to the ceiling and pervasive mistrust.  Maintaining balance with these Tensions is what Agile is all about.

The Solution

Luke learns to use the Force

…an agent of evil? Really? Are you sure?

Organizational dynamics happen in real-time.  You usually don’t get to stop and think for very long about how you’re balancing a particular tension.  You don’t get to deliberate precisely how to best manage balancing Focus and Engagement or The Whats and The Hows and then act based on that deliberation.  Many times it’s just a matter of seconds.

It’s like fighting someone or improvising a jazz solo – you don’t get a chance to plan out your next response to your opponent’s attack – or to the piano player’s last riff.

So the ideal solution here is to understand these tensions as deeply as possible and have a set of rehearsed responses that balance them in every possible set of circumstances.

Just like that fight or that jazz solo though – these principles are simple on the surface but have an unfathomable number of permutations of circumstances that we have to be able to handle.

The *Real* Solution

This is really where software is a lot like other arts – and like other arts there is a pattern we can follow to pursue mastery…the pattern goes something like this:

1) Find people who know what they are doing – watch what they do.
2) Find out if there are specific techniques that you can just copy – to help you learn the underlying principles.
3) Use these specific techniques and anything else you can find and do real art.
4) Do whatever you can to learn as much as you can about the underlying principles.  Read, inquire, think.
5) Teach someone else.

This all applies directly to software….some advice about each of these steps:

1) Look for the best people you can find – as always, don’t assume because someone is a practitioner that they are a skilled practitioner.
2) I recommend the Scrum framework as a good set of training wheels here – there is a lot of literature out there about it – and it really hits the mark in a lot of good ways.
3) Even if you’re not perfect – just DO.
4) Find any information you can – internalize it – think about it – bounce it off of others.
5) Everyone wants to know how to make more incredible software – there should be no shortage of folks available for you to teach.  But make sure you have enough depth to really add value to whomever you are mentoring – wasting peoples’ time is a big no-no.

Delivery method is an oft-overlooked basic engineering skill.  It’s also a thing that will take a lifetime to master – so it’s best to get started today.  Invest in yourself – find resources, spend time working on projects where you are able to learn more.  The more we do this – the more incredible software there will be!!

Here’s to making more incredible software!!

The Craft of the First Creation



Jobs aimed for the simplicity that comes from conquering, rather than merely ignoring, complexity. Achieving this depth of simplicity, he realized, would produce a machine that felt as if it deferred to users in a friendly way, rather than challenging them. “It takes a lot of hard work,” he said, “to make something simple, to truly understand the underlying challenges and come up with elegant solutions.”
Walter Isaacson

Stephen Covey, in his uber-important “The Seven Habits of Highly Effective People” lays out a general principle that is as true in Software Development as it is in any other craft. The principle I’m referring to is that of the “First Creation”. You always create a thing twice. The first time, you create it in your mind, the second time you create it out in the world.

This principle is one that is very poorly understood by software developers. The reason is likely the lack of friction that exists between our ideas and getting a representation (however crude) of them out into the real world. That is – with minimal explicit thought we can start typing into our IDE – and get the Hello World version of whatever we have in mind to create in very short order.

Unix - I Know This

We’ve all seen this – she sits down and immediately is able to accomplish things without really thinking things through. Though in her defense – there were man-eating dinosaurs after her.

And really – isn’t that how we love to start working on a project – we take pride in being able to sit down at a computer and start typing – and magically things begin to work. Not only do we love the praise and the instant gratification that comes from this – even on our favorite movies and TV Shows, this is how computer work happens. And of course – it’s really the obvious way to begin when we first learn to code.

So what we end up doing is a very quick, very intuitive “First Creation” – and then riding off like Code Cowboys into the sunset, sure that we’ll save the world.


There’s a problem that comes from this though. A quick, intuitive first creation gives us only an intuitive, vague understanding of the problem space. If we haven’t explicitly thought through all the road bumps we are likely to run into – we will only have a gut feel for them.

Intuition is a powerful thing – and our ability to solve problems without a complete understanding of them is one of the things that is really cool about being a Human. However – the fact that you CAN solve a problem without understanding it entirely does not mean that you will solve it in a particularly good way.

Practically speaking – if you don’t spend some time *explicitly* thinking through a “First Creation” – you will invariably start coding like crazy only to run into many, many decisions that catch you by surprise. You will be able to solve them – but because you didn’t think clearly through things to begin with – the decisions will undoubtedly compromise your structural vision (which again, you’ve made implicitly).

The end result is – you end up with code rot long before you even get anything into production.

The State of Things…

Is the world going to end if you don’t put in the effort to perform a “First Creation”? No.

Will you limit your craft and fall short of your potential? Yes.

The systems you write DESERVE to be elegant, well-crafted works of art that not only perform the function for which they were created but also clearly communicate your intent to the developer that comes after you. Even if that developer is you – don’t you want to inherit elegant, well-structured code that takes minimal effort to understand?

Or if you like thinking in terms of negatives – the warning I’ve always heard is – code as if the developer who is going to follow you is a murderous madman who knows your address.

The Solution…

Part I

So how do we do this first creation? Am I suggesting months of documents and powerpoints to ensure we have exactly the right thing before we begin to code – NO!!! IN NO WAY AM I SUGGESTING B.U.F.D. (Big Up Front Design).

What I am suggesting is explicit forethought – I usually do this with some scratch paper and a pen – or with the ubiquitous white-board. Write out an object model – ensure that your objects are abiding by SOLID principles – and think through all the interactions until you feel you are comfortable and have a handle on everything within the scope of what you are building. Then start to decompose … then, and ONLY THEN – Code.

This could take minutes – or it could take a couple of days – but every developer involved in actually writing code should be comfortable with things prior to coding. Will it change along the way – certainly. Should you be explicit in your “First Creation” with regards to solutions to any changes – yes.

Part II

The first section here discussed having a First Creation for the whole feature you are creating – another aspect of the First Creation in software development is at the granularity of the actual code. When you create a class or a method or even a for loop – you should explicitly think it through before writing code. This would be obnoxiously difficult if we didn’t have a technological tool to make this possible.

That technical tool is called the Unit Test. When we write a unit test – we are creating client code for our actual production code. This forces us to switch perspectives to think about what it is we are creating and how it completes its mission. This change of perspectives allows us to quickly and with minimal context shift do a First Creation for every bit of code we write.
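As a sketch of this idea (the `Cart` object below is hypothetical, invented for illustration), writing the unit test first means writing the client code first – which forces the design questions to be answered before the implementation exists:

```python
# First Creation: the test is client code, so writing it first makes us
# design the object's public face - how is a cart created? how are items
# added? what does a total look like?
def test_cart_api_design():
    cart = Cart()
    cart.add(name="book", price=12.50, quantity=2)
    assert cart.total() == 25.00

# Second Creation: the implementation, written to honor the design
# the test already pinned down.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price, quantity=1):
        self.items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self.items)

test_cart_api_design()
print("design verified")
```

The perspective shift happens in those few test lines: we are users of the API before we are its authors.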


Creating elegant, simple (but not simplistic) solutions to the problems we face is hard work. It’s ultimately far more satisfying (and profitable) than hacking code together quickly and throwing it out the door. If you are up for it – if you are up for doing the work to intimately understand your problem space and deliver amazing solutions – well you are well along on the way to creating more incredible software!!

And the world is a better place when we create more incredible software!!

Happy Coding!!