Killing Software Development Capacity, Part 3: Superficial Thinking

In the previous installment of our series on how to kill your software development capacity, we discussed the danger of spending all your time thinking about software without actually delivering any.

It is important that we also look at the opposite side of the tension between thinking and doing.

As we said in the previous piece, making rapid, intuitively driven decisions can be a powerful way to make progress. In a lot of cases it is perfectly effective, since human beings can rapidly perform with a fairly high degree of accuracy without explicitly thinking through what they are doing.

In his biography of Steve Jobs, Walter Isaacson wrote that Jobs sought the simplicity that comes from truly understanding the problem, not the simplicity that comes from ignoring complexity.

Gaining understanding of a thing is a matter of learning its details and the logical connections between them. In any real-world situation this means absorbing a vast quantity of information. And the sum effect of this understanding is that when it is acted upon, the action is thoroughly optimized for the context it exists within.

Any action that isn't harmonized with the specific purpose is eliminated. This leads to simplicity, elegance, and – most notably – effectiveness.

Think about hitting a baseball. It's an activity that is fairly obvious on the surface. You grab a bat, someone throws a ball, and you swing at it in order to propel it back into the field. To be really great at hitting a baseball, though, takes mastery of a number of different details. Control of your own body, from your feet to your head. The ability to make rapid judgements about the speed and trajectory of the ball as it approaches. And the timing not only to hit the ball but to place it at the right angle relative to the batter's position.

Players spend years thinking about and mastering all of these details. And through this they gain a very, very informed intuition. An intuition so powerful that in the short amount of time that it takes for the ball to travel from the pitcher’s mound to the batter, any number of minor adjustments and corrections can be made.

The simplicity and elegance of a professional ball player hitting a baseball is the culmination of study and practice. The sum effect of understanding countless variables. Showing itself as an informed intuition supporting the subtle but really mind-boggling act of hitting a baseball.

Again, a 10-year-old might pick up a bat and swing it at a slowly tossed ball. But hitting a ball traveling at 90 miles per hour while subtly changing its direction such that it isn't caught by one of the highly-skilled opposing players – that is something different.

And the same is true for software delivery. Any 10-year-old with a keyboard can print "Hello World" on the screen. It takes an intuition informed by years of study and practice to adapt to the whims of screaming stakeholders, delivering software that meets a need while operating amongst an internet intent on bringing it down.

The things we study are agile and TDD, SOLID and Site Reliability practices, and much more. And each of these domains is a world of details in itself. With every single detail we master, we add to a powerfully informed intuition that makes our delivery of software that much more valuable to the people we build it for.

And after all – making amazing software is what it’s all about.

So let’s – Play Ball!!



Killing Software Development Capacity, Part 2: Philosophizing

Several years ago I would regularly travel from Washington State to New Jersey for business. I worked at a wonderful little telecommunications company whose headquarters were down by the "Jersey Shore". :) The team was fantastic, and the product was fun and technically fascinating.

Whenever I'd head over there I'd inevitably find myself in a Panera with one of my teammates, who happens to be a very good friend. And we'd see if we could run them out of coffee before we ran out of philosophical problems we wanted to solve. Usually the restaurant would close and we'd have to leave…

There's a spectrum in software development – a spectrum relating to how much philosophy, talking, and deep thinking people want to do about their work. At one extreme, typing in code and seeing the computer ask "how high" when told to jump is viewed as the single most valuable and important thing. At the other extreme, developers who, left to their own devices, would sit around for hours talking about the meaning of agile, the best possible way to structure code, and how the two fit together – while not actually writing a single line of code.

As someone wired for the latter end of the spectrum, a thing I've been learning lately is that operating off of an intuitive, not fully explicit understanding of a thing can be very effective. Even as I type this, my instinct begs me to reconsider. But it is in fact the truth. Furthermore, a lot can be gleaned by those of us who want to learn from someone who operates in this space – by observation, by imitation, and by active questioning; that is, by helping the potential teacher put their intuition into words.

Not only can it be very effective – it can be MORE effective than having a deep, explicit understanding of the topic or action. Because turning intuition into words is expensive in terms of time.

I was helping my brother-in-law build a deck a few years back. Already, for those of you who know me, you might be giggling at this point. But it's a true story. Anyway, I somehow ended up with the task of building steps for the deck. So I started with what I knew about parallel and perpendicular lines from geometry, then moved on to sines and cosines from trigonometry. When he saw what I was doing he hid it well, but I'm pretty sure he was laughing at me. He said, "just use the template" – basically, trace along a pre-cut pattern and then cut the steps from it. My first gut reaction, I kid you not, was that it sounded like cheating.

We were building a deck, not pursuing understanding of physics or geometry.

Again, even now, my gut disagrees with me on this, but I'll say it: we were not cheating.

In agile – when we build things, we break them down into user stories for PRECISELY THIS REASON. I do not think I am alone amongst programmers who, when given a task, choose the approach that ends in the most thorough understanding and the most considered solution. Stories put a boundary around that impulse; they help us define "the template" by limiting ourselves to what is directly useful to the person we are building for.

So it goes with philosophizing about software development. To the extent that we justify our thinking by our work, its value is defined by its multiplying effect on our production. But not all thinking multiplies the things we are trying to do right now. It might help us in the future or it might help others – and both of those are valuable. Knowing, then, what our goals are (immediate production, helping others, future production) and what their respective weights and priorities are puts us in a better position to decide whether we continue to think, or move on to coding.

Don't get me wrong; deep thinking about software development is not something our industry has an over-abundance of. But there is a time to remember that sometimes you're just building steps, and thorough understanding is overkill.

So in an effort to maximize velocity and the immediate value we are creating, sometimes there's great wisdom in the words of Crash Davis: "Don't think; it can only hurt the ball club."

How To Use Systems Thinking To Destroy Your Team

It was one of my favorite rooms ever. The sound was that of whirring floppy drives – and it had that delicious, though faint smell of 1980’s electronics. The room was clean, and probably fairly recently carpeted and painted. And entering it felt like entering a spaceship and departing on a journey to exciting and unknown places. It was the computer room at the youth center on the Army base in Berlin, Germany. The room was empty in the center with computers on the walls so that you were seated facing the walls.

Maria ran the computer room – she introduced us to all the unique and exciting experiences hidden just beneath the keyboard of the Apple ][. Things you might not discover on your own – like the fact that there were multiple versions of the BASIC programming language floating around, or the power of the GOTO keyword.

It was around this time that I reached the pinnacle of my programming career. I made the machine print “Kyle is Cool” over, and over, and over, and over again.



“Making the machine do something” is really the fun of it. And the more complex and interesting, the more fun the experience is.

Though we all realize at some point that there's a level of complication and sophistication where, no matter how clever we are as individuals, working alone maxes out and working with a team begins to appeal.

And this is where Systems Thinking becomes IMMEDIATELY valuable. We may not realize we are doing it at first, and there is an infinite world of connections and systems to explore and manipulate in order to create better software. But one way or another, thinking about the people building the software, how they're arranged, and how they work together is a powerful tool – even if the primary system you are passionate about is the one turning ones and zeroes into useful behaviors.

The Downside

Over the years I've seen examples of the human system not working to its own advantage, but one case stands out to me. The thing to note is how far beyond the immediate day-to-day situation the system extends. So pay particular attention to that as we dive in…

I was on a small team of developers – there were four or five of us delivering software together. We weren’t an ideal team but we generally worked well toward common goals.

At the highest levels of the organization – probably four to five management levels above my teammates and me – a decision was made to use a performance management system that included the usual unfortunate clichés. Most importantly, it included a feature that was regularly trumpeted during "town halls" and other communications – pay for performance. "He (or she) with the best list of accomplishments in the previous year gets the most money."

The effect was powerful… and not in a good way.

Team members began making day-to-day decisions that buoyed the initiatives they most wanted on their lists of accomplishments. Instead of working to arrive at common goals and then pushing toward them together, the environment became immediately individualistic.

This is one of the most common ways that corporate environments can ruin software delivery, and I have discussed it in depth, from a philosophical perspective, elsewhere.

But more importantly, the take-away from this practical situation is how broad the “System” is that you participate in. And how easy it is to maul a software team with the axe of good intentions.

The Upside

To use Systems Thinking to your advantage you have to realize several basic truths. The first of these is that there are truths. There is an underlying reality in the way that humans work together and think together. And that reality is expressed in fairly predictable ways in software development.

The second, though, is that when it comes to actually creating software, it’s almost impossible to predict. And because of this…

The third thing is that the primary principle to build into a software team is distributed authority. Make the teams small and skilled and let them make their own choices about how to deliver. Because there will always, necessarily, be too much information to funnel through any information bottlenecks (see "Managers").

And the final thing is realizing that skilled means skill with the computer system and skill with the people system. People that can make the computer do something are a dime a dozen. People that can work with other people to make a computer do something – those are the gems. Build a team of gems and ensure that the loosest possible guardrails are in place.

And then just sit back and watch the magic happen…

The Text-Native Communicator, Remote Work & Serendipity

I’ve been a computer geek since it wasn’t cool.

“Learning to code” when I was a kid would bring quick looks of confusion, then judgey disapproval.

That didn’t stop me and many in my social circle. Not only did we continue to hack away on our Apple ][‘s, IBM-XT’s and AT’s, 286’s and 386’s, we began to get much of our social fulfillment through them as well.

Through a magical venue called the BBS – the Bulletin Board System. Virtual mail and text-based video games were available, often with levels of asynchronicity that would boggle the mind of one of these young whipper-snappers carrying a super-computer in their back pocket. Because most of the time, it was one dial-up line at a time. I log in, do stuff, log out. Then you log in, respond, and log out. Primitive, I know.

My favorite BBS’s were the ones with multiple lines – I could chat *IN REALTIME* <gasp> with people far far away from me. Like two or three towns away.

We learned to communicate with text. Text that was sent and received extremely slowly by today’s standards. And we learned to communicate emotion – with sideways smiley faces, sad faces, confused faces. Everyone had their own style too.

One of the greatest compliments I remember receiving was when one of the adult BBS users told me that I was particularly good at getting emotion across in that tough setting. I appreciate the compliment even more now; back then it was just what we did, but I now realize how much of a skill it is.

Anyway – enough of the "get-off-my-lawn" sermonizing. Fast-forward a few years (to 2007 or so): I worked at a software development shop that was (admirably progressive for the time) fully remote. I didn't realize it at first, but some of us had radically higher capacity for communicating via the primarily text-based avenues available at the time.

For several folks it was such a distinct disadvantage that it created some serious interpersonal problems. The leadership (again, admirably) held monthly and quarterly in-person meetings to make sure we were all connecting and getting the advantage of face-to-face communication. Still, the majority of us connected, related, socialized, and produced well through primarily text-based channels (and some voice-based ones).

And that is one of the keys – we did a lot of plain relating and socializing over text, which, as it does in person, greases the way to productivity. A lot of people treat text-based communication as a necessary evil and use it only when production demands it. Having informal, human conversation over text is integral to building the chops for conveying emotional content – and it also answers one of the big arguments against remote productivity: serendipity.

“But you don’t have the same kind of serendipitous conversations that you do when you’re in person”.

But WHY do you have those serendipitous conversations in person? Because you leave your cube and engage your human need to interact with people. Since many folks treat text communication only as a tool of last resort – and don't engage in it simply to meet their basic social needs – they don't find themselves in the same kind of serendipitous conversations.

When you treat text-based conversation as a first-class interaction mechanism, rather than one of last resort, the same serendipity will happen.

As a society we're moving forward on this. It is a skill, and an important one, since it opens up a world of productivity that we wouldn't otherwise have access to.


Silos Are A Good Thing ™

As in any human social network, we in the software world collect this bank of common wisdom that we can begin to believe and follow without question. One of the biggest, partially true, but mostly wrong bits of common wisdom in our space is the sense that “silos” are always a bad thing.

First – let's start with a working definition, so we have a reference, and so we have something to question if we don't agree on the conclusions we reach here. A silo is simply a group: a limited set of social connections that we leverage to complete work.

The “limited” nature of the set of social connections is the first part of this that people tend to find implicitly disagreeable. It feels exclusionary. Isn’t inclusiveness an absolute good? No – it’s not. For two reasons.

Firstly, opening a working group to the wrong individual can lower the output of the other individuals in the group. This can happen for a number of reasons – the wrong skill set, the wrong attitude, the wrong vision.

Secondly, no matter how aligned an individual is to the values of the group, no matter how skilled they are, no matter how much of a positive team player they are – at some point social connections (that is, communication network nodes) slow progress more than they help. It's the reason we have the beautiful idiom about too many cooks, and why we understand so painfully well what it means to design software "by committee". Committees are good for vetting all the angles of a thing. But as a group grows in size, its ability to execute diminishes rapidly, because its communication paths multiply. And this is particularly problematic in software, where the things we are communicating are so detail-rich, context-sensitive, and technically sophisticated.
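The cost of those multiplying communication paths is easy to quantify: in a fully connected group of n people there are n(n-1)/2 pairwise channels. Headcount grows linearly, coordination grows quadratically. A quick sketch (illustrative numbers only):

```python
# Pairwise communication channels in a fully connected group of n people.
# The formula is n * (n - 1) / 2: every person can talk to every other
# person, and each pair is counted once.

def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in [2, 5, 10, 20]:
    print(f"{n} people -> {channels(n)} channels")
# 2 people -> 1 channel; 20 people -> 190 channels
```

Doubling the team from 10 to 20 people doesn't double the channels – it more than quadruples them (45 to 190), which is the quadratic drag the paragraph above describes.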

So it's important to build silos – the right kind of silos. Silos that have just enough people to accomplish what we want to accomplish, and no more. The skills and authority to design, build, test, and put to use the software we intend to create should be embodied in as few people as we can get away with.

And we shouldn't feel guilty about it. Limiting communication networks to the minimum needed to deliver well and rapidly is the utmost we can do to add value to our organization and its customers. This is the foundational piece of our work.

We can’t let “common wisdom” keep us from this.

Now there is a bad type of silo – and if it isn't obvious what that is at this point, it's the kind that creates hard separation between the people and skills necessary to deliver our software from top to bottom. That is, if we create separate organizational units for those who build the database, those who write the application code, and those who put the software into operational use, we are creating the bad kind of silo – the kind that should rightly be frowned upon and, more importantly, avoided.

So we should recognize the importance of structuring our organizations properly. And we should avoid fear of an abstract notion (the silo) that is only bad if it is poorly implemented.

Here’s to building amazing software!


What The Big Data Revolution Teaches Us About The Innovating Organization



One of the fascinating technical advancements of recent years has been around distributed data and distributed data processing. The whole "Big Data" revolution rode in on the back of Hadoop, a tool built on a simple paradigm: combine two logical set operations (map and reduce) into one clean way to heavily parallelize the processing of massive data sets. Granted, the data and the problem have to be a fit for the solution – and not all are. But nonetheless, if one's problem space can be fit to the solution shape, the payoff is huge. Newer techniques follow the same broad pattern, but the underlying principle is constant: problems are split into pieces that don't rely on each other, and can thus be run in or out of order, on the same computer or a different one, while still eventually arriving at meaningful results.
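To make the paradigm concrete, here's a toy word count written in the map/reduce style – a sketch of the idea, not Hadoop's actual API. Each map call touches only its own document, so the map phase could run on any number of machines, in any order, and the reduce phase merges the partial results regardless of that order:

```python
from functools import reduce
from collections import Counter

def map_phase(document: str) -> Counter:
    # Emit a partial word count for a single document. Because this
    # depends on nothing but its own input, every call is independent.
    return Counter(document.lower().split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    # Merge two partial counts. Merging is associative, so the order
    # in which partial results arrive doesn't matter.
    return a + b

documents = [
    "the quick brown fox",
    "the lazy dog",
    "the fox and the dog",
]

partials = [map_phase(doc) for doc in documents]   # the parallelizable step
totals = reduce(reduce_phase, partials, Counter())

print(totals["the"])   # 4
print(totals["fox"])   # 2
```

The list comprehension runs serially here, but since no `map_phase` call depends on another, a framework is free to scatter those calls across a cluster – which is exactly the "pieces that don't rely on each other" property described above.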

Data storage has been advancing along the same lines. Several years ago, a line of "NoSQL" tools started to crop up, offering technical trade-offs that weren't available when we relied solely on relational databases for our production data storage. Most interestingly, they allowed for a thing called "eventual consistency". The dogma throughout my early years as a software geek was that data should always be what it's supposed to be – it should be, in the fastest way possible, completely and utterly consistent. This turns out to be unnecessary for large swaths of problems. And so our beliefs evolve, and with them the tools we build for ourselves. The general outcome: if we accept that our database can run on a number of geographically dispersed computers that don't even have to be immediately consistent, suddenly storing that data becomes far less burdensome. We can scale by throwing hardware at the problem, rather than spending expensive thinking time making the data storage as fully optimal as possible. And now we are commonly in the range of storing more than a petabyte of data.
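Here's a deliberately simplified sketch of what eventual consistency can look like – hypothetical classes, not any real database's API. A write lands on one replica immediately; a background "anti-entropy" pass later brings the other replicas up to date, so a read from another replica may briefly return stale (or no) data:

```python
class Replica:
    """One node holding a copy of the data, as (version, value) pairs."""
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def write(self, key, value, version):
        self.data[key] = (version, value)

    def read(self, key):
        version, value = self.data.get(key, (0, None))
        return value

def anti_entropy(replicas):
    # Background sync: every replica adopts the newest version of each
    # key seen anywhere in the cluster. This is what makes consistency
    # "eventual" rather than immediate.
    all_keys = {k for r in replicas for k in r.data}
    for key in all_keys:
        newest = max(r.data.get(key, (0, None)) for r in replicas)
        for r in replicas:
            r.data[key] = newest

a, b, c = Replica(), Replica(), Replica()
a.write("user:42", "Kyle", version=1)

print(b.read("user:42"))   # None - b hasn't seen the write yet
anti_entropy([a, b, c])
print(b.read("user:42"))   # Kyle - the replicas have converged
```

The application has to be shaped so that the stale read in the middle is acceptable – which is precisely the "problem has to match the solution shape" constraint discussed next.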

I want to highlight that, just like with Hadoop and distributed processing, the distributed storage problem has to match the solution shape. Eventual consistency (in this case – and please forgive the radical simplification I'm doing to keep this blog post under a petabyte itself) has to be OK.

And this is the big idea – the thing that big data teaches us about team organization:

In order to scale out massively, we have to place particular limits on the shape of the problem we are solving.

Demanding 100% control over our data – so that we always know it is entirely consistent – is comforting in a way. But it demands a lot of the technical structure of the database. You can't parallelize operations, you can't do them out of order, you can't distribute them. The benefit requires a trade-off in restriction. Conversely, if you shape your application and your business logic such that it is useful regardless of the immediate consistency of the data, you can scale broadly.

On the data processing side, if we serialize processing, depend on the results of previous operations, and share state, we limit ourselves to essentially one thread of processing, with shared, controlled memory to hold state – and scaling is something we can only do by adding more power to a single computer.

In general we trade immediate control for immediate chaos – with the understanding that the answer will emerge from the chaos, because of how we've shaped it.

With a computer it's easy to see why this is true: memory and processing on a single computer, or a tightly coordinated few, have an upper limit. They always will, though that limit creeps upward over time. Designing a solution that can arbitrarily scale to any number of computers leaves us limited only by the number of computers we can get hold of – a completely different, and much easier, ceiling to push up.

With humans, it's even worse. Until we make a breakthrough and figure out how to implant cybernetic interfaces directly into our brains, our capacity has been the same for thousands of years – and, more discouragingly, it will remain the same for many years to come. We cannot buy a more powerful processor; we cannot add memory. Some models come installed with a trivially, incrementally larger amount of one or the other or both, but it's not enough to really make a difference.

To solve large problems, you need more than one person. To solve problems requiring creativity and human ingenuity, humans need the autonomy to explore with their intuition. This means every human being involved in solving the problem has to saturate their ability to think. No matter how smart the genius, we significantly stunt the effort if we pretend all the thinking can be serialized through them – even if it's a couple of geniuses, or a small, tightly coordinated team of them. To really achieve anything great in a modern software organization – or rather, in a modern information-work-based organization, which is in fact EVERY organization – the thinking has to be fully distributed.

For the EXACT same reason as it has to be with distributed computing and distributed storage. The problem size has a limit if we want full control. It doesn’t if we can allow the solution to emerge from the chaos of coordinated but distributed activity.

And just like the distributed examples above, this simply means we have to choose to forge our problems to match the solution shape. Organizationally, it means realizing, at every level, that the leadership and initiative necessary to drive something to completion – the thing that drives us so quickly to top-down, command-and-control structures – has to be embedded in every doer. Positional leaders in the organization should be few, and should be "Clock Builders" – to use Jim Collins' brilliant phrase – shaping the system and providing just enough structure that things don't fly apart.

And when we get down to the software, the teams have to be shaped so that there are no dependencies from one team to another – not for architecture, database programming, testing, or moving something into production. The small software team that works together day to day – fully distributing the innovation and creation that we humans are designed so well for – should have everything it needs to handle the software from cradle to grave, including the necessary authority and skill sets, with every team member possessing the leadership and initiative to drive anything through, even if the rest of the organization burned to the ground.

Failure to distribute our creativity and innovation in this way will rightly come to be viewed as just as backwards as a software development team committing, today, to storing only megabytes of data in a fully consistent relational database.

So let’s get organized and build some amazing software!!


What Are Managers Good For Anyway (The Role of Leadership in Software Development)



As an on-again-off-again software development manager, I have asked this question of myself on more than one occasion: what *are* managers good for, anyway?

It’s a great question – and the answer is not entirely intuitive.

Software development managers, and the expectations about their duties, have been modeled on earlier times and different types of work. The measure of a manager's success is often tied directly to the software within the purview of the team being managed. Which, even from the title of the role, seems like a patently obvious way to do things. After all, "software" and "manager" are both in there – so it seems like the thing one manages should be the thing one is measured by.

But take a second look even at the title – software development manager – what is it that is being managed? Software? No – software development. And while software isn't a bad proxy for the potency of our software development, it can often be a misleading one.

The reason lies in the nature of software. Recognized and memorialized by the Scrum framework, the duty of organizing and defining the nature of the software to be developed is vested in a member of the development team known as the Product Owner. This role is envisioned as a partner to the other members of the team, not one that controls their specific actions – one that collaborates to find the proper priority of software features to be developed, but leaves specific action to be defined by the team itself.

The “what” question is separated from the “how” question. And both are meant to be collaborations on a team consisting of both types of roles.

Where is management? Where is leadership? The traditionalist that takes their cues from car factories and construction sites might say that someone, somewhere has abdicated their responsibility. We should fire whomever it was, and find someone that can do their job. And by the way – this Agile stuff only works in textbooks.

They’d, of course, be wrong.

If we look closely, there's a huge need for leadership in this situation. It's just not with the first-order concerns of the specific software output; it's with the second-order concerns of the system the software is built by. The software development manager should be managing the development of software – setting up the systems and practices, hiring the right kind of people, and ensuring the cleanest code. Then they'll truly be managing software development, and the software will practically write itself.

But let’s put this magic aside for a sec, and take a quick step back.

Let’s parse out management and leadership here for a second. Because the difference starts to make a difference.

When we talk about management, we talk about ticking HR checkboxes. For starters, in most organizations, we can’t very well have everyone reporting to the CEO. Training, timesheets, whatever regulatory things that need to be “managed” – that’s the job of the manager. With this comes a certain degree of leverage. This isn’t bad, but it is interesting as we will see in a little bit.

Leadership is about change. If an individual's or an organization's inertia will carry them through to whatever it is they want to accomplish, leadership is unnecessary. It's very similar to physics – acceleration is defined as changing velocity, which is a vector that includes both direction and speed. If either of those changes, you've accelerated – and, more importantly, a force has acted upon you. Leadership is that force, whether applied by someone else or by yourself. If your velocity – your speed, your direction, or both – changes, leadership has been applied to your situation.

I won’t cover the various ways that this force can be applied, or the tools used to generate it. But it is sufficient to say that the organizational leverage that a manager might wield is one of the most obvious though least powerful methods.

So management for a leader is a tool to the end of changing momentum.

Software is an interesting beast, if I hadn’t been clear enough about it earlier. And it is so for several reasons.

Firstly, if left alone, there is a natural drag that will decelerate its delivery until it stops entirely. That's right – if nothing is done to push back against this natural force, an organization that starts to deliver software will, given a certain amount of time, become unable to continue.

Secondly, the problem solving capability of the software team is embodied in the knowledge of the individual team members and in the communication networks within the team. Unless the team is very weak or severely restricted, no single individual would be able to hold enough information to make a better, quicker decision than the team itself.

Following from this second situation, it’s important to recognize that direct manipulation of the team – ordering a specific action – will lead to a reorganization of the team’s collective understanding. It will also create a sense of dependence on the one giving the order — since blindly following a directive is easier, quicker to execute, and necessitates a lower level of responsibility for the results – all of which are pleasant in the short-term.

So hopefully the hole that the software leader can and should fill is becoming obvious. But to pull all of this together…

Our challenge, basically, as software development managers, is this: what do we do when output isn't reaching potential?

The answer is that we have two options. The path most taken is to exercise direct manipulation of the team. That is, we might give specific commands about how to deliver – break out the carrot and the stick, threaten, entice – go directly for the output and work back to the specific actions that can get it for you.

This is problematic because it makes the system worse. As we said above, it necessitates a reorganization of the team to accommodate the ad hoc input (leaving the team less optimal than it was before the command), and it creates dependency. Both of these make “software development” – the thing we are managing, the system that delivers the software – worse. Which means that next time, you will deliver less well.

The second option is to work on the system itself to make it better.

What is the system? It is all the things that go into creating software: from hiring the team, to the workflow the team uses to create the software, to the delivery methodologies the team has for putting software into production. It includes pairing and mobbing, TDD, and the taste your team members have for creating modular, well-crafted software that’s a snap to understand and change. And probably more that I’ve forgotten to mention.

So – you focus on the system and the next time around your output is better. The only challenge is having the character to not stress out and react incorrectly and too quickly to the pain of disappointed expectations (whether or not they were in any way realistic).

Fantastic – so that’s a win. But it sounds like a one time thing. That’s not exactly a full-time job for a software development manager.

Right – except the whole thing was predicated on the challenge that “the output didn’t meet the potential.” Which, as much as it may hurt to hear, will in fact always be the case. Further, as we said above, software development is funny because it rots – the system itself, that is. It becomes slower over time. So even maintaining our current momentum requires a “force” – requires leadership – to overcome that decay.

So – even something as boring as maintaining the status quo in software requires leadership. And making real headway – becoming a truly great example of software delivery – won’t happen under anything less than heroic exertions by a gallant generation of brilliant software leaders and the brave souls that follow them.

Let’s be them!

Happy coding!


The Great Convergence: Part II

The Single Responsibility Principle.  It’s my desert island principle.  If I had my brain cleansed of all software wisdom except for one thing – it would be the good ol’ SRP.  Every software creator should seek to have a tight grasp of this principle.  It provides a huge amount of clarity in exchange for the simple task of thinking about things in this particular way.

And that is, of course, that every thing (method, function, class, package, etc..) should have only one reason to change – it should have a single responsibility.  I won’t re-hash the entire principle write-up that I linked to above – but it creates clarity when you read the code because your brain doesn’t have to shift contexts as it looks at a single thing, and when you go to change code, you don’t risk changing a responsibility you’re not interested in changing.
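To make this concrete, here’s a minimal sketch – the class and method names are mine, purely for illustration – of a class with two reasons to change (formatting rules and persistence details), refactored into two classes that each have a single responsibility:

```python
# Before: two reasons to change - the report's format AND how it's persisted.
class ReportManager:
    def format_report(self, entries):
        return "\n".join(f"{name}: {amount:.2f}" for name, amount in entries)

    def save_report(self, text, path):
        with open(path, "w") as f:
            f.write(text)


# After: each class now has exactly one reason to change.
class ReportFormatter:
    def format(self, entries):
        return "\n".join(f"{name}: {amount:.2f}" for name, amount in entries)


class ReportWriter:
    def save(self, text, path):
        with open(path, "w") as f:
            f.write(text)
```

Now a change to the output format touches only `ReportFormatter`, and a change to storage (files, S3, a database) touches only `ReportWriter` – the reader never has to shift contexts within a class.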

The SOLID principles were originally, and frequently, referred to as Principles of Object Oriented Development.  And they are that.  But they are actually deeper than that – they are Principles of Software Development that apply equally to any development paradigm.  Of particular interest beyond OOP, because of its impact on the current state of the art, is Functional Programming.

These two paradigms, OOP and FP, are often pitched as being radically different.  They aren’t – they stem from the same need, take two different paths, but arrive ultimately in the same place.

I often say that there are two audiences that you write your software for: the user, and the reader.  Making a computer do something useful these days is relatively straightforward.  The art – the thing that distinguishes the true Craftsman from the amateur – is how you treat the second audience – the reader of the code.  And how one reads and understands code is as subjective as, if not more so than, how one uses the functioning system.  It’s an exercise in aesthetics.  It is understanding vocabulary and idioms, and it’s making intent and reasoning clear.  When all these pieces come together, the reader has a positive emotional response and is able to understand and modify the code quickly.

Making intent and reasoning clear means making concepts crisp and logic obvious.  It means using abstraction to eliminate unnecessary details (because we’re limited in our ability to store information and reason about it).  If we didn’t have to worry about humans understanding and reasoning about our code, we could write it all in binary – there would be no reason to bother with higher-level languages.  They’re an accommodation for our inherent weakness.

Human reasoning happens on a number of levels – from levels that we are explicitly aware of, where we can consciously arrange syllogisms to get to the result we want, to levels that are so unconscious that we barely even know we’re doing it (though we might be able to feel ourselves doing it).  We romanticize the unconscious reasoning – because it can look and feel like magic – because we’re pulling answers out of a hat without being able to describe how we got to them.  We tend to think of the more conscious versions of our reasoning as more scientific, more rational.  The truth is they’re both very powerful and useful in different ways – some of us balance between the two, some of us have a severe tendency toward one or the other.

In software development, these two styles map onto the two high-level paradigms we’ve mentioned.  OOP, with its mantras about nouns that match the real world – things that do things – appeals to the unconscious style of reasoning, to pure intuition.  And FP, with its mantras about equational reasoning and functional purity, appeals to explicit rationale – to the conscious style of reasoning.

The great part is that our consciousness of the reasoning we’re performing doesn’t change the nature of the reasoning.  We abstract, then draw causal connections, then deduce the outcome.  Subconscious reasoning, with its speed, may sacrifice accuracy.  And conscious reasoning, with its accuracy, may sacrifice speed.  But ultimately it’s the same thing, just done differently by us.

The SOLID principles bypass these surface mechanisms and connect to the underlying reality of reasoning.  Since this is the case, they apply directly to any kind of programming we may do.  And further, the more we follow the SOLID principles, the more our code is inherently rational.  So whether we start from an FP perspective and follow the SOLID principles, or start from an OOP perspective and follow the SOLID principles, we end up in the same place: code that is inherently more rational – which on the surface means OO code that starts to look like good FP code, and FP code that starts to look like good OO code.

Anyway – back to SRP.  Step with me into the shoes of an OO programmer.  One piece of common wisdom held by OO programmers is that you’re supposed to “hide data behind behavior.”  That is, if you believe the textbooks on the topic, you might think it’s mandatory that if you have a class, it should have state *and* behavior.

But if we look closely at the SRP and think about what it means to “have a single reason to change,” we realize that a class having both state and behavior *always* has two reasons to change.  A class that has a Single Responsibility really only represents one or the other.

And further – a good class that’s doing only one thing, shouldn’t really have more than a method or two.  And two is pushing it.  Unless, maybe you’re using the class as a kind of package, in which case you might have related methods that do independent things.

And both of these apply across the board.  There are always edge cases where the flexibility is nice to have – but if we’re really being honest with ourselves, we’re not really following the SRP if we don’t do these two things.

And if you look at it – if you follow the SRP closely like this – the code is immediately easier to reason about.  But it is also surprisingly close to good FP code.  There’s no state, so the method’s result (barring dependence on state elsewhere) is always only the product of its parameters.
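Here’s a hypothetical sketch of that convergence – the names and the tax-calculation example are mine, for illustration only.  A stateless, single-responsibility class whose one method depends only on its parameters is, in spirit, indistinguishable from a pure function:

```python
# OO form: a stateless class with a single responsibility and one method.
class SalesTaxCalculator:
    def total_with_tax(self, subtotal, rate):
        return round(subtotal * (1 + rate), 2)


# FP form: exactly the same logic as a plain pure function.
def total_with_tax(subtotal, rate):
    return round(subtotal * (1 + rate), 2)


# Both produce the same result for the same inputs - the answer depends
# only on the parameters, never on hidden state.
assert SalesTaxCalculator().total_with_tax(100.0, 0.08) == total_with_tax(100.0, 0.08)
```

Strip the class wrapper away and nothing of substance changes – which is exactly the point about the two paradigms arriving in the same place.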

FP and OO are converging – simply because they are both appeals to human rationality, inspired by different phases of the same reality.  SOLID pokes at the root of this and asks us: how do we really make great code that clearly displays intent and gives the reader the best possible experience?

Happy Coding!


Killing Software Development Capacity, Part 1: The Performance Review

I’ve been on both sides of this morbid practice with several different organizations at this point.  The similarities are striking everywhere I’ve been.  The process usually goes something like this:

  1. Set “goals” starting at the top of the organization
  2. Make those goals more specific as you move down the org chart
  3. For each individual
    1. Make sure there’s awareness that financial incentives rest on meeting these goals
    2. Ostensibly allow for modifications to goals throughout the year
  4. At the end of the year (and sometimes throughout the year)
    1. “Calibrate” performance across all disciplines and parts of the organization
    2. Use the goals starting at the leaf nodes and working up to the top of the org as the baseline
  5. Individual managers rate their employees, but are held to the calibration done by the larger management team.  And the calibration is typically used to formulate a stack rank – a bell curve of performance that must be fit

As I noted above – there has been remarkable consistency in the different organizations I’ve been a part of with regards to the thinking about and implementation of this process.  I have to assume that there’s a common source of knowledge that leads to this – simply because the coincidence is a bit too stark.  Though, I am unaware of what the source is.

There are a lot of similarities, both in the form and the results of the practice between this system and the software development approach commonly known as Waterfall.  And as with Waterfall, I would concede that on the surface, it’s intuitively very appealing.  And I can imagine a lot of scenarios where it may be a very helpful technique.

Software Development is not one of these scenarios.

It cannot be, due to its nature.  I will do my best here to spell this out specifically for software development, though I imagine that the arguments apply equally to any information work.

Software development is a learning activity.  It tends to get conflated in the popular mind with the act of typing in a computer program (“coding”).  But actually entering the code is the least time-consuming, least important part of developing software to meet a human need.

Understanding the problem space and the logical bounds that must be worked within, and how one might put the two together and model the specific solution is the bulk of the work.  Because of our cognitive limitations, that will always involve entering some code, seeing how well it serves the problem, and how clearly it reflects a model of the problem, fixing whatever we weren’t able to get right the first time and trying again.

That is, software development is always experimental, and the real value that is created as a part of that experimentation is learning;  learning that is reflected in the individual, and in the organization, and also in the software code itself.

The Appeal

The appeal of the system laid out above is two-fold.  And it is based on some undoubtedly real needs.

Appeal #1: Organizational Leaders want to be able to set direction.  That is – at every level the management team has a real desire to be able to feel comfortable that they’re driving the people working for them in a direction that is consistent with the direction that has been laid out for them.  The filtering down, with further specification of goals is a neat, clean way to do this, that almost seems to remove the messy human stuff from the system.

Appeal #2: Organizational Leaders want to have the sense that the organization that is accountable to them is moving with urgency and with an appropriate amount of energy.  The financial awards system is seen as the carrot that can be consistently applied, again in a neat and clean way that removes as much controversy as possible.

Again, both of these appeals come from genuinely important functions of authentic leadership – to provide direction, and to generate the collective energy to get there.

The Problem

Seeing the set of problems here requires some Systems Thinking.  I’d recommend (re-)reading Senge’s seminal work on the topic “The 5th Discipline“.

Problem #1: The Impossibility of Planning Software – Anyone who has worked in software for more than a day has come to realize, with the rest of the industry, that predicting precisely what software we will be working on even as little as a couple of weeks into the future is close to impossible.  Unless there is an extensive artificial structure in place to prevent it, the software you build today (again, because software development is primarily learning) informs the software you build tomorrow; tomorrow’s software affects that of the next day, and so on.  The uncertainty multiplies the further you get into the future.  This is why Agile emerged from the software development world.  The only way to build software that really meets needs is to try something and adjust.

If as a software developer I predicted what software I would be working on just 6 months into the future, even in the broadest terms, I would need to adjust many times over.  That adjustment would need to be fed back into the goals that had come down, potentially adjusting those with the learning that has occurred, disrupting a ton of work that was done by a ton of folks to figure out what those goals were in the first place.  And almost precisely like the Waterfall methodology – the force of this inertia makes this a practical impossibility.

Practically, what happens is that the goals an individual software developer writes have nothing to do with the actual software they’re writing, so that any changes won’t be disruptive to the larger system.  Which, in case it’s not obvious, completely blows away the usefulness of this tool as a mechanism for setting direction.

Problem #2: Individual Focus – The goals for a specific individual are always, almost as a logical extension of the system, formulated only between the individual and their immediate supervisor.  This means that teams aren’t being built around the things their members are ostensibly working toward, because each team member is fully unaware of what any other team member is most interested in accomplishing.  And because of this, it is almost impossible not to send the message that how someone performs as an individual is the key expectation of the leadership team.  This was heartbreaking as a software development manager: no matter how hard I focused on building a team, the system wouldn’t have it.

Problem #3: Motivation – One of the primary appeals of this system is that it gives a nice, clean way “to motivate people.”  Attach money to whatever it is you want people to do, and voila, problem solved – people will produce whatever you want.  This betrays a couple of deep misunderstandings of human nature.  Firstly, money is not a real motivator – people will be anxious to make more money until they feel they are being paid fairly (to which accomplishing more things is only one solution – leaving is another).  Secondly, it presumes that it is the organization’s job to supply someone’s motivation.  People can be demotivated by others, but naturally we want to accomplish – we want to make a difference.  Ironically, attempting to bribe someone into doing more actually introduces a significant demotivating factor.

This is multiplied when the things that are being incentivized by financial pressure are unrelated to the work that they are primarily interested in doing (an unfortunate outcome of Problem #1).

Problem #4: Manager Bottleneck – Unlike in prior times or in other types of work, in information work the doers possess the specific knowledge required to make something happen.  There is no way a single individual could possess a team’s worth of knowledge about a particular piece of software that is to be delivered.  Yet if the manager is expected to drive the team in a particular direction, using the goals as a steering wheel, it requires that they know exactly what every single job requires and how to execute it.  And even when they are not actively directing the work, because their own goals are tied to the execution of the goals of those under them, they will feel powerful pressure to review the output of their reports.

This is absolutely antithetical to a strong software delivery team, where the full team shares the knowledge and responsibility necessary to deliver software.  And it makes the software development manager a bottleneck to delivery.  Which, in addition to being a major limitation on all but the most junior teams, means that there will be wide variance from team to team based solely on the manager’s skill level. (And none of this accounts for the burnout that will be the manager’s lot for being put in this kind of a situation)

Further, any technological innovation, process improvements, or other enhancements the team wants to make to the way it works all require up-front planning with the manager, since they will have to fit in as part of the goals.  Which means that instead of being group exercises, they are necessarily top-down initiatives that will rise or fall on the manager’s say rather than on the collective voice of those doing the work.

Problem #4 is probably the most dire.  But like Problem #1, the vent that usually lets the pressure off is that the actual goals are placed on things that are less relevant and less important than the actual software that is being delivered.  And since there is more and more pressure adding up to this end, less and less value gets placed on the usefulness of the tool.  And ultimately ….

The Ultimate Outcome

Because the upper levels of management have created a system that actively necessitates avoidance and reduces its own value in the eyes of those using it – all while creating an illusion of control and motivation – the resulting cynicism should be unsurprising.

The Solution

I’m a firm believer that you shouldn’t whine about a problem without pointing to the solution.

The solution here is to – and I steal from Jim Collins here – recognize that there are no silver bullets in leading an organization.  Directing people toward a common goal is a messy, human situation that requires a deep commitment to communicating unceasingly, to really understanding your business and the way the people fit into it.  And most importantly it requires a deep understanding of how software delivery (and really information work) differs from the old style of work that didn’t fall apart as readily under a heavy top-down hand.

  1. Fully eliminate this idea (Performance Review) from any part of your organization that engages in software delivery.
  2. To meet the direction needs, re-evaluate the role of the software development leader.  Make it their role to work on the system – to formulate a delivery practice based on agile principles.
  3. Re-evaluate the manager to staff ratio.  The software development manager should have almost nothing to do with the specific software and everything to do with creating a healthy system.  Because of this the number of staff reporting to a single manager can be far higher.
  4. With this decreased need for management, flatten your organization – eliminate as many intermediate layers of management as possible, so that top leaders can be more readily in contact with doers, and so that feedback can REALLY flow back up to the leaders making the strategic decisions.
  5. Take all that money you’re saving – and really pay the doers what they are worth.

None of this is particularly exciting – especially because it involves a lot of communication.  If we asked people how they really feel, they might opine that all that communication is not productive – know that it is.

It also eliminates a lot of positions of authority.  Authority is a thing that can attract some people – though I would argue it attracts the wrong kind of people.

Organizing like this will do nothing but lead to better software, happier people, and much more value being delivered to the world we serve.

Happy coding!