The Great Convergence: Part II

The Single Responsibility Principle.  It’s my desert island principle.  If I had my brain cleansed of all software wisdom except for one thing – it would be the good ol’ SRP.  Every software creator should seek to have a tight grasp of this principle.  It provides a huge amount of clarity in exchange for the simple task of thinking about things in this particular way.

And that is, of course, that every thing (method, function, class, package, etc.) should have only one reason to change – it should have a single responsibility.  I won't re-hash the entire principle write-up that I linked to above – but following it creates clarity when you read the code, because your brain doesn't have to shift contexts as it looks at a single thing, and when you go to change code, you don't risk changing a responsibility you're not interested in changing.
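To make that concrete, here's a minimal sketch (the class and method names are hypothetical) of a class with two reasons to change – the report's wording and its persistence – and the SRP-friendly split:

    // Before: two responsibilities, two reasons to change.
    class Report {
      def render(lines: Seq[String]): String = lines.mkString("\n")
      def save(path: String, lines: Seq[String]): Unit =
        java.nio.file.Files.writeString(java.nio.file.Paths.get(path), render(lines))
    }

    // After: one responsibility (and one reason to change) apiece.
    class ReportRenderer {
      def render(lines: Seq[String]): String = lines.mkString("\n")
    }

    class ReportWriter(renderer: ReportRenderer) {
      def save(path: String, lines: Seq[String]): Unit =
        java.nio.file.Files.writeString(java.nio.file.Paths.get(path), renderer.render(lines))
    }

Now a change to the wording touches only the renderer, and a change to the storage touches only the writer.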

The SOLID principles were originally (and are still frequently) referred to as Principles of Object-Oriented Development.  And they are that.  But they are actually deeper than that – they are Principles of Software Development that apply equally to any development paradigm.  Of particular interest alongside OOP, because of its impact on the current state of the art, is Functional Programming.

These two paradigms, OOP and FP, are often pitched as being radically different.  They aren’t – they stem from the same need, take two different paths, but arrive ultimately in the same place.

I often say that there are two audiences that you write your software for: the user, and the reader.  Making a computer do something useful these days is relatively straightforward.  The art – the thing that distinguishes the true Craftsman from the amateur – is how you treat the second audience – the reader of the code.  And how one reads and understands code is as subjective as, if not more so than, how one uses the functioning system.  It's an exercise in aesthetics.  It is understanding vocabulary and idioms, and it's making intent and reasoning clear.  When all these pieces come together, the reader has a positive emotional response and is able to understand and modify the code quickly.

Making intent and reasoning clear means making concepts crisp and logic obvious.  And it means using abstraction to eliminate unnecessary details (because we're limited in our ability to store information and reason about it).  If we didn't have to worry about humans understanding and reasoning about our code, we could write it all in binary – there would be no reason to bother with higher-level languages.  They're an accommodation for our inherent weakness.

Human reasoning happens on a number of levels – from levels that we are explicitly aware of, where we can consciously arrange syllogisms to get to the result we want, to levels so unconscious that we barely know we're doing it (though we might be able to feel ourselves doing it).  We romanticize the unconscious reasoning – because it can look and feel like magic – we're pulling answers out of a hat without being able to describe how we got to them.  We tend to think of the more conscious versions of our reasoning as more scientific, more rational.  The truth is they're both very powerful and useful in different ways – some of us balance between the two, some of us tend strongly toward one or the other.

In software development, these two styles are served by the two high-level paradigms we've mentioned: OOP, with its mantras about nouns that match the real world and things that do things, appeals to the unconscious style of reasoning – more purely to intuition.  And FP, with its mantras about equational reasoning and functional purity, appeals to explicit rationale – to the conscious style of reasoning.

The great part is that our consciousness of the reasoning we're performing doesn't change the nature of the reasoning.  We abstract, then draw causal connections, then deduce the outcome.  Subconscious reasoning, with its speed, may sacrifice accuracy.  And conscious reasoning, with its accuracy, may sacrifice speed.  But ultimately it's the same thing, just done differently by us.

The SOLID principles cut beneath these surface mechanisms and connect to the underlying reality of reasoning.  Since this is the case, they apply directly to any kind of programming we may do.  And further, the more we follow the SOLID principles, the more our code is inherently rational.  So whether we start from an FP perspective and follow SOLID Principles, or start from an OOP perspective and follow SOLID Principles, we end up in the same place: code that is inherently more rational – which on the surface means OO code that starts to look like good FP code, and FP code that starts to look like good OO code.

Anyway – back to SRP.  Step with me into the shoes of an OO programmer.  One of the articles of common wisdom held by OO programmers is that you're supposed to “hide data behind behavior”.  That is, if you believe the textbooks on the topic, you might think it's mandatory that a class contain state *and* behavior.

But if we look closely at the SRP – and think about what it means to “have a single reason to change” – we realize that a class having state and behavior *always* has two reasons to change.  A class that has a Single Responsibility really only represents one or the other.

And further – a good class that's doing only one thing shouldn't really have more than a method or two.  And two is pushing it.  Unless, maybe, you're using the class as a kind of package, in which case you might have related methods that do independent things.

And both of these apply across the board.  There are always edge cases where the flexibility is nice to have – but if we're really being honest with ourselves, we're not really following SRP if we don't do these two things.

And if you look at it – if you follow SRP closely like this – the code is immediately easier to reason about.  But it is also surprisingly close to good FP code.  There's no state, so a method's result (barring dependence on state elsewhere) is always only the product of its parameters.
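In Scala terms, here's a minimal sketch of that split (the domain is hypothetical) – state in a dumb data type, behavior in a stateless class whose output is purely a product of its inputs:

    // State only: changes when the shape of the data changes.
    case class Order(quantity: Int, unitPrice: BigDecimal)

    // Behavior only: changes when the pricing rules change.
    class OrderPricer {
      // No state -- the result is only the product of the parameters.
      def total(order: Order, taxRate: BigDecimal): BigDecimal =
        order.unitPrice * order.quantity * (taxRate + 1)
    }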

FP and OO are converging – simply because they are both appeals to human rationality.  But they are inspired by different phases of the same reality.  SOLID pokes at the root of this and asks us: how do we really make great code that clearly displays intent and gives the reader the best possible experience?

Happy Coding!

Kyle

Killing Software Development Capacity, Part 1: The Performance Review

I’ve been on both sides of this morbid practice with several different organizations at this point.  The similarities are striking everywhere I’ve been.  The process usually goes something like this:

  1. Set “goals” starting at the top of the organization
  2. Make those goals more specific as you move down the org chart
  3. For each individual
    1. Make sure that there's awareness that financial incentives rest on meeting these goals
    2. Ostensibly allow for modifications to goals throughout the year
  4. At the end of the year (and sometimes throughout the year)
    1. “Calibrate” performance across all disciplines and parts of the organization
    2. Use the goals starting at the leaf nodes and working up to the top of the org as the baseline
  5. Individual managers rate their employees, but are held to the calibration done by the larger management team.  And the calibration is typically used to formulate a stack rank – a bell curve of performance that must be fit

As I noted above – there has been remarkable consistency in the different organizations I’ve been a part of with regards to the thinking about and implementation of this process.  I have to assume that there’s a common source of knowledge that leads to this – simply because the coincidence is a bit too stark.  Though, I am unaware of what the source is.

There are a lot of similarities, in both form and results, between this system and the software development approach commonly known as Waterfall.  And as with Waterfall, I would concede that on the surface it's intuitively very appealing.  And I can imagine a lot of scenarios where it may be a very helpful technique.

Software Development is not one of these scenarios.

It cannot be, due to its nature.  I will do my best here to spell this out specifically for software development, though I imagine that the arguments apply equally to any information work.

Software development is a learning activity.  It tends to get conflated in the popular mind with the act of typing in a computer program (“coding”).  The act of actually entering the code is the least time consuming, least important part of developing software to meet a human need.

Understanding the problem space and the logical bounds that must be worked within, and how one might put the two together and model the specific solution is the bulk of the work.  Because of our cognitive limitations, that will always involve entering some code, seeing how well it serves the problem, and how clearly it reflects a model of the problem, fixing whatever we weren’t able to get right the first time and trying again.

That is, software development is always experimental, and the real value created as a part of that experimentation is learning; learning that is reflected in the individual, in the organization, and in the software code itself.

The Appeal

The appeal of the system laid out above is two-fold.  And it is based on some undoubtedly real needs.

Appeal #1: Organizational Leaders want to be able to set direction.  That is – at every level the management team has a real desire to feel comfortable that they're driving the people working for them in a direction consistent with the direction that has been laid out for them.  The filtering down of goals, with further specification at each level, is a neat, clean way to do this – one that almost seems to remove the messy human stuff from the system.

Appeal #2: Organizational Leaders want to have the sense that the organization that is accountable to them is moving with urgency and with an appropriate amount of energy.  The financial rewards system is seen as the carrot that can be consistently applied, again in a neat and clean way that removes as much controversy as possible.

Again, both of these appeals come from genuinely important functions of authentic leadership – to provide direction, and to generate the collective energy to get there.

The Problem

Seeing the set of problems here requires some Systems Thinking.  I'd recommend (re-)reading Senge's seminal work on the topic, “The Fifth Discipline”.

Problem #1: The Impossibility of Planning Software – Anyone who has worked in software for more than a day has come to realize, with the rest of the industry, that predicting precisely what software we will be working on even as little as a couple of weeks into the future is close to impossible.  Unless there is an extensive artificial structure in place to prevent it, the software you build today (again, because software development is primarily learning) informs the software you build tomorrow; tomorrow's software affects that of the next day, and so on.  The uncertainty multiplies the further you get into the future.  This is why Agile has emerged from the software development world.  The only way to build software that really meets needs is to try something and adjust.

If, as a software developer, I predicted what software I would be working on just six months into the future, even in the broadest terms, I would need to adjust many times over.  Each adjustment would need to be fed back into the goals that had come down, potentially revising them with the learning that has occurred, and disrupting a ton of work that was done by a ton of folks to figure out what those goals were in the first place.  And almost precisely as with the Waterfall methodology, the force of this inertia makes that a practical impossibility.

What practically happens is that the goals an individual software developer writes have nothing to do with the actual software they're writing, so that any changes won't be disruptive to the larger system.  Which, in case it's not obvious, completely blows away the usefulness of this tool as a mechanism for setting direction.

Problem #2: Individual Focus – The goals for a specific individual are always, almost as a logical extension of the system, formulated only between the individual and their immediate supervisor.  This means that teams aren't being built around the things their members are ostensibly working toward, because one team member is fully unaware of what any other team member is most interested in accomplishing.  And because of this, it is almost impossible not to send the message that how someone performs as an individual is the key expectation of the leadership team.  This was heartbreaking to me as a software development manager, because no matter how hard I focused on building a team, the system wouldn't have it.

Problem #3: Motivation – One of the primary appeals of this system is that it gives a nice, clean way “to motivate people”.  Attach money to whatever it is you want people to do, and voila, problem solved – people will produce whatever you want.  This betrays a couple of deep misunderstandings of human nature.  Firstly, money is not a real motivator – people will be anxious to make more money until they feel they are being paid fairly (to which accomplishing more things is only one solution – leaving is another).  Secondly, it presumes that it falls to the organization to motivate someone.  People can be demotivated by others, but naturally we want to accomplish – we want to make a difference.  Ironically, attempting to bribe someone into doing more actually introduces a significant demotivating factor.

This is multiplied when the things being incentivized by financial pressure are unrelated to the work people are primarily interested in doing (an unfortunate outcome of Problem #1).

Problem #4: Manager Bottleneck – Unlike in prior times or in other types of work, in information work the doers possess the specific knowledge required to make something happen.  There is no way a single individual could possess a team's worth of knowledge about a particular piece of software that is to be delivered.  Yet if the manager is expected to drive the team in a particular direction using the goals as his steering wheel, it requires that he know exactly what every single job requires, and how to execute it.  And even if he or she is not actively directing the work, because their goals are tied to the execution of the goals of those under them, they will feel a powerful pressure to review the output of their reports.

This is absolutely antithetical to a strong software delivery team, where the full team shares the knowledge and responsibility necessary to deliver software.  And it makes the software development manager a bottleneck to delivery.  Which, in addition to being a major limitation on all but the most junior teams, means there will be wide variance from team to team based solely on the manager's skill level.  (And none of this accounts for the burnout that will be the manager's lot for being put in this kind of situation.)

Further, any technological innovations, process improvements, or other enhancements the team wants to make to the way it works all require up-front planning with the manager, since they will have to fit in as part of the goals.  Which means that instead of being group exercises, they are necessarily top-down initiatives that rise or fall on the manager's say rather than on the collective voice of those doing the work.

Problem #4 is probably the most dire.  But as with Problem #1, the vent that usually lets the pressure off is that the actual goals are placed on things less relevant and less important than the actual software being delivered.  And as more and more pressure builds toward this end, less and less value gets placed on the usefulness of the tool.  And ultimately…

The Ultimate Outcome

Because the upper levels of management create a system that actively necessitates avoidance and reduces its own value in the eyes of those using it, all while creating an illusion of control and motivation, the resulting cynicism should be unsurprising.

The Solution

I’m a firm believer that you shouldn’t whine about a problem without pointing to the solution.

The solution here is to – and I steal from Jim Collins here – recognize that there are no silver bullets in leading an organization.  Directing people toward a common goal is a messy, human situation that requires a deep commitment to communicating unceasingly, to really understanding your business and the way the people fit into it.  And most importantly it requires a deep understanding of how software delivery (and really information work) differs from the old style of work that didn’t fall apart as readily under a heavy top-down hand.

  1. Fully eliminate this idea (Performance Review) from any part of your organization that engages in software delivery.
  2. To meet the direction needs, re-evaluate the role of the software development leader.  Make it their role to work on the system – to formulate a delivery practice based on agile principles.
  3. Re-evaluate the manager-to-staff ratio.  The software development manager should have almost nothing to do with the specific software and everything to do with creating a healthy system.  Because of this, the number of staff reporting to a single manager can be far higher.
  4. With this decreased need for management, flatten your organization – eliminate as many intermediate layers of management as possible, so that top leaders can be more readily in contact with doers, and so that feedback can REALLY flow back up to the leaders making the strategic decisions.
  5. Take all that money you’re saving – and really pay the doers what they are worth.

None of this is particularly exciting – especially because it involves a lot of communication.  Something that, if we asked people how they really feel, they might call unproductive – know that it is productive.

It also eliminates a lot of positions of authority.  Authority is a thing that can attract some people.  Though, I would argue that it attracts the wrong kind of people.

Organizing like this will do nothing but lead to better software, happier people, and much more value being delivered to the world we serve.

Happy coding!

Kyle

The Generalization Myth

Generalization is beautiful and exciting – and offers many glorious health benefits.  It reduces the amount of code we have to write, and solves problems before they even arise.  With just a tiny amount of forethought – we can make any future work we do trivial, simply by achieving the right generalization.  It makes you more attractive to the opposite sex – and if done *just* right even grants you eternal life.


No wonder, like Elsa in the Indiana Jones movie, we neglect our own lives to attain it.  “I can almost reach it, Indie”…

The Holy Grail is, of course, a myth.  Though it makes for a good tale – sprinkle in some Nazis, some betrayal, some family tension.  What a story.

In reality, the generalization that we all grasp for is also a myth.  There is no way to know ahead of time what generalization will meet all of (or even one of) our upcoming use-cases.  This is mostly because we don't even know what our upcoming use-cases are – let alone what kind of structure will be necessary to meet them.  And the generalization you choose ahead of time, if it doesn't match what you need in the future, is wasteful, because it limits the moves you can make.  That is what generalization does: it limits expressive options to the abstraction that you select.  And since you can never know the generalization you need, this always results in negative ROI.  (Not as bad as falling into a bottomless pit, but still not exactly what we are going for.)

The challenge is that generalizing always SEEMS so obvious – as if it will result in nothing but advantage in the context we are working in.  This instinct is good – if you let it push you toward moderate flexibility in design, and toward refactoring (after you've solved a use-case) to a more generalized structure.  Generalization that you arrive at AFTER you've learned what the use-cases will be (that is, after you've tested and coded them), and moderate flexibility in your design, are both highly profitable.  But they are both the result of disciplined, after-the-fact thinking – not of the magical thinking that we can somehow avoid work by divining the right generalization before-the-fact.

This is the other reason that before-the-fact generalization seems so appealing – it appears to give us something for nothing.


After-the-fact generalization – the kind that results in clean, easy-to-maintain code and a very positive return – tends to seem simply like the diligence of a mature adult.  Obviously the before-the-fact kind, while maybe not tied to reality, is far more Rock-n-Roll.

As mature Craftsmen, we should do as Mr. Jones did, and listen to the advice of his dad – “let it go, Indie, let it go…”

Once we’ve let this temptation go, we can take the following methodical approach – which will satisfy our impulse to generalize, but do it in a way that will result in a powerful, positive outcome.

  1. Solve the use-case(s) at hand, directly, with the simplest possible code.  Use a test (or tests) to prove that you’re doing this.
  2. Solve with designs that are SOLID.  SOLID leads to flexibility – flexible systems are easier-to-change systems.
  3. Refactor: remove anything creating a lack of clarity; generalize where there is unnecessary duplication (see the sketch after this list).
  4. Rinse and Repeat
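
Here's a minimal sketch of steps 1 through 3 (the domain is hypothetical): two use-cases solved directly first, and only then the now-proven duplication refactored into a generalization we know we need:

    // Step 1: solve each use-case directly, with the simplest possible code.
    def totalParcelWeight(weights: Seq[Double]): Double = weights.filter(_ > 0).sum
    def totalBoxVolume(volumes: Seq[Double]): Double = volumes.filter(_ > 0).sum

    assert(totalParcelWeight(Seq(1.5, -2.0, 3.5)) == 5.0)  // a test proves the use-case

    // Step 3: with both use-cases known, the duplication is real -- generalize it.
    def totalPositive(measurements: Seq[Double]): Double = measurements.filter(_ > 0).sum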

If we do this we will be creating amazing software!

Happy Coding!

Kyle


On The Nature of Reason

WHAM!

That was supposed to be the final nail on the final board of your son’s new tree house.  Instead, it was your thumb.

Colorful language passes through your mind.  The sensation in your thumb evolves and seems to regenerate a new, fascinating kind of pain with every passing second.  You lose your grip on the hammer…and it falls 20 feet from your perch.

The pain, while it should be driving you down the ladder and probably to the emergency room, seems to be making you reflective.  Finding yourself sitting cross-legged on the floor of your creation, you start to think about how amazing it is that you know exactly how fast that hammer accelerated toward the ground: 9.8 m/s^2.  In an effort to reconstruct the pain-driven, highly-scientific experiment, you pick up the baseball sitting next to you and drop it as well.  What do you know – it hit the ground in about the same amount of time.
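(For the record – neglecting air resistance, the drop from a 20-foot perch, about 6.1 m, takes t = sqrt(2h/g) = sqrt(2 × 6.1 m ÷ 9.8 m/s^2) ≈ 1.1 s.  Hammer and baseball alike.)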

Rockets, airplanes, buildings, circus acts, and numerous other things and activities reason from and rely upon this principle that you, in your intuition, grasp quickly and re-prove to yourself in short order.  This intuition is powerful – because it's been built up over a lifetime of such experiments.  In fact, one of the first things I've seen little kids do is start to just arbitrarily drop things.  In doing that, they're not spelling out the mathematics and the precise nature of this behavior, but they are creating intuition.  Which leads to an ability to reason and act in alignment with the way this thing happens.  If as a ten-year-old I fall out of a newly built tree house, I'll realize roughly the rate I'll be traveling by the time I hit the ground, and I'll have intuition I can reason from that will hopefully prevent me from trying that experiment in the first place.

Imagine for a second, though, that we met an alien freshly arrived from an alternate universe where this pattern didn't occur with the same regularity – and who thus didn't have the intuition built up around it.  As a new arrival to Earth, they begin to comment on the fact that when you drop something, it always accelerates to the ground at the same rate.

They know all about electrons and protons and the various interactions that draw them together and push them apart.  They know that past the atomic level, we can’t even really observe things without changing them.  Atoms and their constituent parts are constantly in motion, heading in every direction.

How could it possibly be that at higher levels of abstraction there is this consistent behavior?

It turns out it's an emergent behavior arising from the curvature of space around highly massive objects (in our universe anyway – not in the alien's, apparently).  It would be VERY difficult to predict the motion of an atom, and impossible to precisely predict that of an electron, but as a collection the “object” (whatever _that_ is) moves with uniform acceleration.

This emergent behavior has particular characteristics and applies broadly – even if we don't fully understand the dynamics that create it.  We as humans have an instinctive capability for handling things like this, called generalization – we notice the emergent behavior, understand its characteristics, and can then apply another of our powerful instincts, reason, to it.  And we do all of this without even being conscious of it most of the time.

When we look at groups of people creating software, this same thing happens.  Humans are nearly impossible to predict.  At a low level of abstraction – when one human will schedule a meeting with another, what one human will say or do to another – behavior is fairly difficult to predict.  It's like the electron.

But as humans apply themselves to working together – working together on software – behaviors emerge.  Ones that we can generalize, and thus reason about.

This is important to realize as we deal with delivering software – since it is absolutely essential to exercising the best possible craft, that we understand and reason about the world with every available tool.  How we work together can create space for craft or it can destroy it.  The tools and techniques in the marketplace – Agile, Scrum, Kanban, to name a few, work to the extent that they leverage the “forces” (emergent behaviors) toward ends that we like – e.g. making space to craft great software and thus meeting needs.

What forces are in play and how they interact is highly sophisticated, and so a given situation requires a deep understanding to be able to focus and manipulate them toward specific goals.  Out of the box tools help with this, but they aren’t the last word.

Ultimately, though, the point is that this isn’t something we can delegate to someone else, or assume has been covered by the larger organization.

It is fully in our hands.

Here’s to creating great software (together)!

Kyle

The Superfluous Story Point

The exercise of putting “point” values on stories is a hallowed one in Scrum circles.  And rightly so – it is a powerful exercise.  Because of its marked simplicity and the underlying wisdom it embodies – it yields three important fruits, while avoiding some common but deadly software-delivery traps.

Story Pointing is first and foremost a discussion around the details of creating a particular piece of software.  The Story Point (and I’m assuming some familiarity with this exercise here) is a metric of relative complexity.  A team giving itself the mandate to arrive at this metric will instantly create deep conversation around details.

This brings impressions about the impending implementation to the forefront of peoples’ minds, turns intuition into explicit discussion, and generally drives powerful group learning.

Secondly, it pushes the team toward understanding what amount of scope (i.e., what size of story) is meaningful to entertain in this kind of discussion.  So, for example, a story point value over 60 (or whatever the number is for a specific team) may mean that the story needs to be broken down into smaller parts in order to have meaningful discussions around the details of the implementation.

And lastly, the number of points in a sprint can begin to give a rough prediction of future throughput.  This allows a certain degree of anticipation and planning to start happening for stakeholders.

It does all of this while avoiding setting unrealistic expectations (which happens a lot when estimating with time values), and while not assuming or mandating the specific individuals working on the story.

Story Pointing is awesome.  But what I really want to do with this post is to save you a little time and effort.  And I want to do this by suggesting something ostensibly radical, but that I believe if you look a little deeper is only the next logical progression.  I’d like to suggest that you…

Do the Story Pointing Exercise but get rid of the points.

Huh?  Have you finally lost it, Kyle?

No – well I don’t think so – but follow me on this for a sec…

The usual series of point values available goes something like: 0, 1, 2, 3, 5, 8, 13, 20, 40, 100, and ∞.  Not quite Fibonacci – but it captures the idea that the bigger something is, the less we can think specifically about small steps in complexity.  Great – so far so good.  If something is “Infinity”, we need more information and it needs to be broken down to make sense out of it.  It's easy to see how the exercise works: we assign points to stories.  And it's easy to see how the advantages listed above follow.

Now what if we took the 100 and ∞ cards and threw them out, and just accepted as a new rule that if something is more complex than a 40, we have to break it down smaller before we can make sense of it.  Does that meaningfully alter the advantages that we noted above?  No – discussion will still be driven, velocity predicted, and all without triggering any of the pitfalls.

Practically speaking, the last few teams that I've worked on went further.  We never really used anything beyond 13.  And even a 13 is typically looked upon skeptically, in terms of being able to analyze the story in a meaningful way.  So what if we throw out 20 and 13 as well?  Anything over an 8 needs to be broken down smaller.  Have we lost out on any of the advantages yet?

Before we go any further, I'd like to highlight that the act of breaking a story down is as potent in terms of driving conversation as putting complexity numbers on stories.  If you need to understand the details as a group to put a complexity estimate on a story, you need to understand the details even more to break a story down smaller.

So – if we would otherwise have had any stories larger than an 8, we will have broken them down, and thus driven the conversation around them to a greater degree than if we'd put those higher point values on them.  So not only have we not lost anything by reducing our potential point values to 0, 1, 2, 3, 5, 8 – we're actually getting higher-quality discussion because of the breakdown of the larger things.

And if we remove 8 – have we lost anything?  Nope – again we gain.

5? Same.

3? Same.

2? Same.

Now we're down to 0 and 1.  0 is trivial – we know when something doesn't involve work, and there's no reason to talk about that.  Which leaves us with 1… if something isn't a 1, we break it down further until it is.

Our pointing exercise is now: what point value is it?  Oh, it's more than a 1 – let's break it down again.

Though that wording is confusing – I'd suggest we make a slight semantic transformation and simply ask, “Is this a small, nearly trivial story, or is it not?”  If it's not, we break it down further.

It follows – but to make it explicit – that velocity planning is now simply counting stories, because they only have one possible value.
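In code terms, the bookkeeping collapses accordingly – a trivial sketch (the names are hypothetical):

    case class Story(points: Int)

    // With variable point values, velocity is a weighted sum...
    def velocity(sprint: Seq[Story]): Int = sprint.map(_.points).sum

    // ...but when every groomed story is a 1, it is simply a count.
    def velocityWithoutPoints(sprint: Seq[Story]): Int = sprint.size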

The time-saver with this is the up-front savings of not having to teach people the story-pointing exercise (if it's a shop that hasn't done Scrum before), and the ongoing savings of simply giving a thumbs-up/thumbs-down on every story when you groom, as opposed to trying to arrive at a point value and also break the story down.

And thus we have story pointing…without the points.  And with all the same advantages.  And streamlined for the new millennium.  Story-Pointing++, if you will.  All the same great taste, half the calories.  Ok, I'll stop.

Here’s to writing great software!

Kyle


The Great Convergence: Part I

In my journey as a software craftsman, I've learned a few things.  One of them is that software is an art – and an aspect of that art is the interesting tension of serving two distinct audiences.  Audiences whose interests sometimes, but not always, align.  I'm talking, of course, about the audience that puts your software to use, and the audience of builders who will come behind you and make your software do new things.

In the normal world of software – the second audience is the one most neglected.  Serving this audience, though, is the measure of true craft.  It shows that an individual has developed the ability to hear the quiet impulse that has driven the greats of the past to their heights of achievement – the lust to build something great, just because we can.  And it shows an ability to think beyond the instant gratification of a pat on the back from a boss, or relieving the pressure of a driven project manager.

The other thing that I’ve learned is that, as with any art, finding the underlying principles is generally a matter of trying a thing, observing the aesthetic quality of the result, and then judging based on that if you should use that thing again in the future.  The thing you try may come from a flash of your insight – though – as I like to say – good artists borrow, great artists steal (which, incidentally, I’m pretty sure I stole from someone).

Using that approach, I’ve applied the SOLID principles to my art and I’ve discovered an intensely aesthetically pleasing result – in terms of just looking and reading the resulting code, and the ability to adjust it and modify it as situations and needs change.  These results both directly apply to the two audiences mentioned above.

Recently, a wise craftsman brought an interesting aspect of one of the SOLID principles to my attention.  He pointed out that, with regard to the Open-Closed Principle, if a class exposes a public member variable, it is necessarily not Closed in the OCP sense.  This is because the purpose of a member variable is to be used by methods to keep state between method calls.  That is to say, the purpose of a member variable is to alter the behavior of a method – so the behavior of the method is no longer pre-determined; it can be changed at an arbitrary point in time.

Now technically, if the class – and more specifically the particular member variable – is simply acting as data, and there is no behavior dependent on the member variable, then changing it doesn't alter behavior.  But practically speaking, the expectation of member variables is that they're used by methods in the class.

I had personally always thought of “closed for modification” in strictly a compile-time sense.  That is, “Closed” referred specifically to source code.  But as I thought about my friend's assertion more, a question occurred to me: what is SOLID good for?  Well, returning to what I arrived at in an experimental fashion – it is good for making source code aesthetically pleasing and easy to change.  And then a second question occurred to me: how does OCP contribute to that?  It contributes by allowing the reader to ignore the internals of existing code, which brings focus to the overall structure and to the overarching message the code is communicating.  This is artistically more compelling, and it makes the overall code-base easier to understand.

So I would suggest that changes at run-time as well as compile-time are important to eliminate in this respect.  And as such – OCP does in fact include run-time closure in “closed for modification”.

Having this settled in our minds, another interesting question arises.  Does “encapsulating” the change of the member variable – making it private and only modifying it through a method call – make the class “closed”?  There are two differences between setting a member variable directly and encapsulating it.  The first is that you don't actually use an assignment operator.  But this does nothing to eliminate the fact that you're changing the variable.  The second is that you might potentially limit the values the state may take on, and thus have a better idea about the nature of the behavior.  While this may be true, the fact that the state can change at an arbitrary time means that the internals of the class can no longer be ignored – since a given method may have more than one potential behavior.  This means we clearly don't have a “closed” class.
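A tiny, hypothetical illustration of the point – encapsulated or not, mutable state gives a method more than one potential behavior:

    class Greeter {
      private var prefix: String = "Hello"
      def becomeCasual(): Unit = { prefix = "Yo" }  // the "encapsulated" mutation
      def greet(name: String): String = s"$prefix, $name"
    }

    val g = new Greeter
    g.greet("Ada")    // "Hello, Ada"
    g.becomeCasual()  // state changed at an arbitrary point in time...
    g.greet("Ada")    // "Yo, Ada" -- same call, different behavior: not closed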

To take this just one step further: another thing that I've discovered is that the more SOLID I make my code, the more my OO code looks like FP code.  Because of this, I've said for some time that the two paradigms are converging.  This has been based primarily on the experimental approach I've talked about here.  But if we look at this situation with OCP, what we've basically shown is that a class isn't SOLID if it maintains state (again, barring strictly-data types from this discussion).  A class with just behavior is very close to being just a namespace with a set of functions in it.
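In Scala, that end state is almost literally expressible – a hypothetical sketch:

    // A SOLID, state-free class is scarcely distinguishable from...
    class TaxCalculator {
      def withTax(price: BigDecimal, rate: BigDecimal): BigDecimal = price * (rate + 1)
    }

    // ...a namespace with a pure function in it.
    object Tax {
      def withTax(price: BigDecimal, rate: BigDecimal): BigDecimal = price * (rate + 1)
    }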

All this being said, I believe even more strongly that the paradigms are converging.  Furthermore, I’m fairly convinced that there are underlying principles that dictate this.  Both paradigms seek to make code “easy to reason about” (to use the FP mantra) though they come at it from different angles.  But in the end, they’re shooting to engage the same mechanism – our human instinct to reason – in the most efficient way possible.  After all, what’s more aesthetically pleasing than that which fully engages the instincts it targets in us.

Do You Even ScrumMaster, Bro?

Language is rough sometimes.  It messes things up, and changes the way we perceive things in substantive ways.

This is certainly true with the title “ScrumMaster”.  It implies that it is a role – a full-time job, even – just by virtue of being a title.

What would happen if we assigned a title of “UnitTester”?  Imagine the implications – we'd start isolating individuals, having them do nothing but write unit tests; they'd start to defend their turf and prevent others from doing it.  Managers would make job postings.  Recruiters would be all like “Ya – lookin' for a Senior UnitTester to fill a really transformative role”… ok, well, something like that.

The mechanisms that scrum uses to balance all the natural, competing, and complementary forces that arise during the course of software delivery are brilliant.  But the mechanisms need to be facilitated – groups of people aren't good at maintaining momentum without someone focused on that.  Because of Conway's law – and we can get into the mechanics of this in another post – methodology and the structure of the software are very closely related.  The best-equipped individuals to facilitate these mechanisms are the members of the team that is delivering the software.  Further, having one finger on the pulse of the methodology gives every developer a sense of when it starts to head in a direction that's not aligned with the collective vision for the architecture (again, tightly connected via Conway).

So the language that we've historically used to designate the person facilitating scrum mechanisms is the very thing that's driven facilitation in a very inappropriate direction.

For action items – I got two things for you:

#1 – everyone on the team should regularly facilitate.  It’s not hard.  Just do it.

#2 – to use language to our advantage here – we can refer to the activity, “facilitating”, rather than some imaginary role “ScrumMaster”.  It will create the correct perception that this is just an activity that everyone does.  Just like unit testing.

Partial Mocking and Scala

I don’t know how to get this across with the level of excitement it brought to me as I discovered it.  But I’ll try.

I love TDD – the clarity, focus, and downright beauty it brings to software creation is absolutely one of my favorite parts of doing it.  I love the learning that it drives, and I love the foundation it lays for a masterpiece of simplicity.  It’s unparalleled as a programming technique — I only wish I would have caught on sooner.

I love Scala.  I can’t seem to find the link – but there was a good list about how to shoot yourself in the foot in various languages – in Scala, you stare at your foot for two days and then shoot with a single line of code.  The language is amazing in its ability to let you get across your meaning in a radically concise, but type-safe way.  I often find myself expressing a thorough bit of business logic in one or two lines.  Things that would have taken 20-30 lines in a typical C-derivative language.  It’s a fantastic language.

Writing Scala – I’ve gotten into a situation that finally, powerfully, crystallized in an experience this morning.  I spent probably an hour struggling to get at the most understandable, most flexible solution.

The situation is this – I have a class that’s definitely a reasonable size – 30-50 lines or so.  In this case, most of the methods were one-liners.  And they were one-liners that built on each other.  The class had one Responsibility, one “axis of change”.  I liked it as it was.

One problem that arose was that one of the methods was wrapping some “legacy code” (read: untestable – and worse, unmockable).  In my Java days this wouldn't even have arisen as a problem, because the method using the legacy code would probably have warranted its own class, and thus I could easily have just mocked that class.  As it was, I considered it.  But as I said, the class was very expressive, and said as much as it should have without saying any more.  To cut a one-line method and make it a one-line class would have bordered on ridiculous – it would have been far too fine-grained at any rate.

So what's a code-monkey to do?  Well – I tripped across this idea of a partial mock.  Which I would have derided as pointless in my Java days – and in fact the prevailing wisdom on the interwebs was that partial mocking is bad.  I don't want to do bad.  By the way, if you haven't googled it already: partial mocking is simply taking a class, mocking out some methods, but letting others keep their original behavior (including calling the now-mocked methods on the same class).

Anyway – the more I stared at the problem and balanced the two forces at play, the more I realized how right the solution really is.  In my experience, in Scala, the scenario I just laid out is common, and the only real way to solve for it is with partial mocking.
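
Here's roughly what that looks like with Mockito's spy support (the classes below are made-up stand-ins, and mockito-core needs to be on the test classpath):

    import org.mockito.Mockito.{doReturn, spy}

    // Hypothetical stand-in for the unmockable legacy code.
    object LegacyRates { def current(): Double = sys.error("talks to the real world") }

    // A small, single-responsibility class of one-liners that build on each other.
    class Pricer {
      def rate(): Double = LegacyRates.current()  // wraps the legacy call
      def adjusted(base: Double): Double = base * rate()
      def describe(base: Double): String = s"total: ${adjusted(base)}"
    }

    // The test: stub only the legacy wrapper; every other method runs for real.
    val pricer = spy(new Pricer)
    doReturn(1.5, Nil: _*).when(pricer).rate()  // Nil: _* dodges Mockito's varargs overload from Scala
    assert(pricer.describe(10.0) == "total: 15.0")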

(Big thanks to Mockito for providing this capability – so awesome!)

Why Patterns Suck


I was at a jazz jam session a few years back. I didn’t know it then – but I was learning a valuable lesson about design patterns.

Music, in its similarity to software, has a class of design patterns called scales (think Sound of Music – “doe, re, mi, fa, so…”).  Scales help musicians to have an intuitive understanding of the harmonic relationships between different pitches.  These relationships have mathematical connections that lead to a lot of simplification, and to carrying lessons learned in one context over to another context.  There is a lot of value in deeply thinking about these relationships and getting them “under your fingers” – getting an intuitive feeling for them by playing them.

The jazz solo is an interesting thing – it's a time for a musician to attempt to convey feeling to the listeners, following as few rules as possible.  Though there are a lot of underlying laws to how certain feels are created, most musicians, in order to be able to express feeling in real time, work hard to gain an intuitive grasp of those laws.  Thinking through the specific details of the performance while performing would be impractical, and it would destroy the original inspiration.  Hence, musicians have isolated patterns (such as scales) that allow them to work on that grasp when not performing.

After stepping down from my solo (which was undoubtedly life-changing for the audience) … another soloist took the stage.  He played scales.  For the whole solo.

A fellow listener leaned over and whispered in my ear about the ineffectiveness of the approach….in more colorful language.

Scales, like design patterns in any domain, are for developing intuitive understanding of the space.  They are NOT to be included for their own sake, thoughtlessly, in the actual creation.

I've seen this a couple of times, at grand scale, in software.  In the early 2000s, I can't remember how many singletons I saw cropping up all over the place (yeah, I may have been responsible for a few of those)…many, many times very unnecessarily.

These days there are a number of patterns that get picked up and used wholesale (with little thought) – MVC, Monad, Lambda, Onion, etc.  This is not how great software is written.  Like music, the domain has to be well understood, and then the thing created from that understanding.  When we pick up design patterns – whether they're scales or singletons – and, instead of using them in private to gain understanding, pass them off as creation, we are using them in exactly the most wrong (and harmful) way.

It will make our software worse – decreasing our understanding, and increasing the complexity of our software by creating code that doesn’t match the problem.


Oxygen


“I would sooner destroy a stained glass window than an artist like yourself.  However, since I can't have you follow me either…” – The Dread Pirate Roberts (shh – actually it's Wesley)

Wesley proceeds to bonk Inigo over the head (saber-whips him?) rather than killing him. It’s fortunate for Inigo that Wesley had such an appreciation for his art and the calibre of craftsman that he was fighting against. A lesser man may have gone ahead and destroyed the stained glass window.

In software it's not so dramatic (at least in my experience) – we don't find ourselves in life-or-death situations based on the level of our craft.  But an understanding and recognition of the level of our craft is an important and powerful thing.  It's almost like oxygen to our sense of contentedness with the world, to our self-worth, to the level of fun we're having crafting software.

This is important for two reasons.  Reason number one is that the craftspeople we work with share this need – and as we grow and progress in the craft, we are able to provide it for more and more people.  The embedded thing here, though, is that we are only able to provide this oxygen to people whose level of craft we understand and can truly appreciate – folks at or below our level.  And we should take every opportunity to do this – because it's good to do for our fellow human, and because it increases by untold amounts the effectiveness of those around us.

The second reason is because many times we will find ourselves going without oxygen. We need to recognize this – because if we are not careful, it can have massive negative effects on every part of our being – even including our physical health.

What can we do about this?

First – be aware that it is a thing. And be ready to remedy it when it happens. Second – know what some of the remedies are.

They include…

1) Holding your breath – we can go without oxygen for a time without permanent effects.  Know your limits, but be prepared to hold your breath.

2) Surrounding yourself with craftspeople that are at or ahead of your level.  They are the only ones that will recognize your craft – and thus the only ones that can provide the much-needed oxygen.  This is a hard one, though – it may mean leaving comfort for an ultimately better situation in a number of different ways: choosing a different team, engaging people that you don't have a natural affinity for, or leaving an organization.